Primary is a term used in organic chemistry to classify various types of compounds (e.g. alcohols, alkyl halides, amines) or reactive intermediates (e.g. alkyl radicals, carbocations). Primary central atoms are contrasted with secondary, tertiary and quaternary central atoms.
https://en.wikipedia.org/wiki/Primary_(chemistry)
A primary alcohol is an alcohol in which the hydroxy group is bonded to a primary carbon atom. It can also be defined as a molecule containing a “–CH2OH” group. [ 1 ] In contrast, a secondary alcohol has the formula “–CHROH” and a tertiary alcohol has the formula “–CR2OH”, where “R” indicates a carbon-containing group. Examples of primary alcohols include ethanol, 1-propanol, and 1-butanol. Methanol is also generally regarded as a primary alcohol, [ 2 ] [ 3 ] including by the 1911 edition of the Encyclopædia Britannica. [ 4 ]
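The primary/secondary/tertiary distinction reduces to counting the carbon-containing R groups attached to the hydroxyl-bearing carbon. A minimal sketch of that counting rule in Python (the function name and inputs are illustrative, not from the article):

```python
def classify_alcohol(r_groups: int) -> str:
    """Classify an alcohol by the number of carbon-containing (R) groups
    bonded to the hydroxyl-bearing carbon: -CH2OH, -CHROH, -CR2OH."""
    if r_groups <= 1:
        # 0 covers methanol (CH3OH), conventionally grouped with the primary alcohols
        return "primary"
    if r_groups == 2:
        return "secondary"
    if r_groups == 3:
        return "tertiary"
    raise ValueError("the hydroxyl-bearing carbon can carry at most three R groups")

print(classify_alcohol(1))  # ethanol's CH2OH carbon has one R group -> "primary"
print(classify_alcohol(3))  # e.g. tert-butanol -> "tertiary"
```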
https://en.wikipedia.org/wiki/Primary_alcohol
Primary and secondary antibodies are two groups of antibodies that are classified based on whether they bind to antigens or proteins directly or target another (primary) antibody that, in turn, is bound to an antigen or protein. A primary antibody can be very useful for the detection of biomarkers for diseases such as cancer, diabetes, Parkinson’s disease and Alzheimer’s disease, and primary antibodies are used to study the absorption, distribution, metabolism, and excretion (ADME) and multi-drug resistance (MDR) of therapeutic agents. Secondary antibodies provide signal detection and amplification along with extending the utility of an antibody through conjugation to proteins. [ 1 ] Secondary antibodies are especially efficient in immunolabeling. Secondary antibodies bind to primary antibodies, which are directly bound to the target antigen(s). In immunolabeling, the primary antibody's Fab domain binds to an antigen and exposes its Fc domain to the secondary antibody. The secondary antibody's Fab domain then binds to the primary antibody's Fc domain. Since the Fc domain is constant within the same animal class, only one type of secondary antibody is required to bind to many types of primary antibodies. This reduces cost, since only one type of secondary antibody needs to be labeled, rather than labeling various types of primary antibodies. Secondary antibodies increase sensitivity and amplify the signal because multiple secondary antibodies can bind to a single primary antibody. [ 2 ] Whole-immunoglobulin secondary antibodies are the most commonly used format, but they can be enzymatically processed to enable assay refinement. F(ab')2 fragments are generated by pepsin digestion, which removes most of the Fc fragment; this avoids recognition by Fc receptors on live cells and binding to Protein A or Protein G. [ 3 ] Papain digestion removes the entire Fc fragment, including the hinge region, and yields two monovalent Fab fragments. These can be used to block endogenous immunoglobulins on cells, tissues or other surfaces, and to block the exposed immunoglobulins in multiple labeling experiments using primary antibodies from the same species. [ 4 ] Secondary antibodies can be conjugated to enzymes such as horseradish peroxidase (HRP) or alkaline phosphatase (AP); to fluorescent dyes such as fluorescein isothiocyanate (FITC), rhodamine derivatives, or Alexa Fluor dyes; or to other molecules for use in various applications, and they are employed in many biochemical assays. [ 5 ]
https://en.wikipedia.org/wiki/Primary_and_secondary_antibodies
In organic chemistry, a primary carbon is a carbon atom which is bound to only one other carbon atom. [ 1 ] It is thus at the end of a carbon chain. In the case of an alkane, three hydrogen atoms are bound to a primary carbon (as in the terminal carbons of propane). A hydrogen atom can also be replaced by a hydroxy group ( −OH ), which would make the molecule a primary alcohol. [ 2 ]
https://en.wikipedia.org/wiki/Primary_carbon
Primary energy ( PE ) is the energy found in nature that has not been subjected to any human engineered conversion process. It encompasses energy contained in raw fuels and other forms of energy, including waste, received as input to a system . Primary energy can be non-renewable or renewable . Total primary energy supply ( TPES ) is the sum of production and imports, plus or minus stock changes, minus exports and international bunker storage. [ 3 ] The International Recommendations for Energy Statistics (IRES) prefers total energy supply ( TES ) to refer to this indicator. [ 4 ] These expressions are often used to describe the total energy supply of a national territory. Secondary energy is a carrier of energy, such as electricity, produced by conversion from a primary energy source. Primary energy is used as a measure in energy statistics in the compilation of energy balances , [ 5 ] as well as in the field of energetics. In energetics, a primary energy source (PES) refers to the energy forms required by the energy sector to generate the supply of energy carriers used by human society. [ 6 ] Primary energy counts only raw energy, not usable energy, and fails to account well for energy losses, particularly the large losses in thermal sources. It therefore generally grossly overcounts the usefulness of thermal sources, including thermal renewables, and by comparison undercounts non-thermal renewables, which produce secondary energy (electricity) directly. Primary energy sources should not be confused with the energy system components (or conversion processes) through which they are converted into energy carriers. Primary energy sources are transformed in energy conversion processes to more convenient forms of energy that can directly be used by society, such as electrical energy , refined fuels , or synthetic fuels such as hydrogen fuel . In the field of energetics , these forms are called energy carriers and correspond to the concept of "secondary energy" in energy statistics. Energy carriers are energy forms which have been transformed from primary energy sources. Electricity is one of the most common energy carriers, being transformed from various primary energy sources such as coal, oil, natural gas, and wind. Electricity is particularly useful since it has low entropy (is highly ordered) and so can be converted into other forms of energy very efficiently. District heating is another example of secondary energy. [ 8 ] According to the laws of thermodynamics , primary energy sources cannot be produced; they must be available to society to enable the production of energy carriers. [ 6 ] Conversion efficiency varies. For thermal sources, the production of electricity and mechanical energy is limited by Carnot's theorem and generates a great deal of waste heat . Other non-thermal conversions can be more efficient. For example, while wind turbines do not capture all of the wind's energy, they have a high conversion efficiency and generate very little waste heat, since wind energy is low entropy. In principle, solar photovoltaic conversion could be very efficient, but current conversion can only be done well for narrow ranges of wavelength, whereas solar thermal is also subject to Carnot efficiency limits. Hydroelectric power is also very ordered and is converted very efficiently. The amount of usable energy is the exergy of a system. Site energy is the term used in North America for the amount of end-use energy of all forms consumed at a specified location.
This can be a mix of primary energy (such as natural gas burned at the site) and secondary energy (such as electricity). Site energy is measured at the campus, building, or sub-building level and is the basis for energy charges on utility bills. [ 9 ] Source energy, in contrast, is the term used in North America for the amount of primary energy consumed in order to provide a facility's site energy. It is always greater than the site energy, as it includes all site energy and adds to it the energy lost during transmission, delivery, and conversion. [ 10 ] While source or primary energy provides a more complete picture of energy consumption, it cannot be measured directly and must be calculated using conversion factors from site energy measurements. [ 9 ] For electricity, a typical value is three units of source energy for one unit of site energy. [ 11 ] However, this can vary considerably depending on factors such as the primary energy source or fuel type, the type of power plant, and the transmission infrastructure. One full set of conversion factors is available as a technical reference from Energy STAR . [ 12 ] Either site or source energy can be an appropriate metric when comparing or analyzing energy use of different facilities. The U.S. Energy Information Administration , for example, uses primary (source) energy for its energy overviews [ 13 ] but site energy for its Commercial Building Energy Consumption Survey [ 14 ] and Residential Building Energy Consumption Survey. [ 15 ] The US Environmental Protection Agency 's Energy STAR program recommends using source energy, [ 16 ] and the US Department of Energy uses site energy in its definition of a zero net energy building . [ 17 ] Where primary energy is used to describe fossil fuels , the embodied energy of the fuel is available as thermal energy, and around two thirds is typically lost in conversion to electrical or mechanical energy. Conversion losses are much less significant when hydroelectricity, wind and solar power produce electricity, but today's UN conventions on energy statistics count the electricity made from hydroelectricity, wind and solar as the primary energy itself for these sources. One consequence of employing primary energy as an energy metric is that the contribution of hydro, wind and solar energy is underreported compared to fossil energy sources, and there is hence an international debate on how to count energy from non-thermal renewables, with many estimates having them undercounted by a factor of about three. [ 18 ] The false notion that all primary energy from thermal fossil fuel sources has to be replaced by an equivalent amount of non-thermal renewables (which is not necessary, as conversion losses do not need to be replaced) has been termed the "primary energy fallacy".
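To illustrate the site-versus-source arithmetic above, the sketch below converts metered site energy to source (primary) energy with per-fuel multipliers. The factors are placeholder assumptions in the spirit of the "three units of source energy per unit of site electricity" figure, not the official Energy STAR values:

```python
# Illustrative site-to-source conversion; the multipliers are assumptions for this example only.
SOURCE_FACTORS = {
    "electricity": 3.0,   # secondary energy: large upstream generation and transmission losses
    "natural_gas": 1.05,  # primary energy burned on site: small upstream losses
}

def source_energy(site_energy_kwh: dict) -> float:
    """Sum the site energy of each fuel, weighted by its assumed source-energy factor."""
    return sum(SOURCE_FACTORS[fuel] * kwh for fuel, kwh in site_energy_kwh.items())

site = {"electricity": 10_000, "natural_gas": 5_000}  # kWh metered at the building
print(source_energy(site))  # 35250.0 kWh of source energy for 15,000 kWh of site energy
```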
https://en.wikipedia.org/wiki/Primary_energy
Primary growth in plants is growth that takes place from the tips of roots or shoots. It leads to lengthening of roots and stems and sets the stage for organ formation. It is distinguished from secondary growth, which leads to widening. Plant growth takes place in well-defined locations. Specifically, the cell division and differentiation needed for growth occur in specialized structures called meristems . [ 1 ] [ 2 ] These consist of undifferentiated cells (meristematic cells) capable of cell division . Cells in the meristem can develop into all the other tissues and organs that occur in plants. These cells continue to divide until they differentiate and then lose the ability to divide. Thus, the meristems produce all the cells used for plant growth and function. [ 3 ] At the tip of each stem and root, an apical meristem adds cells to their length, resulting in the elongation of both. Examples of primary growth are the rapid lengthening growth of seedlings after they emerge from the soil and the penetration of roots deep into the soil. [ 4 ] Furthermore, all plant organs arise ultimately from cell divisions in the apical meristems, followed by cell expansion and differentiation. [ 1 ] In contrast, a growth process that involves thickening of stems takes place within lateral meristems that are located throughout the length of the stems. The lateral meristems of larger plants also extend into the roots. This thickening is secondary growth and is needed to give mechanical support and stability to the plant. [ 4 ] The functions of a plant's growing tips – its apical (or primary) meristems – include: lengthening through cell division and elongation; organising the development of leaves along the stem; creating platforms for the eventual development of branches along the stem; [ 4 ] and laying the groundwork for organ formation by providing a stock of undifferentiated or incompletely differentiated cells [ 5 ] that later develop into fully differentiated cells, thereby ultimately allowing the "spatial deployment of both aerial and underground organs." [ 1 ] In stems, primary growth occurs in the apical bud (the bud at the tip of the stem) and not in axillary buds (primary buds at locations of side branching). This results from apical dominance , which prevents the growth of axillary buds that form along the sides of branches and stems. Auxin (a plant hormone) produced in the apical bud inhibits the growth of axillary buds. However, if the apical bud is removed or damaged, the axillary buds begin to grow. [ 4 ] These axillary buds have developed through evolution as a form of botanical risk management – they give the plant a means to continue to grow in the face of environmental hazards. When gardeners prune the tops of branches in order to obtain a bushier plant, they are using this feature of primary growth in plants. By eliminating the apical bud, they force the axillary buds to start growing, causing the plant to put out new stems. [ 4 ] [ 5 ] Evolution has provided plants with a way of dealing with the injuries created as the root system burrows its way through soil that contains objects that injure the root buds. The tip of the root is protected by a root cap that is continuously sloughed off and replaced because it gets damaged as it pushes through the soil. Cellular division via mitosis takes place at the very tip of the root cap. The newly created cells then begin a stretching process of cellular elongation, thereby adding length to the root.
Finally, the cells undergo a process of cellular differentiation that converts them into the components of dermal, vascular or ground tissues . [ 5 ] [ 6 ] By laying the groundwork for organ differentiation and because of its role in plant growth, primary growth – coordinated with the secondary growth process – largely determines the morphology and functioning of plants. The question of how the biochemical pathways underpinning this process are regulated and coordinated is the subject of ongoing research. This research sheds light on the nature and timing of gene expression and of hormonal regulation in this process, though their roles are still not completely understood. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Primary_growth
A primary metabolite is a kind of metabolite that is directly involved in normal growth, development, and reproduction. It usually performs a physiological function in the organism (i.e. an intrinsic function). A primary metabolite is typically present in many organisms or cells. It is also referred to as a central metabolite, a term which has an even more restricted meaning (present in any autonomously growing cell or organism). Common examples of primary metabolites include lactic acid and certain amino acids . Note that primary metabolites do not show any pharmacological actions or effects. Plant growth regulators may be classified as both primary and secondary metabolites due to their role in plant growth and development. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Some of them are intermediates between primary and secondary metabolism. [ 5 ]
https://en.wikipedia.org/wiki/Primary_metabolite
Primary nutritional groups are groups of organisms , divided in relation to the nutrition mode according to the sources of energy and carbon, needed for living, growth and reproduction. The sources of energy can be light or chemical compounds; the sources of carbon can be of organic or inorganic origin. [ 1 ] The terms aerobic respiration , anaerobic respiration and fermentation ( substrate-level phosphorylation ) do not refer to primary nutritional groups, but simply reflect the different use of possible electron acceptors in particular organisms, such as O 2 in aerobic respiration, or nitrate ( NO − 3 ), sulfate ( SO 2− 4 ) or fumarate in anaerobic respiration, or various metabolic intermediates in fermentation. Phototrophs absorb light in photoreceptors and transform it into chemical energy. Chemotrophs release chemical energy. The freed energy is stored as potential energy in ATP , carbohydrates , or proteins . Eventually, the energy is used for life processes such as moving, growth and reproduction. Plants and some bacteria can alternate between phototrophy and chemotrophy, depending on the availability of light. Organotrophs use organic compounds as electron/hydrogen donors . Lithotrophs use inorganic compounds as electron/hydrogen donors. The electrons or hydrogen atoms from reducing equivalents (electron donors) are needed by both phototrophs and chemotrophs in reduction-oxidation reactions that transfer energy in the anabolic processes of ATP synthesis (in heterotrophs) or biosynthesis (in autotrophs). The electron or hydrogen donors are taken up from the environment. Organotrophic organisms are often also heterotrophic, using organic compounds as sources of both electrons and carbon. Similarly, lithotrophic organisms are often also autotrophic, using inorganic sources of electrons and CO 2 as their inorganic carbon source. Some lithotrophic bacteria can utilize diverse sources of electrons, depending on the availability of possible donors. The organic or inorganic substances (e.g., oxygen) used as electron acceptors needed in the catabolic processes of aerobic or anaerobic respiration and fermentation are not taken into account here. For example, plants are lithotrophs because they use water as their electron donor for the electron transport chain across the thylakoid membrane. Animals are organotrophs because they use organic compounds as electron donors to synthesize ATP (plants also do this, but this is not taken into account). Both use oxygen in respiration as electron acceptor, but this character is not used to define them as lithotrophs. Heterotrophs metabolize organic compounds to obtain carbon for growth and development. Autotrophs use carbon dioxide (CO 2 ) as their source of carbon. A chemoorganoheterotrophic organism is one that requires organic substrates to get its carbon for growth and development, and that obtains its energy from the decomposition of an organic compound. This group of organisms may be further subdivided according to what kind of organic substrate and compound they use. Decomposers are examples of chemoorganoheterotrophs which obtain carbon and electrons or hydrogen from dead organic matter. Herbivores and carnivores are examples of organisms that obtain carbon and electrons or hydrogen from living organic matter. Chemoorganotrophs are organisms which use the chemical energy in organic compounds as their energy source and obtain electrons or hydrogen from the organic compounds, including sugars (i.e. glucose ), fats and proteins. 
[ 2 ] Chemoheterotrophs also obtain the carbon atoms that they need for cellular function from these organic compounds. All animals are chemoheterotrophs (meaning they oxidize chemical compounds as a source of energy and carbon), as are fungi , protozoa , and some bacteria . The important distinction within this group is that chemoorganotrophs oxidize only organic compounds, while chemolithotrophs instead use the oxidation of inorganic compounds as a source of energy. [ 3 ] Examples of each nutritional group are tabulated in the literature. [ 4 ] [ 5 ] [ 6 ] [ 7 ] Some authors use -hydro- in the name when the electron source is water. The common final part -troph is from Ancient Greek τροφή trophḗ "nutrition". Some, usually unicellular, organisms can switch between different metabolic modes, for example between photoautotrophy, photoheterotrophy, and chemoheterotrophy in Chroococcales . [ 13 ] Rhodopseudomonas palustris – another example – can grow with or without oxygen and can use light, inorganic compounds or organic compounds for energy. [ 14 ] Such mixotrophic organisms may dominate their habitat due to their capability to use more resources than either photoautotrophic or organoheterotrophic organisms. [ 15 ] All sorts of combinations may exist in nature, but some are more common than others. For example, most plants are photolithoautotrophic , since they use light as an energy source, water as electron donor, and CO 2 as a carbon source. All animals and fungi are chemoorganoheterotrophic , since they use organic substances both as chemical energy sources and as electron/hydrogen donors and carbon sources. Some eukaryotic microorganisms, however, are not limited to just one nutritional mode. For example, some algae live photoautotrophically in the light, but shift to chemoorganoheterotrophy in the dark. Even higher plants retain the ability to respire heterotrophically at night on starch that was synthesised phototrophically during the day. Prokaryotes show a great diversity of nutritional categories . [ 16 ] For example, cyanobacteria and many purple sulfur bacteria can be photolithoautotrophic , using light for energy, H 2 O or sulfide as electron/hydrogen donors, and CO 2 as carbon source, whereas green non-sulfur bacteria can be photoorganoheterotrophic , using organic molecules as both electron/hydrogen donors and carbon sources. [ 8 ] [ 16 ] Many bacteria are chemoorganoheterotrophic , using organic molecules as energy, electron/hydrogen and carbon sources. [ 8 ] Some bacteria are limited to only one nutritional group, whereas others are facultative and switch from one mode to the other, depending on the nutrient sources available. [ 16 ] Sulfur-oxidizing , iron , and anammox bacteria as well as methanogens are chemolithoautotrophs , using inorganic energy, electron, and carbon sources. Chemolithoheterotrophs are rare because heterotrophy implies the availability of organic substrates, which can also serve as easy electron sources, making lithotrophy unnecessary. Photoorganoautotrophs are uncommon since their organic source of electrons/hydrogens would provide an easy carbon source, resulting in heterotrophy. Synthetic biology efforts have enabled the transformation of the trophic mode of two model microorganisms from heterotrophy to chemoorganoautotrophy.
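The group names used throughout this article are built combinatorially from the three axes described above: a prefix for the energy source (photo-/chemo-), one for the electron donor (litho-/organo-), and one for the carbon source (auto-/hetero-). A small sketch of that naming rule (the dictionary keys are illustrative labels, not standard terminology):

```python
# Compose a primary-nutritional-group name from the energy, electron-donor and carbon axes.
ENERGY = {"light": "photo", "chemical": "chemo"}
ELECTRON_DONOR = {"inorganic": "litho", "organic": "organo"}
CARBON = {"CO2": "auto", "organic": "hetero"}

def trophic_name(energy: str, donor: str, carbon: str) -> str:
    return ENERGY[energy] + ELECTRON_DONOR[donor] + CARBON[carbon] + "troph"

# Most plants: light energy, water (an inorganic donor), CO2 as carbon source.
print(trophic_name("light", "inorganic", "CO2"))       # photolithoautotroph
# Animals and fungi: chemical energy, organic donors, organic carbon.
print(trophic_name("chemical", "organic", "organic"))  # chemoorganoheterotroph
```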
https://en.wikipedia.org/wiki/Primary_nutritional_groups
An autotroph is an organism that can convert abiotic sources of energy into energy stored in organic compounds , which can be used by other organisms . Autotrophs produce complex organic compounds (such as carbohydrates , fats , and proteins ) using carbon from simple substances such as carbon dioxide, [ 1 ] generally using energy from light or inorganic chemical reactions . [ 2 ] Autotrophs do not need a living source of carbon or energy and are the producers in a food chain, such as plants on land or algae in water. Autotrophs can reduce carbon dioxide to make organic compounds for biosynthesis and as stored chemical fuel. Most autotrophs use water as the reducing agent , but some can use other hydrogen compounds such as hydrogen sulfide . The primary producers can convert the energy in the light ( phototroph and photoautotroph ) or the energy in inorganic chemical compounds ( chemotrophs or chemolithotrophs ) to build organic molecules , which is usually accumulated in the form of biomass and will be used as carbon and energy source by other organisms (e.g. heterotrophs and mixotrophs ). The photoautotrophs are the main primary producers, converting the energy of the light into chemical energy through photosynthesis , ultimately building organic molecules from carbon dioxide , an inorganic carbon source . [ 3 ] Examples of chemolithotrophs are some archaea and bacteria (unicellular organisms) that produce biomass from the oxidation of inorganic chemical compounds, these organisms are called chemoautotrophs , and are frequently found in hydrothermal vents in the deep ocean. Primary producers are at the lowest trophic level , and are the reasons why Earth sustains life to this day. [ 4 ] Autotrophs use a portion of the ATP produced during photosynthesis or the oxidation of chemical compounds to reduce NADP + to NADPH to form organic compounds. [ 5 ] Most chemoautotrophs are lithotrophs , using inorganic electron donors such as hydrogen sulfide, hydrogen gas , elemental sulfur , ammonium and ferrous oxide as reducing agents and hydrogen sources for biosynthesis and chemical energy release. Chemolithoautotrophs are microorganisms that synthesize energy through the oxidation of inorganic compounds. [ 6 ] They can sustain themselves entirely on atmospheric CO₂ and inorganic chemicals without the need for light or organic compounds. They enzymatically catalyze redox reactions using mineral substrates to generate ATP energy. [ 7 ] These substrates primarily include hydrogen, iron, nitrogen, and sulfur. Its ecological niche is often specialized to extreme environments, including deep marine hydrothermal vents, stratified sediment, and acidic hot springs. [ 8 ] Their metabolic processes play a key role in supporting microbial food webs as primary producers, and biogeochemical fluxes. The term autotroph was coined by the German botanist Albert Bernhard Frank in 1892. [ 9 ] [ 10 ] It stems from the ancient Greek word τροφή ( trophḗ ), meaning "nourishment" or "food". The first autotrophic organisms likely evolved early in the Archean but proliferated across Earth's Great Oxidation Event with an increase to the rate of oxygenic photosynthesis by cyanobacteria . [ 11 ] Photoautotrophs evolved from heterotrophic bacteria by developing photosynthesis . The earliest photosynthetic bacteria used hydrogen sulphide . Due to the scarcity of hydrogen sulphide, some photosynthetic bacteria evolved to use water in photosynthesis, leading to cyanobacteria . 
[ 12 ] Some organisms rely on organic compounds as a source of carbon , but are able to use light or inorganic compounds as a source of energy. Such organisms are mixotrophs . An organism that obtains carbon from organic compounds but obtains energy from light is called a photoheterotroph , while an organism that obtains carbon from organic compounds and energy from the oxidation of inorganic compounds is termed a chemolithoheterotroph . Evidence suggests that some fungi may also obtain energy from ionizing radiation : Such radiotrophic fungi were found growing inside a reactor of the Chernobyl nuclear power plant . [ 13 ] There are many different types of autotrophs in Earth's ecosystems. Lichens located in tundra climates are an exceptional example of a primary producer that, by mutualistic symbiosis, combines photosynthesis by algae (or additionally nitrogen fixation by cyanobacteria) with the protection of a decomposer fungus . As there are many examples of primary producers, two dominant types are coral and one of the many types of brown algae, kelp. [ 3 ] Gross primary production occurs by photosynthesis. This is the main way that primary producers get energy and make it available to other forms of life. Plants, many corals (by means of intracellular algae), some bacteria ( cyanobacteria ), and algae do this. During photosynthesis, primary producers receive energy from the sun and use it to produce sugar and oxygen. Without primary producers, organisms that are capable of producing energy on their own, the biological systems of Earth would be unable to sustain themselves. [ 3 ] Plants, along with other primary producers, produce the energy that other living beings consume, and the oxygen that they breathe. [ 3 ] It is thought that the first organisms on Earth were primary producers located on the ocean floor. [ 3 ] Autotrophs are fundamental to the food chains of all ecosystems in the world. They take energy from the environment in the form of sunlight or inorganic chemicals and use it to create fuel molecules such as carbohydrates. This mechanism is called primary production . Other organisms, called heterotrophs , take in autotrophs as food to carry out functions necessary for their life. Thus, heterotrophs – all animals , almost all fungi , as well as most bacteria and protozoa – depend on autotrophs, or primary producers , for the raw materials and fuel they need. Heterotrophs obtain energy by breaking down carbohydrates or oxidizing organic molecules (carbohydrates, fats, and proteins) obtained in food. Carnivorous organisms rely on autotrophs indirectly, as the nutrients obtained from their heterotrophic prey come from autotrophs they have consumed. Most ecosystems are supported by the autotrophic primary production of plants and cyanobacteria that capture photons initially released by the sun . Plants can only use a fraction (approximately 1%) of this energy for photosynthesis . [ 14 ] The process of photosynthesis splits a water molecule (H 2 O), releasing oxygen (O 2 ) into the atmosphere, and reducing carbon dioxide (CO 2 ) to release the hydrogen atoms that fuel the metabolic process of primary production . Plants convert and store the energy of the photons into the chemical bonds of simple sugars during photosynthesis. These plant sugars are polymerized for storage as long-chain carbohydrates , such as starch and cellulose; glucose is also used to make fats and proteins . 
When autotrophs are eaten by heterotrophs , i.e., consumers such as animals, the carbohydrates , fats , and proteins contained in them become energy sources for the heterotrophs . [ 15 ] Proteins can be made using nitrates , sulfates , and phosphates in the soil. [ 16 ] [ 17 ] Aquatic algae are a significant contributor to food webs in tropical rivers and streams. This is displayed by net primary production, a fundamental ecological process that reflects the amount of carbon that is synthesized within an ecosystem. This carbon ultimately becomes available to consumers. Net primary production displays that the rates of in-stream primary production in tropical regions are at least an order of magnitude greater than in similar temperate systems. [ 18 ] Researchers believe that the first cellular lifeforms were not heterotrophs as they would rely upon autotrophs since organic substrates delivered from space were either too heterogeneous to support microbial growth or too reduced to be fermented. Instead, they consider that the first cells were autotrophs. [ 19 ] These autotrophs might have been thermophilic and anaerobic chemolithoautotrophs that lived at deep sea alkaline hydrothermal vents. This view is supported by phylogenetic evidence – the physiology and habitat of the last universal common ancestor (LUCA) is inferred to have also been a thermophilic anaerobe with a Wood-Ljungdahl pathway, its biochemistry was replete with FeS clusters and radical reaction mechanisms. It was dependent upon Fe, H 2 , and CO 2 . [ 19 ] [ 20 ] The high concentration of K + present within the cytosol of most life forms suggests that early cellular life had Na + /H + antiporters or possibly symporters. [ 21 ] Autotrophs possibly evolved into heterotrophs when they were at low H 2 partial pressures where the first form of heterotrophy were likely amino acid and clostridial type purine fermentations. [ 22 ] It has been suggested that photosynthesis emerged in the presence of faint near infrared light emitted by hydrothermal vents. The first photochemically active pigments are then thought to be Zn-tetrapyrroles. [ 23 ]
https://en.wikipedia.org/wiki/Primary_producer
In ecology , primary production is the synthesis of organic compounds from atmospheric or aqueous carbon dioxide . It principally occurs through the process of photosynthesis , which uses light as its source of energy, but it also occurs through chemosynthesis , which uses the oxidation or reduction of inorganic chemical compounds as its source of energy. Almost all life on Earth relies directly or indirectly on primary production. The organisms responsible for primary production are known as primary producers or autotrophs , and form the base of the food chain . In terrestrial ecoregions , these are mainly plants , while in aquatic ecoregions algae predominate in this role. Ecologists distinguish primary production as either net or gross , the former accounting for losses to processes such as cellular respiration , the latter not. Primary production is the production of chemical energy in organic compounds by living organisms . The main source of such energy is sunlight , but a minute fraction of primary production is driven by lithotrophic organisms, using the chemical energy of inorganic molecules. Regardless of its source, this energy is used to synthesize complex organic molecules from simpler, inorganic compounds; carbon dioxide (CO 2 ) and water (H 2 O) are both typical and fundamental examples. The following two equations are simplified representations of photosynthesis (top) and (one form of) chemosynthesis (bottom):
CO2 + H2O + light → CH2O + O2
CO2 + O2 + 4 H2S → CH2O + 4 S + 3 H2O
In each of these cases, the end point is a polymer of reduced carbohydrate , (CH2O)n, typically molecules such as glucose (or other sugars ). These relatively simple molecules may then be used to further synthesize more complicated molecules, including proteins , complex carbohydrates , lipids , and nucleic acids , or be respired to perform work . Consumption of primary producers by heterotrophic organisms, such as animals, then transfers these organic molecules (and thus the energy stored within them) upwards in the food web , fueling all of the Earth 's living systems. [ citation needed ] Gross primary production (GPP) is the amount of chemical energy, typically expressed as carbon biomass , that primary producers create in a given length of time. Some fraction of this fixed energy is used by primary producers for cellular respiration and maintenance of existing tissues (i.e., "growth respiration" and " maintenance respiration "). [ 1 ] [ 2 ] The remaining fixed energy (i.e., mass of photosynthate) is referred to as net primary production (NPP). Net primary production is the rate at which all the autotrophs in an ecosystem produce net useful chemical energy. Net primary production is available to be directed toward growth and reproduction of primary producers. As such, it is available for consumption by herbivores. [ citation needed ] Both gross and net primary production are typically expressed in units of mass per unit area per unit time interval. In terrestrial ecosystems, mass of carbon per unit area per year (g C m −2 yr −1 ) is most often used as the unit of measurement. Note that a distinction is sometimes drawn between "production" and "productivity", with the former the quantity of material produced (g C m −2 ), the latter the rate at which it is produced (g C m −2 yr −1 ), but these terms are more typically used interchangeably. [ citation needed ] On the land, almost all primary production is now performed by vascular plants , with a small fraction coming from algae and non-vascular plants such as mosses and liverworts .
Before the evolution of vascular plants, non-vascular plants likely played a more significant role. Primary production on land is a function of many factors, but principally local hydrology and temperature (the latter covaries to an extent with light, specifically photosynthetically active radiation (PAR), the source of energy for photosynthesis). While plants cover much of the Earth's surface, they are strongly curtailed wherever temperatures are too extreme or where necessary plant resources (principally water and PAR) are limiting, such as deserts or polar regions . [ citation needed ] Water is "consumed" in plants by the processes of photosynthesis (see above) and transpiration . The latter process (which is responsible for about 90% of water use) is driven by the evaporation of water from the leaves of plants. Transpiration allows plants to transport water and mineral nutrients from the soil to growth regions, and also cools the plant. Diffusion of water vapour out of a leaf, the force that drives transpiration, is regulated by structures known as stomata . These structures also regulate the diffusion of carbon dioxide from the atmosphere into the leaf, such that decreasing water loss (by partially closing stomata) also decreases carbon dioxide gain. Certain plants use alternative forms of photosynthesis, called Crassulacean acid metabolism (CAM) and C4 . These employ physiological and anatomical adaptations to increase water-use efficiency and allow increased primary production to take place under conditions that would normally limit carbon fixation by C3 plants (the majority of plant species). [ citation needed ] As shown in the animation, the boreal forests of Canada and Russia experience high productivity in June and July and then a slow decline through fall and winter. Year-round, tropical forests in South America, Africa, Southeast Asia, and Indonesia have high productivity, not surprising with the abundant sunlight, warmth, and rainfall. However, even in the tropics, there are variations in productivity over the course of the year. For example, the Amazon basin exhibits especially high productivity from roughly August through October - the period of the area's dry season. Because the trees have access to a plentiful supply of ground water that builds up in the rainy season, they grow better when the rainy skies clear and allow more sunlight to reach the forest. [ 3 ] In a reversal of the pattern on land, in the oceans, almost all photosynthesis is performed by algae, with a small fraction contributed by vascular plants and other groups. Algae encompass a diverse range of organisms, ranging from single floating cells to attached seaweeds . They include photoautotrophs from a variety of groups. Eubacteria are important photosynthetizers in both oceanic and terrestrial ecosystems, and while some archaea are phototrophic , none are known to utilise oxygen-evolving photosynthesis. [ 4 ] A number of eukaryotes are significant contributors to primary production in the ocean, including green algae , brown algae and red algae , and a diverse group of unicellular groups. Vascular plants are also represented in the ocean by groups such as the seagrasses . Unlike terrestrial ecosystems, the majority of primary production in the ocean is performed by free-living microscopic organisms called phytoplankton . 
Larger autotrophs, such as the seagrasses and macroalgae ( seaweeds ) are generally confined to the littoral zone and adjacent shallow waters, where they can attach to the underlying substrate but still be within the photic zone . There are exceptions, such as Sargassum , but the vast majority of free-floating production takes place within microscopic organisms. The factors limiting primary production in the ocean are also very different from those on land. The availability of water, obviously, is not an issue (though its salinity can be). Similarly, temperature, while affecting metabolic rates (see Q 10 ), ranges less widely in the ocean than on land because the heat capacity of seawater buffers temperature changes, and the formation of sea ice insulates it at lower temperatures. However, the availability of light, the source of energy for photosynthesis, and mineral nutrients , the building blocks for new growth, play crucial roles in regulating primary production in the ocean. [ 5 ] Available Earth System Models suggest that ongoing ocean bio-geochemical changes could trigger reductions in ocean NPP between 3% and 10% of current values depending on the emissions scenario. [ 6 ] The sunlit zone of the ocean is called the photic zone (or euphotic zone). This is a relatively thin layer (10–100 m) near the ocean's surface where there is sufficient light for photosynthesis to occur. For practical purposes, the thickness of the photic zone is typically defined by the depth at which light reaches 1% of its surface value. Light is attenuated down the water column by its absorption or scattering by the water itself, and by dissolved or particulate material within it (including phytoplankton). Net photosynthesis in the water column is determined by the interaction between the photic zone and the mixed layer . Turbulent mixing by wind energy at the ocean's surface homogenises the water column vertically until the turbulence dissipates (creating the aforementioned mixed layer). The deeper the mixed layer, the lower the average amount of light intercepted by phytoplankton within it. The mixed layer can vary from being shallower than the photic zone, to being much deeper than the photic zone. When it is much deeper than the photic zone, this results in phytoplankton spending too much time in the dark for net growth to occur. The maximum depth of the mixed layer in which net growth can occur is called the critical depth . As long as there are adequate nutrients available, net primary production occurs whenever the mixed layer is shallower than the critical depth. Both the magnitude of wind mixing and the availability of light at the ocean's surface are affected across a range of space- and time-scales. The most characteristic of these is the seasonal cycle (caused by the consequences of the Earth's axial tilt ), although wind magnitudes additionally have strong spatial components . Consequently, primary production in temperate regions such as the North Atlantic is highly seasonal, varying with both incident light at the water's surface (reduced in winter) and the degree of mixing (increased in winter). In tropical regions, such as the gyres in the middle of the major basins , light may only vary slightly across the year, and mixing may only occur episodically, such as during large storms or hurricanes . Mixing also plays an important role in the limitation of primary production by nutrients. 
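The "1% of surface light" convention for the photic zone follows directly from the roughly exponential attenuation of light with depth. A minimal sketch, assuming a single bulk attenuation coefficient k (the symbol and the sample value are illustrative, not from the article):

```python
import math

def irradiance(surface: float, k: float, depth_m: float) -> float:
    """Exponential attenuation of light down the water column (Beer-Lambert form)."""
    return surface * math.exp(-k * depth_m)

def photic_zone_depth(k: float, fraction: float = 0.01) -> float:
    """Depth at which irradiance falls to a given fraction (1% by convention) of its surface value."""
    return -math.log(fraction) / k

# For relatively clear water with k = 0.05 per metre (an assumed value):
print(round(photic_zone_depth(0.05), 1))      # ~92.1 m, within the 10-100 m range quoted above
print(round(irradiance(1000.0, 0.05, 50.0)))  # irradiance remaining at 50 m for a 1000 W/m^2 surface value
```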
Inorganic nutrients, such as nitrate , phosphate and silicic acid are necessary for phytoplankton to synthesise their cells and cellular machinery. Because of gravitational sinking of particulate material (such as plankton , dead or fecal material), nutrients are constantly lost from the photic zone, and are only replenished by mixing or upwelling of deeper water. This is exacerbated where summertime solar heating and reduced winds increases vertical stratification and leads to a strong thermocline , since this makes it more difficult for wind mixing to entrain deeper water. Consequently, between mixing events, primary production (and the resulting processes that leads to sinking particulate material) constantly acts to consume nutrients in the mixed layer, and in many regions this leads to nutrient exhaustion and decreased mixed layer production in the summer (even in the presence of abundant light). However, as long as the photic zone is deep enough, primary production may continue below the mixed layer where light-limited growth rates mean that nutrients are often more abundant. Another factor relatively recently discovered to play a significant role in oceanic primary production is the micronutrient iron . [ 7 ] This is used as a cofactor in enzymes involved in processes such as nitrate reduction and nitrogen fixation . A major source of iron to the oceans is dust from the Earth's deserts , picked up and delivered by the wind as aeolian dust . In regions of the ocean that are distant from deserts or that are not reached by dust-carrying winds (for example, the Southern and North Pacific oceans), the lack of iron can severely limit the amount of primary production that can occur. These areas are sometimes known as HNLC (High-Nutrient, Low-Chlorophyll) regions, because the scarcity of iron both limits phytoplankton growth and leaves a surplus of other nutrients. Some scientists have suggested introducing iron to these areas as a means of increasing primary productivity and sequestering carbon dioxide from the atmosphere. [ 8 ] The methods for measurement of primary production vary depending on whether gross vs net production is the desired measure, and whether terrestrial or aquatic systems are the focus. Gross production is almost always harder to measure than net, because of respiration, which is a continuous and ongoing process that consumes some of the products of primary production (i.e. sugars) before they can be accurately measured. Also, terrestrial ecosystems are generally more difficult because a substantial proportion of total productivity is shunted to below-ground organs and tissues, where it is logistically difficult to measure. Shallow water aquatic systems can also face this problem. Scale also greatly affects measurement techniques. The rate of carbon assimilation in plant tissues, organs, whole plants, or plankton samples can be quantified by biochemically based techniques , but these techniques are decidedly inappropriate for large scale terrestrial field situations. There, net primary production is almost always the desired variable, and estimation techniques involve various methods of estimating dry-weight biomass changes over time. Biomass estimates are often converted to an energy measure, such as kilocalories, by an empirically determined conversion factor. In terrestrial ecosystems, researchers generally measure net primary production (NPP). 
Although its definition is straightforward, field measurements used to estimate productivity vary according to investigator and biome. Field estimates rarely account for below ground productivity, herbivory, turnover, litterfall , volatile organic compounds , root exudates, and allocation to symbiotic microorganisms. Biomass based NPP estimates result in underestimation of NPP due to incomplete accounting of these components. [ 9 ] [ 10 ] However, many field measurements correlate well to NPP. There are a number of comprehensive reviews of the field methods used to estimate NPP. [ 9 ] [ 10 ] [ 11 ] Estimates of ecosystem respiration , the total carbon dioxide produced by the ecosystem, can also be made with gas flux measurements . The major unaccounted pool is belowground productivity, especially production and turnover of roots. Belowground components of NPP are difficult to measure. BNPP (below-ground NPP) is often estimated based on a ratio of ANPP:BNPP (above-ground NPP:below-ground NPP) rather than direct measurements. Gross primary production can be estimated from measurements of net ecosystem exchange (NEE) of carbon dioxide made by the eddy covariance technique . During night, this technique measures all components of ecosystem respiration. This respiration is scaled to day-time values and further subtracted from NEE. [ 12 ] Most frequently, peak standing biomass is assumed to measure NPP. In systems with persistent standing litter, live biomass is commonly reported. Measures of peak biomass are more reliable if the system is predominantly annuals. However, perennial measurements could be reliable if there were a synchronous phenology driven by a strong seasonal climate. These methods may underestimate ANPP in grasslands by as much as 2 ( temperate ) to 4 ( tropical ) fold. [ 10 ] Repeated measures of standing live and dead biomass provide more accurate estimates of all grasslands, particularly those with large turnover, rapid decomposition, and interspecific variation in timing of peak biomass. Wetland productivity (marshes and fens) is similarly measured. In Europe , annual mowing makes the annual biomass increment of wetlands evident. Methods used to measure forest productivity are more diverse than those of grasslands. Biomass increment based on stand specific allometry plus litterfall is considered a suitable although incomplete accounting of above-ground net primary production (ANPP). [ 9 ] Field measurements used as a proxy for ANPP include annual litterfall, diameter or basal area increment ( DBH or BAI), and volume increment. In aquatic systems, primary production is typically measured using one of six main techniques: [ 13 ] The technique developed by Gaarder and Gran uses variations in the concentration of oxygen under different experimental conditions to infer gross primary production. Typically, three identical transparent vessels are filled with sample water and stoppered . The first is analysed immediately and used to determine the initial oxygen concentration; usually this is done by performing a Winkler titration . The other two vessels are incubated, one each in under light and darkened. After a fixed period of time, the experiment ends, and the oxygen concentration in both vessels is measured. As photosynthesis has not taken place in the dark vessel, it provides a measure of ecosystem respiration . The light vessel permits both photosynthesis and respiration, so provides a measure of net photosynthesis (i.e. 
oxygen production via photosynthesis subtract oxygen consumption by respiration). Gross primary production is then obtained by adding oxygen consumption in the dark vessel to net oxygen production in the light vessel. The technique of using 14 C incorporation (added as labelled Na 2 CO 3 ) to infer primary production is most commonly used today because it is sensitive, and can be used in all ocean environments. As 14 C is radioactive (via beta decay ), it is relatively straightforward to measure its incorporation in organic material using devices such as scintillation counters . Depending upon the incubation time chosen, net or gross primary production can be estimated. Gross primary production is best estimated using relatively short incubation times (1 hour or less), since the loss of incorporated 14 C (by respiration and organic material excretion / exudation) will be more limited. Net primary production is the fraction of gross production remaining after these loss processes have consumed some of the fixed carbon. Loss processes can range between 10 and 60% of incorporated 14 C according to the incubation period, ambient environmental conditions (especially temperature) and the experimental species used. Aside from those caused by the physiology of the experimental subject itself, potential losses due to the activity of consumers also need to be considered. This is particularly true in experiments making use of natural assemblages of microscopic autotrophs, where it is not possible to isolate them from their consumers. The methods based on stable isotopes and O 2 /Ar ratios have the advantage of providing estimates of respiration rates in the light without the need of incubations in the dark. Among them, the method of the triple oxygen isotopes and O 2 /Ar have the additional advantage of not needing incubations in closed containers and O 2 /Ar can even be measured continuously at sea using equilibrator inlet mass spectrometry (EIMS) [ 20 ] or a membrane inlet mass spectrometry (MIMS). [ 21 ] However, if results relevant to the carbon cycle are desired, it is probably better to rely on methods based on carbon (and not oxygen) isotopes. It is important to notice that the method based on carbon stable isotopes is not simply an adaptation of the classic 14 C method, but an entirely different approach that does not suffer from the problem of lack of account of carbon recycling during photosynthesis. As primary production in the biosphere is an important part of the carbon cycle , estimating it at the global scale is important in Earth system science . However, quantifying primary production at this scale is difficult because of the range of habitats on Earth, and because of the impact of weather events (availability of sunlight, water) on its variability. Using satellite -derived estimates of the Normalized Difference Vegetation Index (NDVI) for terrestrial habitats and sea-surface chlorophyll for the oceans, it is estimated that the total (photoautotrophic) primary production for the Earth was 104.9 petagrams of carbon per year (Pg C yr −1 ; equivalent to the non- SI Gt C yr −1 ). [ 22 ] Of this, 56.4 Pg C yr −1 (53.8%), was the product of terrestrial organisms, while the remaining 48.5 Pg C yr −1 , was accounted for by oceanic production. 
Scaling ecosystem-level GPP estimations based on eddy covariance measurements of net ecosystem exchange (see above) to regional and global values using spatial details of different predictor variables, such as climate variables and remotely sensed fAPAR or LAI led to a terrestrial gross primary production of 123±8 Gt carbon (NOT carbon dioxide) per year during 1998-2005 [ 23 ] In areal terms, it was estimated that land production was approximately 426 g C m −2 yr −1 (excluding areas with permanent ice cover), while that for the oceans was 140 g C m −2 yr −1 . [ 22 ] Another significant difference between the land and the oceans lies in their standing stocks - while accounting for almost half of total production, oceanic autotrophs only account for about 0.2% of the total biomass. Present day primary productivity can be estimated through a variety of methodologies including ship-board measurements, satellites and terrestrial observatories. Historical estimates have relied on biogeochemical models and geochemical proxies. One example is using barium , where barite concentrations in marine sediments rise in line with carbon export production at the surface. [ 24 ] [ 25 ] [ 26 ] Another example is using the triple oxygen isotopes of sulfate . [ 27 ] [ 28 ] [ 29 ] Together these records suggest large shifts in primary production throughout Earth's past with notable rises associated with Earth's Great Oxidation Event (approximately 2.4 to 2.0 billion years ago) and the Neoproterozoic (approximately 1.0 to 0.54 billion years ago). [ 29 ] Human societies are part of the Earth's NPP cycle but disproportionately influence it. [ 30 ] In 1996, Josep Garí designed a new indicator of sustainable development based precisely on the estimation of the human appropriation of NPP: he coined it "HANPP" (Human Appropriation of Net Primary Production) and introduced it at the inaugural conference of the European Society for Ecological Economics. [ 31 ] HANPP has since been further developed and widely applied in research on ecological economics and in policy analysis for sustainability. HANPP represents a proxy of the human impact on nature and can be applied to different geographical and global scales. The extensive degree of human use of the Planet's resources, mostly via land use , results in various levels of impact on actual NPP (NPP act ). Although in some regions, such as the Nile valley, irrigation has resulted in a considerable increase in primary production, in most of the Planet, there is a notable trend of NPP reduction due to land changes (ΔNPP LC ) of 9.6% across global land-mass. [ 32 ] In addition to this, end consumption by people raises the total HANPP [ 30 ] to 23.8% of potential vegetation (NPP 0 ). [ 32 ] It is estimated that, in 2000, 34% of the Earth's ice-free land area (12% cropland ; 22% pasture ) was devoted to human agriculture. [ 33 ] This disproportionate amount reduces the energy available to other species, having a marked impact on biodiversity , flows of carbon, water, and energy, and ecosystem services ,. [ 32 ] Scientists have questioned how large this fraction can be before these services break down. [ 34 ] Reductions in NPP are also expected in the ocean as a result of ongoing climate change, potentially impacting marine ecosystems (~10% of global biodiversity) and goods and services (1-5% of global total) that the oceans provide. [ 6 ]
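As a recap of the light-and-dark-bottle oxygen technique described earlier, the arithmetic reduces to three differences. The sketch below uses made-up oxygen concentrations and reports rates in oxygen units; converting to carbon would require an assumed photosynthetic quotient:

```python
def bottle_rates(initial_o2: float, light_o2: float, dark_o2: float, hours: float):
    """Net primary production, respiration and gross primary production from a
    light/dark bottle incubation (concentrations in mg O2 per litre)."""
    npp = (light_o2 - initial_o2) / hours          # net photosynthesis in the light bottle
    respiration = (initial_o2 - dark_o2) / hours   # oxygen consumed in the dark bottle
    gpp = npp + respiration                        # gross production = net production + respiration
    return npp, respiration, gpp

# Illustrative 6-hour incubation: initial 8.0, light bottle 9.2, dark bottle 7.4 mg O2/L
npp, r, gpp = bottle_rates(8.0, 9.2, 7.4, 6.0)
print(round(npp, 2), round(r, 2), round(gpp, 2))  # approx. 0.2, 0.1, 0.3 mg O2 per litre per hour
```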
https://en.wikipedia.org/wiki/Primary_production
A primary standard in metrology is a standard that is sufficiently accurate that it is not calibrated by or subordinate to other standards. Primary standards are defined via other quantities like length, mass and time . Primary standards are used to calibrate other standards, referred to as working standards. [ 1 ] [ 2 ] See Hierarchy of Standards . Standards are also used in analytical chemistry . Here, a primary standard is typically a reagent which can be weighed easily, and which is so pure that its weight is truly representative of the number of moles of substance contained. Desirable features of a primary standard include high purity, stability, low hygroscopicity and a high equivalent weight, though not all of these are equally essential. Primary standards selected for their high purity are used for the titration of solutions. [ 4 ] Such standards are often used to make standard solutions ; they are used in titration and are essential for determining unknown concentrations [ 1 ] or preparing working standards.
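A brief worked sketch of why easy, accurate weighing matters: the concentration of a standard solution prepared from a primary standard follows directly from the weighed mass, the molar mass and the final volume. The compound implied by the numbers is only an assumption for illustration:

```python
def standard_concentration(mass_g: float, molar_mass_g_per_mol: float, volume_l: float) -> float:
    """Molar concentration of a solution made by dissolving a weighed primary standard
    in a known final volume: c = (m / M) / V."""
    moles = mass_g / molar_mass_g_per_mol
    return moles / volume_l

# Example: 2.042 g of a primary standard with molar mass 204.2 g/mol made up to 1.000 L
print(standard_concentration(2.042, 204.2, 1.000))  # ~0.01 mol/L
```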
https://en.wikipedia.org/wiki/Primary_standard
Primary succession is the beginning step of ecological succession, in which species known as pioneer species colonize an uninhabited site; it usually occurs in an environment devoid of vegetation and other organisms. In contrast, secondary succession occurs on substrates that previously supported vegetation before an ecological disturbance . It occurs when smaller disturbances like floods, hurricanes, tornadoes, and fires destroy only the local plant life and leave soil nutrients for immediate establishment by intermediate community species. [ 1 ] In primary succession, pioneer species like lichen , algae and fungi, as well as abiotic factors like wind and water, start to "normalise" the habitat, that is, to develop soil and the other conditions needed for greater diversity to flourish. Primary succession begins on rock formations, such as volcanoes or mountains, or in a place with no organisms or soil. Primary succession leads to conditions nearer the optimum for vascular plant growth; pedogenesis (the formation of soil) and the increasing amount of shade are the most important processes. [ 2 ] These pioneer lichens, algae, and fungi are then dominated and often replaced by plants that are better adapted to less harsh conditions, including vascular plants like grasses and some shrubs that are able to live in thin, often mineral-based soils. Water and nutrient levels increase with the amount of succession exhibited. [ 3 ] The early stages of primary succession are dominated by species with small propagules (seeds and spores) which can be dispersed long distances. The early colonizers—often algae , fungi, and lichens —stabilize the substrate. Nitrogen supplies are limited in new soils, and nitrogen-fixing species tend to play an important role early in primary succession. [ 4 ] Unlike in primary succession, the species that dominate secondary succession are usually present from the start of the process, often in the soil seed bank . In some systems the successional pathways are fairly consistent and thus easy to predict. In others, there are many possible pathways. For example, nitrogen-fixing legumes alter successional trajectories. [ 5 ] Spores of lichens or fungi, the pioneer species, are spread onto bare rock. The rocks are then broken down into smaller particles. Organic matter gradually accumulates, favoring the growth of herbaceous plants like grasses , ferns and herbs . These plants further improve the habitat by creating more organic matter when they die and by providing habitats for insects and other small animals. [ 6 ] This leads to the establishment of larger vascular plants like shrubs and trees. More animals are then attracted to the area and a climax community is reached. Species diversity is also a large influence on the stages of succession, and as succession progresses further, species diversity changes with it. For example, there is far less richness and evenness of microorganisms in the very early stages of succession, but bacterial communities in late successional stages are far richer and more even. [ 7 ] This again supports the hypothesis that the greater resources present in later stages of succession are enough to support a more diverse ecosystem with many different reproductive strategies. A 2000 case study suggests that plant species composition is more important to later-successional species than simply having high plant diversity early on. [ 8 ] One example of primary succession takes place after a volcano has erupted.
The lava flows into the ocean and hardens into new land. The resulting barren land is first colonized by pioneer organisms, such as algae, which pave the way for later, less hardy plants, such as hardwood trees, by facilitating pedogenesis, especially through the biotic acceleration of weathering and the addition of organic debris to the surface regolith. An example of this is the island of Surtsey, formed in 1963 by a volcanic eruption from beneath the sea. Surtsey lies off the south coast of Iceland and is being monitored to observe primary succession in progress. About thirty plant species had become established by 2008, and more species continue to arrive at a typical rate of roughly 2–5 new species per year. [ 9 ] Primary succession also began on Mount St. Helens after a volcanic eruption destroyed the region's ecosystem. The affected region was heavily isolated, which kept the rate of primary succession rather low, as many species that excel at establishment lack the ability to disperse effectively into the new frontier. [ 10 ] The opposite is true as well: species that were poor at establishing could not survive, even with high dispersal rates. The region had almost no organic material to utilize, which was especially significant at Mount St. Helens, as its isolated location prevented succession from occurring at the periphery of the destruction site. Effective long-distance colonizers are initially rare, as they become truly effective only after an initial colonizer has helped change the region toward more suitable conditions. [ 11 ] This is why primary succession was slow in the destroyed region around Mount St. Helens. Another example is taking place on Signy Island in the South Orkney Islands of Antarctica, due to glacier retreat. Glacier retreat is becoming more common with the warming climate, and lichens and mosses are the first colonizers. A study conducted by Favero-Longo et al. found that lichen species diversity varies with the environmental conditions of the newly exposed ground and with the lichens' reproductive patterns. [ 12 ] A case study in Grand Bend, Ontario, illustrates the distinction between primary and secondary succession. The two species Juniperus virginiana and Quercus prinoides are quickly reproducing, fast-spreading plants associated with primary succession in the dunes of Grand Bend's beaches. [ 13 ] They are classified as r-selected species, with high mortality, quick reproduction, and a distinct ability to survive in harsh, nutrient-poor conditions. In contrast, ecological development after primary succession completes often leads to a more heavily k-selected population, with lower mortality and slower reproduction. At Grand Bend, this is shown by the succession toward oak-pine forest and the continued reduction of the r-selected grasses. The timescale is also relevant: the secondary succession to oak-pine forest occurs approximately 2,900 years after the initial cases of primary succession, while solely grassland-dominated dunes end around 1,600 years after primary succession begins. [ 13 ] This is important, as it reveals an intermediate period of roughly 1,300 years during which primary succession is gradually overtaken by secondary succession. 
This period is likely characterized by high species diversity, a mix of k- and r-selected species, and high community productivity. It is a well-supported principle that an intermediate mix of k- and r-dominated populations leads to high productivity and species diversity, while the secondary succession that follows leads towards climax communities with low species diversity. During this 1,300-year period, resources likely grew into a surplus, which reduced species diversity and resulted in the k-dominated oak-pine forest. It is very difficult to determine exactly what events will hinder or support the growth of a community, as shown in the following example. Very few seedlings survive for long during primary succession; only 1.7% of seedlings on an outwash plain named Skeiðarársandur in southeast Iceland survived from 2005 to 2007. [ 14 ] The rest were replaced by new colonizers, as mortality rates for r-selected species like these are extremely high. This is an important phenomenon to observe: even though population sizes may remain consistent throughout the history of a region, many of the r-selected organisms present are likely to be entirely new individuals. This is one of many factors that are highly unpredictable on the scale of ecological succession.
https://en.wikipedia.org/wiki/Primary_succession
Primate Conservation is a journal about the world's primates published by the IUCN Species Survival Commission's Primate Specialist Group. First published as a mimeographed newsletter in 1981, the journal today publishes conservation research and papers on primate species, particularly status surveys and studies on distribution and ecology. [ 1 ] Besides these regular papers, the journal has also been a significant venue for primatologists to publish descriptions of new primate species. From South America, these include the Caquetá titi ( Callicebus caquetensis ), described in 2010, [ 2 ] [ 3 ] and the Madidi titi ( Plecturocebus aureipalatii , Syn.: Callicebus aureipalatii ). [ 4 ] From the island of Madagascar, new lemur species scientifically described in the pages of the journal include the Montagne d'Ambre dwarf lemur or Andy Sabin's dwarf lemur ( Cheirogaleus andysabini ), [ 5 ] the Ankarana dwarf lemur ( Cheirogaleus shethi ), [ 6 ] and two new species of mouse lemurs ( Microcebus ). [ 7 ]
https://en.wikipedia.org/wiki/Primate_Conservation_(journal)
Primate cognition is the study of the intellectual and behavioral skills of non-human primates , particularly in the fields of psychology , behavioral biology , primatology , and anthropology . [ 1 ] Primates are capable of high levels of cognition; some make tools and use them to acquire food and for social displays; [ 2 ] [ 3 ] some have sophisticated hunting strategies requiring cooperation, influence and rank; [ 4 ] they are status conscious, manipulative and capable of deception; [ 5 ] they can recognise kin and conspecifics ; [ 6 ] [ 7 ] and they can learn to use symbols and understand aspects of human language, including some relational syntax and concepts of number and numerical sequence. [ 8 ] [ 9 ] [ 10 ] Theory of mind (also known as mental state attribution, mentalizing, or mindreading) can be defined as the "ability to track the unobservable mental states, like desires and beliefs, that guide others' actions". [ 11 ] Premack and Woodruff's 1978 article "Does the chimpanzee have a theory of mind ?" sparked a contentious debate, because of the difficulty of inferring from animal behavior the existence of thinking, of a concept of self or self-awareness , or of particular thoughts. [ 12 ] Non-human research still has a major place in this field, however, and is especially useful in illuminating which nonverbal behaviors signify components of theory of mind, and in pointing to possible stepping points in the evolution of what many claim to be a uniquely human aspect of social cognition. [ 13 ] [ 14 ] [ 15 ] While it is difficult to study human-like theory of mind and mental states in species which we do not yet describe as "minded" at all, and about whose potential mental states we have an incomplete understanding, researchers can focus on simpler components of more complex capabilities. For example, many researchers focus on animals' understanding of intention, gaze, perspective, or knowledge (or rather, what another being has seen). Part of the difficulty in this line of research is that observed phenomena can often be explained as simple stimulus-response learning, since mental states can often only be inferred from observed behavioural cues. [ 11 ] Recently, most non-human theory of mind research has focused on monkeys and great apes , which are of most interest in the study of the evolution of human social cognition. Research can be categorized into three subsections of theory of mind: attribution of intentions, attribution of knowledge (and perception), and attribution of belief. There has been some controversy over the interpretation of evidence purporting to show theory of mind ability, or its absence, in animals. Part of this debate concerns whether animals are really able to attribute cognitive abilities to another individual, or whether they are just able to read and understand behavior. [ 20 ] [ 21 ] Povinelli et al. (1990) point out that most evidence in support of great ape theory of mind involves naturalistic settings to which the apes have already adapted through past learning. Their "reinterpretation hypothesis" explains away evidence supporting the attribution of mental states to others in chimpanzees as merely evidence of risk-based learning; that is, the chimpanzees learn through experience that certain behaviors in other chimpanzees have a probability of leading to certain responses, without necessarily attributing knowledge or other intentional states to those other chimpanzees. 
They proposed testing theory of mind abilities in great apes in novel, rather than naturalistic, settings. [ 22 ] Experimenters since then, as demonstrated in Krupenye et al. (2016), have gone to extensive lengths to control for behavioral cues by placing the apes in novel settings, as suggested by Povinelli and colleagues. Research has produced substantial evidence that some non-human primates track the mental states of other individuals, such as desires and beliefs, in ways that cannot be reduced to responses to learned behavioural cues. [ 19 ] For most of the 20th century, scientists who studied primates thought of vocalizations as physical responses to emotions and external stimuli. [ 23 ] The first observations of primate vocalizations representing and referring to events in the external world were made in vervet monkeys in 1967. [ 24 ] Calls with specific intent, such as alarm calls or mating calls, have been observed in many orders of animals, including primates. Researchers began to study vervet monkey vocalizations in more depth as a result of this finding. In the seminal study on vervet monkeys, researchers played recordings of the three different types of vocalizations the monkeys use as alarm calls for leopards, eagles, and pythons. Vervet monkeys in this study responded to each call accordingly: going up trees for leopard calls, searching the sky for predators for eagle calls, and looking down for snake calls. [ 25 ] This indicated clear communication that a predator is nearby and of what kind, eliciting a specific response. The use of recorded sounds, as opposed to observations in the wild, gave researchers insight into the fact that these calls carry meaning about the external world. [ 26 ] This study also produced evidence suggesting that vervet monkeys improve in their ability to classify different predators, and to produce the appropriate alarm call for each, as they get older. Further research into this phenomenon has found that infant vervet monkeys produce alarm calls for a wider variety of species than adults: adults use alarm calls only for leopards, eagles, and pythons, while infants produce the corresponding calls for land mammals, birds, and snakes in general. The data suggest that infants learn how to use and respond to alarm calls by watching their parents. [ 27 ] A different species, the wild Campbell's monkey, has also been known to produce sequences of vocalizations that require a specific order to elicit a specific behaviour in other monkeys; changing the order of the sounds changes the resulting behaviour, or meaning, of the call. Diana monkeys were studied in a habituation-dishabituation experiment that demonstrated the ability to attend to the semantic content of calls rather than simply to their acoustic properties. Primates have also been observed responding to the alarm calls of other species. Crested guineafowl, a ground-dwelling bird, produce a single type of alarm call for all predators they detect. Diana monkeys have been observed to infer the most likely reason for the call, typically a human or a leopard, based on the situation, and to respond accordingly. If they deem a leopard the more likely predator in the vicinity, they produce their own leopard-specific alarm call; if they think it is a human, they remain silent and hidden. Non-human primates can understand call systems belonging to a different monkey species, but only to a limited extent. 
For instance, Diana monkeys and Campbell's monkeys often form mixed-species groups, but they seem to respond only to each other's danger-related calls. [ 28 ] There are many reports of primates making or using tools, both in the wild and in captivity. Chimpanzees , gorillas , orangutans , capuchin monkeys , baboons , and mandrills have all been reported as using tools. Primate tool use is varied and includes hunting (of mammals, invertebrates, [ 29 ] and fish), collecting honey, [ 30 ] processing food (nuts, fruits, vegetables and seeds), collecting water, and making weapons and shelter. Tool making is much rarer, but has been documented in orangutans, [ 31 ] bonobos and bearded capuchin monkeys. Research published in 2007 showed that chimpanzees in the Fongoli savannah sharpen sticks to use as spears when hunting, considered the first evidence of systematic weapon use in a species other than humans. [ 32 ] [ 33 ] Captive gorillas have made a variety of tools. [ 34 ] Mandrills have been observed to clean their ears with modified tools: scientists filmed a large male mandrill at Chester Zoo (UK) stripping down a twig, apparently to make it narrower, and then using the modified stick to scrape dirt from underneath its toenails. [ 35 ] [ 36 ] More recently there has been some controversy over whether tool use represents a higher level of physical cognition, in contrast with a long-held tradition of treating tool use as conferring the highest status in the animal world. One study suggests that primates may use tools in response to environmental or motivational cues, rather than through an understanding of folk physics or a capacity for future planning. [ 37 ] In 1913, Wolfgang Köhler started writing a book on problem solving titled The Mentality of Apes (1917). In this research, Köhler observed the manner in which chimpanzees solve problems, such as retrieving bananas positioned out of reach. He found that they stacked wooden crates to use as makeshift ladders in order to retrieve the food. If the bananas were placed on the ground outside of the cage, they used sticks to lengthen the reach of their arms. Köhler concluded that the chimps had not arrived at these methods through trial and error (which American psychologist Edward Thorndike had claimed to be the basis of all animal learning, through his law of effect ), but rather that they had experienced an insight (sometimes known as the Eureka effect or an "aha" experience), in which, having realized the answer, they then proceeded to carry it out in a way that was, in Köhler's words, "unwaveringly purposeful." According to numerous published studies, apes are able to answer human questions, and the vocabulary of the acculturated apes contains question words. [ 38 ] [ 39 ] [ 40 ] [ 41 ] [ 42 ] Despite these abilities, the published research literature did not include instances of apes asking questions themselves; in human-primate conversations, questions were asked exclusively by humans. Ann and David Premack designed a methodology to teach apes to ask questions in the 1970s: "In principle, interrogation can be taught either by removing an element from a familiar situation in the animal's world or by removing the element from a language that maps the animal's world. It is probable that one can induce questions by purposefully removing key elements from a familiar situation. Suppose a chimpanzee received its daily ration of food at a specific time and place, and then one day the food was not there. 
A chimpanzee trained in the interrogative might inquire 'Where is my food?' or, in Sarah's case, 'My food is?' Sarah was never put in a situation that might induce such interrogation because for our purposes it was easier to teach Sarah to answer questions". [ 43 ] A decade later, the Premacks wrote: "Though [Sarah] understood the question, she did not herself ask any questions—unlike the child who asks interminable questions, such as What that? Who making noise? When Daddy come home? Me go Granny's house? Where puppy? Toy? Sarah never delayed the departure of her trainer after her lessons by asking where the trainer was going, when she was returning, or anything else". [ 44 ] Joseph Jordania suggested that the ability to ask questions could be the crucial cognitive threshold between human and other ape mental abilities. [ 45 ] Jordania suggested that asking questions is not a matter of the ability to use syntactic structures but primarily a matter of cognitive ability. The general factor of intelligence, or g factor , is a psychometric construct that summarizes the correlations observed between an individual's scores on various measures of cognitive abilities . First described in humans, the g factor has since been identified in a number of nonhuman species. [ 46 ] Primates in particular have been the focus of g research due to their close taxonomic links to humans. A principal component analysis in a meta-analysis of 4,000 primate behaviour papers covering 62 species found that 47% of the individual variance in cognitive ability tests was accounted for by a single factor, after controlling for socio-ecological variables. [ 46 ] This value falls within the accepted range for the influence of g on IQ . [ 47 ] However, there is some debate as to whether g influences all primates equally. A 2012 study identifying individual chimpanzees that consistently performed highly on cognitive tasks found clusters of abilities instead of a general factor of intelligence. [ 48 ] This study used individual-based data, and its authors argue that the results are not directly comparable to those of previous studies that used group data and found evidence for g . Further research is required to identify the exact nature of g in primates.
https://en.wikipedia.org/wiki/Primate_cognition
Prime95 , also distributed as the command-line utility mprime for FreeBSD and Linux , is a freeware application written by George Woltman . It is the official client of the Great Internet Mersenne Prime Search (GIMPS), a volunteer computing project dedicated to searching for Mersenne primes . It is also used in overclocking to test for system stability. [ 4 ] Although most [ 5 ] of its source code is available , Prime95 is not free and open-source software because its end-user license agreement [ 3 ] states that if the software is used to find a prime qualifying for a bounty offered by the Electronic Frontier Foundation , [ 6 ] then that bounty will be claimed and distributed by GIMPS. Prime95 tests numbers for primality using the Fermat primality test (referred to internally as PRP, or "probable prime"). For much of its history it used the Lucas–Lehmer primality test , but the availability of Lucas–Lehmer assignments was deprecated in April 2021 [ 7 ] to increase search throughput. Specifically, to guard against faulty results, every Lucas–Lehmer test had to be performed twice in its entirety, while Fermat tests can be verified in a small fraction of their original run time using a proof generated during the test by Prime95. Current versions of Prime95 remain capable of Lucas–Lehmer testing for the purpose of double-checking existing Lucas–Lehmer results, and for fully verifying "probably prime" Fermat test results (which, unlike "prime" Lucas–Lehmer results, are not conclusive). To reduce the number of full-length primality tests needed, Prime95 first checks candidates for trivial compositeness by attempting to find a small factor . As of 2024, test candidates are mainly filtered using Pollard's p − 1 algorithm . Trial division is implemented, but Prime95 is rarely used for that work in practice because, due to the type of arithmetic involved, it can be done much more efficiently on a GPU . Finally, the elliptic-curve factorization method and Williams's p + 1 algorithm are implemented, but these are considered not useful at modern GIMPS testing levels and are mostly used in attempts to factor much smaller Mersenne numbers that have already undergone primality testing. GIMPS has discovered 18 new Mersenne primes since its foundation in 1996, the first 17 of which were found using Prime95. The 18th and most recent, M 136279841 , was discovered in October 2024 using an Nvidia GPU, making it the first GIMPS discovery not to have used Prime95's CPU-based computation. [ 8 ] [ 9 ] [ 10 ] Fifteen of the 17 primes discovered with Prime95 were the largest known prime number at the time of their respective discoveries, the exceptions being M 37156667 and M 42643801 , which were discovered out of order relative to the larger M 43112609 . [ 11 ] To maximize search throughput, most of Prime95 is written in hand-tuned assembly , which makes its system resource usage much greater than that of most other computer programs. Additionally, due to the high precision requirements of primality testing, the program is very sensitive to computation errors and proactively reports them. These factors make it a commonly used tool among overclockers to check the stability of a particular configuration. [ 4 ]
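For illustration, the following is a minimal Python sketch of the Lucas–Lehmer test for Mersenne numbers described above. It is only a toy version: Prime95 itself performs the same iteration with FFT-based multiplication in hand-tuned assembly rather than with Python integers.

def lucas_lehmer(p: int) -> bool:
    # Decides whether the Mersenne number M_p = 2^p - 1 is prime, for a prime exponent p.
    if p == 2:
        return True              # M_2 = 3 is prime
    m = (1 << p) - 1             # M_p = 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m      # iterate s <- s^2 - 2 (mod M_p)
    return s == 0                # M_p is prime exactly when the final residue is 0

# Exponents of the first few Mersenne primes are recovered correctly:
print([p for p in (2, 3, 5, 7, 11, 13, 17, 19, 23) if lucas_lehmer(p)])   # [2, 3, 5, 7, 13, 17, 19]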
https://en.wikipedia.org/wiki/Prime95
In mathematics , an element p of a partial order ( P , ≤) is a meet prime element when p is the principal element of a principal prime ideal . Equivalently, if P is a lattice , p is meet prime when p ≠ top and, for all a , b in P , a ∧ b ≤ p implies a ≤ p or b ≤ p . This algebra -related article is a stub . You can help Wikipedia by expanding it .
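As a small, concrete illustration of the meet-prime condition just stated, the following Python sketch brute-forces the meet-prime elements of the divisor lattice of 12, where the order is divisibility and the meet of two elements is their greatest common divisor (the choice of 12 is arbitrary and purely illustrative).

from math import gcd

def divisors(n: int):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_meet_prime(p: int, lattice, top: int) -> bool:
    # p != top, and gcd(a, b) | p implies a | p or b | p, for all a, b in the lattice
    if p == top:
        return False
    return all(p % gcd(a, b) != 0 or p % a == 0 or p % b == 0
               for a in lattice for b in lattice)

L = divisors(12)
print([p for p in L if is_meet_prime(p, L, 12)])   # prints [3, 4, 6]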
https://en.wikipedia.org/wiki/Prime_(order_theory)
The prime symbol ′ , double prime symbol ″ , triple prime symbol ‴ , and quadruple prime symbol ⁗ are used to designate units and for other purposes in mathematics , science , linguistics and music . Although the characters differ little in appearance from those of the apostrophe and single and double quotation marks , the uses of the prime symbol are quite different. [ 1 ] While an apostrophe is now often used in place of the prime, and a double quote in place of the double prime (due to the lack of prime symbols on everyday writing keyboards), such substitutions are not considered appropriate in formal materials or in typesetting . The prime symbol ′ is commonly used to represent feet (ft) , and the double prime ″ is used to represent inches (in) . [ 2 ] The triple prime ‴ , as used in watchmaking , represents a ligne ( 1 ⁄ 12 of a "French" inch, or pouce , about 2.26 millimetres or 0.089 inches). [ 3 ] Primes are also used for angles . The prime symbol ′ is used for arcminutes ( 1 ⁄ 60 of a degree), and the double prime ″ for arcseconds ( 1 ⁄ 60 of an arcminute). [ 4 ] As an angular measurement, 3° 5 ′ 30″ means 3 degrees , 5 arcminutes and 30 arcseconds. In historical astronomical works, the triple prime was used to denote " thirds " ( 1 ⁄ 60 of an arcsecond) [ 5 ] [ 6 ] and a quadruple prime ⁗ " fourths " ( 1 ⁄ 60 of a third of arc), [ a ] but modern usage has replaced this with decimal fractions of an arcsecond. Primes are sometimes used to indicate minutes, and double primes to indicate seconds of time, as in the John Cage composition 4 ′ 33″ (spoken as "four thirty-three"), a composition that lasts exactly 4 minutes 33 seconds. This notation only applies to duration, and is seldom used for durations longer than 60 minutes. [ 8 ] [ better source needed ] In mathematics, the prime is generally used to generate more variable names for similar things without resorting to subscripts, with x ′ generally meaning something related to (or derived from) x . For example, if a point is represented by the Cartesian coordinates ( x , y ) , then that point rotated, translated or reflected might be represented as ( x ′ , y ′ ) . Usually, the meaning of x ′ is defined when it is first used, but sometimes, its meaning is assumed to be understood: The prime is said to "decorate" the letter to which it applies. The same convention is adopted in functional programming , particularly in Haskell . In geometry , geography and astronomy , prime and double prime are used as abbreviations for minute and second of arc (and thus latitude , longitude , elevation and right ascension ). In physics , the prime is used to denote variables after an event. For example, v A ′ may indicate the velocity of object A after an event. It is also commonly used in relativity: the event at ( x , y , z , t ) in frame S , has coordinates ( x ′ , y ′ , z ′ , t ′ ) in frame S ′ . In chemistry , it is used to distinguish between different functional groups connected to an atom in a molecule, such as R and R ′ , representing different alkyl groups in an organic compound . The carbonyl carbon in proteins is denoted as C ′ , which distinguishes it from the other backbone carbon, the alpha carbon , which is denoted as C α . In physical chemistry , it is used to distinguish between the lower state and the upper state of a quantum number during a transition. For example, J ′ denotes the upper state of the quantum number J while J″ denotes the lower state of the quantum number J . 
[ 10 ] In molecular biology , the prime is used to denote the positions of carbon on a ring of deoxyribose or ribose . The prime distinguishes places on these two chemicals, rather than places on other parts of DNA or RNA , like phosphate groups or nucleic acids . Thus, when indicating the direction of movement of an enzyme along a string of DNA, biologists will say that it moves from the 5 ′ end to the 3 ′ end, because these carbons are on the ends of the DNA molecule. The chemistry of this reaction demands that the 3 ′ OH be extended by DNA synthesis. Prime can also be used to indicate which position a molecule has attached to, such as 5 ′ -monophosphate. The prime can be used in the transliteration of some languages , such as Slavic languages , to denote palatalization . Prime and double prime are used to transliterate Cyrillic yeri (the soft sign, ь) and yer (the hard sign, ъ). [ 11 ] However, in ISO 9 , the corresponding modifier letters are used instead. Originally, X-bar theory used a bar over syntactic units to indicate bar-levels in syntactic structure , generally rendered as an overbar . While easy to write, the bar notation proved difficult to typeset, leading to the adoption of the prime symbol to indicate a bar. (Despite the lack of bar, the unit would still be read as "X bar", as opposed to "X prime".) With contemporary development of typesetting software such as LaTeX , typesetting bars is considerably simpler; nevertheless, both prime and bar markups are accepted usages. Some X-bar notations use a double prime (standing in for a double-bar) to indicate a phrasal level, indicated in most notations by "XP". The prime symbol is used in combination with lower case letters in the Helmholtz pitch notation system to distinguish notes in different octaves from middle C upwards. Thus c represents the ⟨C⟩ below middle C, c ′ represents middle C, c″ represents the ⟨C⟩ in the octave above middle C, and c‴ the ⟨C⟩ in the octave two octaves above middle C. A combination of upper case letters and sub-prime symbols is used to represent notes in lower octaves. Thus C represents the ⟨C⟩ below the bass stave, while C ͵ represents the ⟨C⟩ in the octave below that. In some musical scores, the double prime ″ is used to indicate a length of time in seconds. It is used over a fermata 𝄐 denoting a long note or rest. [ b ] Unicode and HTML representations of the prime and related symbols are as follows. The " modifier letter prime " and "modifier letter double prime" characters are intended for linguistic purposes, such as the indication of stress or the transliteration of certain Cyrillic characters. [ citation needed ] In a context when the character set used does not include the prime or double prime character (e.g., in an online discussion context where only ASCII or ISO 8859-1 [ISO Latin 1] is expected), they are often respectively approximated by ASCII apostrophe (U+0027) or quotation mark (U+0022). LaTeX provides an oversized prime symbol, \prime ( ′ {\displaystyle \prime } ), which, when used in super- or sub-scripts, renders appropriately; e.g., f_\prime^\prime appears as f ′ ′ {\displaystyle f_{\prime }^{\prime }} . When in math mode, an apostrophe, ' , is a shortcut for a superscript prime; e.g., f' appears as f ′ {\displaystyle f'\,\!} .
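As a brief illustration of the LaTeX usage discussed above, the following minimal sketch shows the apostrophe shortcut, the explicit \prime command, and primes used as angular units; the document preamble is the bare minimum needed to compile.

\documentclass{article}
\begin{document}
% apostrophe shortcut: each ' becomes a superscript prime in math mode
$f'(x)$, $f''(x)$

% explicit \prime, usable in both superscripts and subscripts
$f^{\prime}(x)$, $f_{\prime}^{\prime}$

% primes as angular units: degrees, arcminutes, arcseconds
$3^{\circ}\,5'\,30''$
\end{document}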
https://en.wikipedia.org/wiki/Prime_(symbol)
In algebra , the prime avoidance lemma says that if an ideal I in a commutative ring R is contained in a union of finitely many prime ideals P i , then it is contained in P i for some i . There are many variations of the lemma (cf. Hochster); for example, if the ring R contains an infinite field, or a finite field of sufficiently large cardinality, then the statement follows from a fact in linear algebra : a vector space over an infinite field, or over a finite field of large cardinality, is not a finite union of its proper vector subspaces. [ 1 ] The following statement and argument are perhaps the most standard. Statement : Let E be a subset of R that is an additive subgroup of R and is multiplicatively closed . Let I 1 , I 2 , …, I n ( n ≥ 1) be ideals such that I i is a prime ideal for every i ≥ 3. If E is not contained in any of the I i , then E is not contained in the union ∪ I i . Proof by induction on n : The idea is to find an element that is in E and not in any of the I i . The base case n = 1 is trivial. Next suppose n ≥ 2. For each i , choose z i ∈ E − ∪ j ≠ i I j , where the set on the right is nonempty by the inductive hypothesis. We can assume z i ∈ I i for all i ; otherwise, some z i avoids all the I i and we are done. Put z = z 1 ⋯ z n − 1 + z n . Then z is in E but not in any of the I i . Indeed, if z is in I i for some i ≤ n − 1, then z n is in I i , a contradiction. Suppose z is in I n . Then z 1 ⋯ z n − 1 is in I n . If n is 2, we are done. If n > 2, then, since I n is a prime ideal, some z i with i < n is in I n , a contradiction. There is the following variant of prime avoidance due to E. Davis . Theorem — [ 2 ] Let A be a ring, p 1 , …, p r prime ideals, x an element of A and J an ideal. For the ideal I = xA + J , if I ⊄ p i for each i , then there exists some y in J such that x + y ∉ p i for each i . Proof: [ 3 ] We argue by induction on r . Without loss of generality, we can assume there is no inclusion relation between the p i ; otherwise we can use the inductive hypothesis. Also, if x ∉ p i for each i , then we are done; thus, without loss of generality, we can assume x ∈ p r . By the inductive hypothesis, we find a y in J such that x + y ∈ I − ( p 1 ∪ ⋯ ∪ p r − 1 ). If x + y is not in p r , we are done. Otherwise, note that J ⊄ p r (since x ∈ p r while I = xA + J ⊄ p r ) and, since p r is a prime ideal containing neither J nor any of p 1 , …, p r − 1 , we have J p 1 ⋯ p r − 1 ⊄ p r . Hence, we can choose y ′ in J p 1 ⋯ p r − 1 that is not in p r . 
Then, since x + y ∈ p r , the element x + y + y ′ has the required property: for i < r it avoids p i because y ′ ∈ p i while x + y ∉ p i , and it avoids p r because y ′ ∉ p r while x + y ∈ p r . □ Let A be a Noetherian ring , I an ideal generated by n elements and M a finite A - module such that IM ≠ M . Also, let d = depth A ( I , M ) = the maximal length of M - regular sequences in I = the length of every maximal M -regular sequence in I . Then d ≤ n ; this estimate can be shown using the above prime avoidance as follows. We argue by induction on n . Let { p 1 , …, p r } be the set of associated primes of M . If d > 0, then I ⊄ p i for each i . If I = ( y 1 , …, y n ), then, by the Davis variant of prime avoidance above, we can choose x 1 = y 1 + a 2 y 2 + ⋯ + a n y n for some a i in A such that x 1 ∉ p 1 ∪ ⋯ ∪ p r = the set of zero divisors on M . Now, I /( x 1 ) is an ideal of A /( x 1 ) generated by n − 1 elements and so, by the inductive hypothesis, depth A /( x 1 ) ( I /( x 1 ), M / x 1 M ) ≤ n − 1. The claim now follows.
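As a concrete instance of the construction z = z 1 ⋯ z n − 1 + z n in the first proof above (an illustrative example chosen here, not one taken from the cited sources), consider the polynomial ring over a field k :

\[
\begin{aligned}
& R = k[x,y], \qquad E = (x,y), \qquad I_1 = (x), \qquad I_2 = (y), \\
& z_1 = x \in E \setminus I_2, \qquad z_2 = y \in E \setminus I_1, \\
& z = z_1 + z_2 = x + y \in E, \qquad z \notin I_1 \cup I_2 .
\end{aligned}
\]

Here E is contained in neither prime ideal, and the element x + y shows that E is not contained in their union either, exactly as the lemma asserts.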
https://en.wikipedia.org/wiki/Prime_avoidance_lemma
The prime constant is the real number ρ whose n th binary digit is 1 if n is prime and 0 if n is composite or equal to 1. [ 1 ] In other words, ρ is the number whose binary expansion corresponds to the indicator function of the set of prime numbers . That is, ρ = ∑ p 1/2 p , where the sum runs over the primes p ; equivalently, ρ = ∑ n ≥ 1 χ P ( n )/2 n , where χ P is the characteristic function of the set P of prime numbers. The beginning of the decimal expansion of ρ is: ρ = 0.414682509851111660248109622 … (sequence A051006 in the OEIS ) [ 1 ] The beginning of the binary expansion is: ρ = 0.011010100010100010100010000 … 2 (sequence A010051 in the OEIS ) The number ρ is irrational . [ 2 ] To see this, suppose ρ were rational . Denote the k th digit of the binary expansion of ρ by r k . Then, since ρ is assumed rational, its binary expansion is eventually periodic, and so there exist positive integers N and k such that r n = r n + ik for all n > N and all i ∈ ℕ. Since there are infinitely many primes, we may choose a prime p > N . By definition we see that r p = 1. As noted, we have r p = r p + ik for all i ∈ ℕ. Now consider the case i = p . We have r p + i ⋅ k = r p + p ⋅ k = r p ( k + 1) = 0, since p ( k + 1) is composite because k + 1 ≥ 2. Since r p ≠ r p ( k + 1) , we see that ρ is irrational.
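A short, self-contained Python sketch shows how one might approximate ρ numerically from this definition; the cut-off at 200 binary digits is an arbitrary illustrative choice, and the simple trial-division primality test is adequate only for such small indices.

from fractions import Fraction

def is_prime(n: int) -> bool:
    # trial division; fine for the small indices used here
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_constant(bits: int) -> Fraction:
    # exact partial sum of 2^(-p) over primes p <= bits
    return sum((Fraction(1, 2 ** n) for n in range(2, bits + 1) if is_prime(n)), Fraction(0))

print(float(prime_constant(200)))   # approximately 0.41468250985111166, matching the expansion above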
https://en.wikipedia.org/wiki/Prime_constant
Prime editing is a 'search-and-replace' genome editing technology in molecular biology by which the genome of living organisms may be modified. The technology directly writes new genetic information into a targeted DNA site. It uses a fusion protein , consisting of a catalytically impaired Cas9 endonuclease fused to an engineered reverse transcriptase enzyme, together with a prime editing guide RNA (pegRNA) capable of identifying the target site and providing the new genetic information to replace the target DNA nucleotides. It mediates targeted insertions , deletions , and base-to-base conversions without the need for double-strand breaks (DSBs) or donor DNA templates. [ 1 ] The technology has received mainstream press attention due to its potential uses in medical genetics. It utilizes methodologies similar to precursor genome editing technologies, including CRISPR/Cas9 and base editors . Prime editing has been used on some animal models of genetic disease [ 2 ] [ 3 ] [ 4 ] and on plants. [ 5 ] Prime editing involves three major components. [ 1 ] Genomic editing takes place by transfecting cells with the pegRNA and the fusion protein. Transfection is often accomplished by introducing vectors into a cell. Once internalized, the fusion protein nicks the target DNA sequence, exposing a 3’-hydroxyl group that can be used to initiate (prime) the reverse transcription of the RT template portion of the pegRNA. This results in a branched intermediate that contains two DNA flaps: a 3’ flap that contains the newly synthesized (edited) sequence, and a 5’ flap that contains the dispensable, unedited DNA sequence. The 5’ flap is then cleaved by structure-specific endonucleases or 5’ exonucleases . This process allows 3’ flap ligation and creates a heteroduplex DNA composed of one edited strand and one unedited strand. The reannealed double-stranded DNA contains nucleotide mismatches at the location where editing took place. In order to correct the mismatches, the cells exploit the intrinsic mismatch repair (MMR) mechanism, with two possible outcomes: (i) the information in the edited strand is copied into the complementary strand, permanently installing the edit; (ii) the original nucleotides are re-incorporated into the edited strand, excluding the edit. [ 1 ] During the development of this technology, several modifications were made to the components in order to increase its effectiveness. [ 1 ] In the first system (PE1), a wild-type Moloney Murine Leukemia Virus (M-MLV) reverse transcriptase was fused to the C-terminus of the Cas9 H840A nickase. Detectable editing efficiencies were observed. [ 1 ] In order to enhance DNA-RNA affinity, enzyme processivity, and thermostability, five amino acid substitutions were incorporated into the M-MLV reverse transcriptase. The mutant M-MLV RT was then incorporated into PE1 to give rise to PE2 (Cas9(H840A)-M-MLV RT(D200N/L603W/T330P/T306K/W313F)). An efficiency improvement over PE1 was observed. [ 1 ] Despite its increased efficacy, the edit inserted by PE2 might still be removed due to DNA mismatch repair of the edited strand. To avoid this problem during DNA heteroduplex resolution, an additional single guide RNA (sgRNA) is introduced in the PE3 system. This sgRNA is designed to match the edited sequence introduced by the pegRNA, but not the original allele. It directs the Cas9 nickase portion of the fusion protein to nick the unedited strand at a nearby site, opposite to the original nick. 
Nicking the non-edited strand causes the cell's natural repair system to copy the information in the edited strand to the complementary strand, permanently installing the edit. [ 1 ] However, there are drawbacks to this system as nicking the unaltered strand can lead to additional undesired indels . [ 9 ] Prime editor 4 utilizes the same machinery as PE2, but also includes a plasmid that encodes for dominant negative MMR protein MLH1 . Dominant negative MLH1 is able to essentially knock out endogenous MLH1 by inhibition, thereby reducing cellular MMR response and increasing prime editing efficiency. [ 9 ] Prime editor 5 utilizes the same machinery as PE3, but also includes a plasmid that encodes for dominant negative MLH1. Like PE4, this allows for a knockdown of endogenous MMR response, increasing the efficiency of prime editing. [ 9 ] Nuclease Prime Editor uses Cas9 nuclease instead of Cas9(H840A) nickase. Unlike prime editor 3 (PE3) that requires dual-nick at both DNA strands to induce efficient prime editing, Nuclease Prime Editor requires only a single pegRNA since the single-gRNA already creates double-strand break instead of single-strand nick. [ 10 ] The "twin prime editing" (twinPE) mechanism reported in 2021 allows editing large sequences of DNA – sequences as large as genes – which addresses the method's key drawback. It uses a prime editor protein and two prime editing guide RNAs. [ 11 ] [ 12 ] [ more detail needed ] Prime editing was developed in the lab of David R. Liu at the Broad Institute and disclosed in Anzalone et al. (2019). [ 13 ] Since then prime editing and the research that produced it have received widespread scientific acclaim, [ 14 ] [ 6 ] [ 15 ] being called "revolutionary" [ 7 ] and an important part of the future of editing. [ 13 ] Prime editing efficiency can be increased with the use of engineered pegRNAs (epegRNAs). One common issue with traditional pegRNAs is degradation of the 3' end, leading to decreased PE efficiency. epegRNAs have a structured RNA motif added to their 3' end to prevent degradation. [ 16 ] Although additional research is required to improve the efficiency of prime editing, the technology offers promising scientific improvements over other gene editing tools. The prime editing technology has the potential to correct the vast majority of pathogenic alleles that cause genetic diseases, as it can repair insertions, deletions, and nucleotide substitutions. [ 1 ] The prime editing tool offers advantages over traditional gene editing technologies. CRISPR/Cas9 edits rely on non-homologous end joining (NHEJ) or homology-directed repair (HDR) to fix DNA breaks, while the prime editing system employs DNA mismatch repair . This is an important feature of this technology given that DNA repair mechanisms such as NHEJ and HDR, generate unwanted, random insertions or deletions (INDELs). These are byproducts that complicate the retrieval of cells carrying the correct edit. [ 1 ] [ 17 ] The prime system introduces single-stranded DNA breaks instead of the double-stranded DNA breaks observed in other editing tools, such as base editors. Collectively, base editing and prime editing offer complementary strengths and weaknesses for making targeted transition mutations. Base editors offer higher editing efficiency and fewer INDEL byproducts if the desired edit is a transition point mutation and a PAM sequence exists roughly 15 bases from the target site. 
However, because the prime editing technology does not require a precisely positioned PAM sequence to target a nucleotide sequence, it offers more flexibility and editing precision. Remarkably, prime editors allow all types of substitutions, transitions and transversions to be inserted into the target sequence. [ 1 ] [ 17 ] Cytosine base editing and adenine BE can already perform precise base transitions but for base transversions there have been no good options. Prime editing performs transversions with good usability. PE can insert up to 44bp, delete up to 80, or combinations thereof. [ 7 ] Because the prime system involves three separate DNA binding events (between (i) the guide sequence and the target DNA, (ii) the primer binding site and the target DNA, and (iii) the 3’ end of the nicked DNA strand and the pegRNA), it has been suggested to have fewer undesirable off-target effects than CRISPR/Cas9 . [ 1 ] [ 17 ] There is considerable interest in applying gene-editing methods to the treatment of diseases with a genetic component. However, there are multiple challenges associated with this approach. An effective treatment would require editing of a large number of target cells, which in turn would require an effective method of delivery and a great level of tissue specificity. [ 1 ] [ 18 ] As of 2019, prime editing looks promising for relatively small genetic alterations, but more research needs to be conducted to evaluate whether the technology is efficient in making larger alterations, such as targeted insertions and deletions. Larger genetic alterations would require a longer RT template, which could hinder the efficient delivery of pegRNA to target cells. Furthermore, a pegRNA containing a long RT template could become vulnerable to damage caused by cellular enzymes. [ 1 ] [ 18 ] Prime editing in plants suffers from low efficiency ranging from zero to a few percent and needs significant improvement. [ 19 ] Some of these limitations have been mitigated by recent improvements to the prime editors, [ 2 ] [ 20 ] including motifs that protect pegRNAs from degradation. [ 21 ] Further research is needed before prime editing could be used to correct pathogenic alleles in humans. [ 1 ] [ 18 ] Research has also shown that inhibition of certain MMR proteins, including MLH1 can improve prime editing efficiency. [ 9 ] Base editors used for prime editing require delivery of both a protein and RNA molecule into living cells. Introducing exogenous gene editing technologies into living organisms is a significant challenge. One potential way to introduce a base editor into animals and plants is to package the base editor into a viral capsid. The target organism can then be transduced by the virus to synthesize the base editor in vivo . Common laboratory vectors of transduction such as lentivirus cause immune responses in humans, so proposed human therapies often centered around adeno-associated virus (AAV) because AAV infections are largely asymptomatic. Unfortunately, the effective packaging capacity of AAV vectors is small, approximately 4.4kb not including inverted terminal repeats. [ 22 ] As a comparison, an SpCas9-reverse transcriptase fusion protein is 6.3kb, [ 1 ] [ 23 ] which does not even account for the lengthened guide RNA necessary for targeting and priming the site of interest. However, successful delivery in mice has been achieved by splitting the editor into two AAV vectors [ 2 ] [ 3 ] [ 4 ] [ 24 ] or by using an adenovirus, [ 3 ] which has a larger packaging capacity. 
Prime editors may be used in gene drives . A prime editor may be incorporated into the Cleaver half of a Cleave and Rescue / ClvR system. In this case it is not meant to perform a precise alteration but instead to merely disrupt. [ 25 ] PE is among recently introduced technologies which allow the transfer of single-nucleotide polymorphisms (SNPs) from one individual crop plant to another. PE is precise enough to be used to recreate an arbitrary SNP in an arbitrary target, [ 14 ] including deletions, insertions, and all 12 point mutations without also needing to perform a double-stranded break or carry a donating template. [ 6 ]
https://en.wikipedia.org/wiki/Prime_editing
In his 1557 work The Whetstone of Witte , Welsh mathematician Robert Recorde proposed an exponent notation based on prime factorisation , which remained in use up until the eighteenth century and acquired the name Arabic exponent notation . The principle of Arabic exponents was quite similar to that of Egyptian fractions ; large exponents were broken down into smaller prime numbers. Exponents of two and three kept the usual names of squares and cubes, while exponents that were prime numbers from five onwards were called sursolids . Although the terms used for defining exponents differed between authors and times, the general system was the primary exponent notation until René Descartes devised the Cartesian exponent notation, which is still used today. This algebra -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Prime_factor_exponent_notation
In mathematics , a prime geodesic on a hyperbolic surface is a primitive closed geodesic , i.e. a geodesic which is a closed curve that traces out its image exactly once. Such geodesics are called prime geodesics because, among other things, they obey an asymptotic distribution law similar to the prime number theorem . We briefly present some facts from hyperbolic geometry which are helpful in understanding prime geodesics. Consider the Poincaré half-plane model H of 2-dimensional hyperbolic geometry . Given a Fuchsian group , that is, a discrete subgroup Γ of PSL(2, R ) , Γ acts on H via linear fractional transformation . Each element of PSL(2, R ) in fact defines an isometry of H , so Γ is a group of isometries of H . There are then 3 types of transformation: hyperbolic, elliptic, and parabolic. (The loxodromic transformations are not present because we are working with real numbers .) Then an element γ of Γ has 2 distinct real fixed points if and only if γ is hyperbolic. See Classification of isometries and Fixed points of isometries for more details. Now consider the quotient surface M =Γ\ H . The following description refers to the upper half-plane model of the hyperbolic plane . This is a hyperbolic surface, in fact, a Riemann surface . Each hyperbolic element h of Γ determines a closed geodesic of Γ\ H : first, by connecting the geodesic semicircle joining the fixed points of h , we get a geodesic on H called the axis of h , and by projecting this geodesic to M , we get a geodesic on Γ\ H . This geodesic is closed because 2 points which are in the same orbit under the action of Γ project to the same point on the quotient, by definition. It can be shown that this gives a 1-1 correspondence between closed geodesics on Γ\ H and hyperbolic conjugacy classes in Γ. The prime geodesics are then those geodesics that trace out their image exactly once — algebraically, they correspond to primitive hyperbolic conjugacy classes, that is, conjugacy classes {γ} such that γ cannot be written as a nontrivial power of another element of Γ. The importance of prime geodesics comes from their relationship to other branches of mathematics, especially dynamical systems , ergodic theory , and number theory , as well as Riemann surfaces themselves. These applications often overlap among several different research fields. In dynamical systems, the closed geodesics represent the periodic orbits of the geodesic flow . In number theory, various "prime geodesic theorems" have been proved which are very similar in spirit to the prime number theorem . To be specific, we let π( x ) denote the number of closed geodesics whose norm (a function related to length) is less than or equal to x ; then π( x ) ~ x /ln( x ). This result is usually credited to Atle Selberg . In his 1970 Ph.D. thesis, Grigory Margulis proved a similar result for surfaces of variable negative curvature, while in his 1980 Ph.D. thesis, Peter Sarnak proved an analogue of Chebotarev's density theorem . There are other similarities to number theory — error estimates are improved upon, in much the same way that error estimates of the prime number theorem are improved upon. Also, there is a Selberg zeta function which is formally similar to the usual Riemann zeta function and shares many of its properties. Algebraically, prime geodesics can be lifted to higher surfaces in much the same way that prime ideals in the ring of integers of a number field can be split (factored) in a Galois extension . 
See Covering map and Splitting of prime ideals in Galois extensions for more details. Closed geodesics have been used to study Riemann surfaces; indeed, one of Riemann 's original definitions of the genus of a surface was in terms of simple closed curves. Closed geodesics have been instrumental in studying the eigenvalues of Laplacian operators , arithmetic Fuchsian groups , and Teichmüller spaces .
https://en.wikipedia.org/wiki/Prime_geodesic
In mathematics , and in particular model theory , [ 1 ] a prime model is a model that is as simple as possible. Specifically, a model P is prime if it admits an elementary embedding into any model M to which it is elementarily equivalent (that is, into any model M satisfying the same complete theory as P ). In contrast with the notion of saturated model , prime models are restricted to very specific cardinalities by the Löwenheim–Skolem theorem . If L is a first-order language with cardinality κ and T is a complete theory over L , then this theorem guarantees a model for T of cardinality max(κ, ℵ 0 ). Therefore no prime model of T can have larger cardinality, since at the very least it must be elementarily embedded in such a model. This still leaves much ambiguity in the actual cardinality; in the case of countable languages, all prime models are at most countably infinite. There is a duality between the definitions of prime and saturated models. Half of this duality is discussed in the article on saturated models , while the other half is as follows. While a saturated model realizes as many types as possible, a prime model realizes as few as possible: it is an atomic model , realizing only the types that cannot be omitted and omitting the remainder. This may be interpreted in the sense that a prime model admits "no frills": any characteristic of a model that is optional is ignored in it. For example, the model ⟨ℕ, S ⟩ is a prime model of the theory of the natural numbers ℕ with a successor operation S ; a non-prime model might be ⟨ℕ + ℤ, S ⟩, meaning that there is a copy of the full integers that lies disjoint from the original copy of the natural numbers within this model; on this added copy, the successor operation works as usual. These models are elementarily equivalent; their theory admits the following axiomatization (verbally): there is exactly one element that is not the successor of any element; no two distinct elements have the same successor; and no element lies in a cycle of successors (that is, S n ( x ) = x fails for every n > 0). These are, in fact, two of Peano's axioms , while the third follows from the first by induction (another of Peano's axioms). Any model of this theory consists of disjoint copies of the full integers in addition to the natural numbers, since once one generates a submodel from 0 all remaining points admit both predecessors and successors indefinitely. This is the outline of a proof that ⟨ℕ, S ⟩ is a prime model.
https://en.wikipedia.org/wiki/Prime_model
In number theory , the prime omega functions ω ( n ) {\displaystyle \omega (n)} and Ω ( n ) {\displaystyle \Omega (n)} count the number of prime factors of a natural number n . {\displaystyle n.} The number of distinct prime factors is assigned to ω ( n ) {\displaystyle \omega (n)} (little omega), while Ω ( n ) {\displaystyle \Omega (n)} (big omega) counts the total number of prime factors with multiplicity (see arithmetic function ). That is, if we have a prime factorization of n {\displaystyle n} of the form n = p 1 α 1 p 2 α 2 ⋯ p k α k {\displaystyle n=p_{1}^{\alpha _{1}}p_{2}^{\alpha _{2}}\cdots p_{k}^{\alpha _{k}}} for distinct primes p i {\displaystyle p_{i}} ( 1 ≤ i ≤ k {\displaystyle 1\leq i\leq k} ), then the prime omega functions are given by ω ( n ) = k {\displaystyle \omega (n)=k} and Ω ( n ) = α 1 + α 2 + ⋯ + α k {\displaystyle \Omega (n)=\alpha _{1}+\alpha _{2}+\cdots +\alpha _{k}} . These prime-factor-counting functions have many important number theoretic relations. The function ω ( n ) {\displaystyle \omega (n)} is additive and Ω ( n ) {\displaystyle \Omega (n)} is completely additive . Little omega has the formula ω ( n ) = ∑ p ∣ n 1 , {\displaystyle \omega (n)=\sum _{p\mid n}1,} where notation p | n indicates that the sum is taken over all primes p that divide n , without multiplicity. For example, ω ( 12 ) = ω ( 2 2 3 ) = 2 {\displaystyle \omega (12)=\omega (2^{2}3)=2} . Big omega has the formulas Ω ( n ) = ∑ p α ∣ n 1 = ∑ p α ∥ n α . {\displaystyle \Omega (n)=\sum _{p^{\alpha }\mid n}1=\sum _{p^{\alpha }\parallel n}\alpha .} The notation p α | n indicates that the sum is taken over all prime powers p α that divide n , while p α || n indicates that the sum is taken over all prime powers p α that divide n and such that n / p α is coprime to p α . For example, Ω ( 12 ) = Ω ( 2 2 3 1 ) = 3 {\displaystyle \Omega (12)=\Omega (2^{2}3^{1})=3} . The omegas are related by the inequalities ω ( n ) ≤ Ω( n ) and 2 ω ( n ) ≤ d ( n ) ≤ 2 Ω( n ) , where d ( n ) is the divisor-counting function . [ 1 ] If Ω( n ) = ω ( n ) , then n is squarefree and related to the Möbius function by If ω ( n ) = 1 {\displaystyle \omega (n)=1} then n {\displaystyle n} is a prime power, and if Ω ( n ) = 1 {\displaystyle \Omega (n)=1} then n {\displaystyle n} is prime. An asymptotic series for the average order of ω ( n ) {\displaystyle \omega (n)} is [ 2 ] where B 1 ≈ 0.26149721 {\displaystyle B_{1}\approx 0.26149721} is the Mertens constant and γ j {\displaystyle \gamma _{j}} are the Stieltjes constants . The function ω ( n ) {\displaystyle \omega (n)} is related to divisor sums over the Möbius function and the divisor function , including: [ 3 ] The characteristic function of the primes can be expressed by a convolution with the Möbius function : [ 4 ] A partition-related exact identity for ω ( n ) {\displaystyle \omega (n)} is given by [ 5 ] where p ( n ) {\displaystyle p(n)} is the partition function , μ ( n ) {\displaystyle \mu (n)} is the Möbius function , and the triangular sequence s n , k {\displaystyle s_{n,k}} is expanded by in terms of the infinite q-Pochhammer symbol and the restricted partition functions s o / e ( n , k ) {\displaystyle s_{o/e}(n,k)} which respectively denote the number of k {\displaystyle k} 's in all partitions of n {\displaystyle n} into an odd ( even ) number of distinct parts. [ 6 ] A continuation of ω ( n ) {\displaystyle \omega (n)} has been found, though it is not analytic everywhere. 
[ 7 ] Note that the normalized sinc {\displaystyle \operatorname {sinc} } function sinc ⁡ ( x ) = sin ⁡ ( π x ) π x {\displaystyle \operatorname {sinc} (x)={\frac {\sin(\pi x)}{\pi x}}} is used. This is closely related to the following partition identity. Consider partitions of the form where a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} are positive integers, and a > b > c {\displaystyle a>b>c} . The number of partitions is then given by 2 ω ( a ) − 2 {\displaystyle 2^{\omega (a)}-2} . [ 8 ] An average order of both ω ( n ) {\displaystyle \omega (n)} and Ω ( n ) {\displaystyle \Omega (n)} is log ⁡ log ⁡ n {\displaystyle \log \log n} . When n {\displaystyle n} is prime a lower bound on the value of the function is ω ( n ) = 1 {\displaystyle \omega (n)=1} . Similarly, if n {\displaystyle n} is primorial then the function is as large as ω ( n ) ∼ log ⁡ n log ⁡ log ⁡ n {\displaystyle \omega (n)\sim {\frac {\log n}{\log \log n}}} on average order. When n {\displaystyle n} is a power of 2 , then Ω ( n ) = log 2 ⁡ ( n ) . {\displaystyle \Omega (n)=\log _{2}(n).} [ 9 ] Asymptotics for the summatory functions over ω ( n ) {\displaystyle \omega (n)} , Ω ( n ) {\displaystyle \Omega (n)} , and powers of ω ( n ) {\displaystyle \omega (n)} are respectively [ 10 ] [ 11 ] where B 1 ≈ 0.2614972128 {\displaystyle B_{1}\approx 0.2614972128} is the Mertens constant and the constant B 2 {\displaystyle B_{2}} is defined by The sum of number of unitary divisors is ∑ n ≤ x 2 ω ( n ) = ( x log ⁡ x ) / ζ ( 2 ) + O ( x ) {\displaystyle \sum _{n\leq x}2^{\omega (n)}=(x\log x)/\zeta (2)+O(x)} [ 12 ] (sequence A064608 in the OEIS ) Other sums relating the two variants of the prime omega functions include [ 13 ] and In this example we suggest a variant of the summatory functions S ω ( x ) := ∑ n ≤ x ω ( n ) {\displaystyle S_{\omega }(x):=\sum _{n\leq x}\omega (n)} estimated in the above results for sufficiently large x {\displaystyle x} . We then prove an asymptotic formula for the growth of this modified summatory function derived from the asymptotic estimate of S ω ( x ) {\displaystyle S_{\omega }(x)} provided in the formulas in the main subsection of this article above. [ 14 ] To be completely precise, let the odd-indexed summatory function be defined as where [ ⋅ ] {\displaystyle [\cdot ]} denotes Iverson bracket . Then we have that The proof of this result follows by first observing that and then applying the asymptotic result from Hardy and Wright for the summatory function over ω ( n ) {\displaystyle \omega (n)} , denoted by S ω ( x ) := ∑ n ≤ x ω ( n ) {\displaystyle S_{\omega }(x):=\sum _{n\leq x}\omega (n)} , in the following form: The computations expanded in Chapter 22.11 of Hardy and Wright provide asymptotic estimates for the summatory function by estimating the product of these two component omega functions as We can similarly calculate asymptotic formulas more generally for the related summatory functions over so-termed factorial moments of the function ω ( n ) {\displaystyle \omega (n)} . A known Dirichlet series involving ω ( n ) {\displaystyle \omega (n)} and the Riemann zeta function is given by [ 15 ] We can also see that The function Ω ( n ) {\displaystyle \Omega (n)} is completely additive , where ω ( n ) {\displaystyle \omega (n)} is strongly additive (additive) . 
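The first summatory asymptotic quoted above (the sum of ω(n) for n ≤ x behaves like x log log x + B₁x) can be checked numerically with a short sieve. The sketch below is only a sanity check under that stated asymptotic: because the error term is lower order, agreement at x = 10⁶ is approximate rather than exact, and the sieve and the rounded constant value are my own choices.

```python
import math

def omega_sieve(limit):
    """omega(n) for all n <= limit: each prime increments every one of its multiples."""
    omega = [0] * (limit + 1)
    for p in range(2, limit + 1):
        if omega[p] == 0:                     # p has not been marked yet, so p is prime
            for m in range(p, limit + 1, p):
                omega[m] += 1
    return omega

B1 = 0.2614972128                             # Mertens constant, as quoted in the text

x = 10**6
s_omega = sum(omega_sieve(x))                 # S_omega(x): sum of omega(n) for n <= x
approx = x * math.log(math.log(x)) + B1 * x
print(s_omega, round(approx), s_omega / approx)   # the ratio should be close to 1
```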
Now we can prove a short lemma in the following form which implies exact formulas for the expansions of the Dirichlet series over both ω ( n ) {\displaystyle \omega (n)} and Ω ( n ) {\displaystyle \Omega (n)} : Lemma. Suppose that f {\displaystyle f} is a strongly additive arithmetic function defined such that its values at prime powers is given by f ( p α ) := f 0 ( p , α ) {\displaystyle f(p^{\alpha }):=f_{0}(p,\alpha )} , i.e., f ( p 1 α 1 ⋯ p k α k ) = f 0 ( p 1 , α 1 ) + ⋯ + f 0 ( p k , α k ) {\displaystyle f(p_{1}^{\alpha _{1}}\cdots p_{k}^{\alpha _{k}})=f_{0}(p_{1},\alpha _{1})+\cdots +f_{0}(p_{k},\alpha _{k})} for distinct primes p i {\displaystyle p_{i}} and exponents α i ≥ 1 {\displaystyle \alpha _{i}\geq 1} . The Dirichlet series of f {\displaystyle f} is expanded by Proof. We can see that This implies that wherever the corresponding series and products are convergent. In the last equation, we have used the Euler product representation of the Riemann zeta function . The lemma implies that for ℜ ( s ) > 1 {\displaystyle \Re (s)>1} , where P ( s ) {\displaystyle P(s)} is the prime zeta function , h ( n ) = ∑ p k | n 1 k = ∑ p k | | n H k {\displaystyle h(n)=\sum _{p^{k}|n}{\frac {1}{k}}=\sum _{p^{k}||n}{H_{k}}} where H k {\displaystyle H_{k}} is the k {\displaystyle k} -th harmonic number and ε {\displaystyle \varepsilon } is the identity for the Dirichlet convolution , ε ( n ) = ⌊ 1 n ⌋ {\displaystyle \varepsilon (n)=\lfloor {\frac {1}{n}}\rfloor } . The distribution of the distinct integer values of the differences Ω ( n ) − ω ( n ) {\displaystyle \Omega (n)-\omega (n)} is regular in comparison with the semi-random properties of the component functions. For k ≥ 0 {\displaystyle k\geq 0} , define These cardinalities have a corresponding sequence of limiting densities d k {\displaystyle d_{k}} such that for x ≥ 2 {\displaystyle x\geq 2} These densities are generated by the prime products With the absolute constant c ^ := 1 4 × ∏ p > 2 ( 1 − 1 ( p − 1 ) 2 ) − 1 {\displaystyle {\hat {c}}:={\frac {1}{4}}\times \prod _{p>2}\left(1-{\frac {1}{(p-1)^{2}}}\right)^{-1}} , the densities d k {\displaystyle d_{k}} satisfy Compare to the definition of the prime products defined in the last section of [ 16 ] in relation to the Erdős–Kac theorem .
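As a concrete check of the kind of Dirichlet series the lemma produces, the sketch below numerically compares the truncated series of ω(n)/n^s against ζ(s)P(s) at s = 2, using ζ(2) = π²/6 and a truncated prime zeta value. The identity (the Dirichlet series of ω(n) equals ζ(s)P(s) for Re(s) > 1) is standard and is one instance of what the lemma yields; the truncation cutoff is my own choice.

```python
import math

def sieve_primes(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, flag in enumerate(is_prime) if flag]

N = 200_000
primes = sieve_primes(N)

omega = [0] * (N + 1)                          # omega(n) for n <= N
for p in primes:
    for m in range(p, N + 1, p):
        omega[m] += 1

s = 2.0
lhs = sum(omega[n] / n ** s for n in range(2, N + 1))   # truncated sum of omega(n)/n^s
zeta_2 = math.pi ** 2 / 6
prime_zeta_2 = sum(p ** -s for p in primes)             # truncated prime zeta P(2)
print(lhs, zeta_2 * prime_zeta_2)                       # both approximately 0.7439
```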
https://en.wikipedia.org/wiki/Prime_omega_function
In astronomy , astrology , and geodesy , the prime vertical or first vertical [ 1 ] is the vertical circle passing east and west through the zenith of a specific location, and intersecting the horizon in its east and west points. In other words, the prime vertical is the vertical circle perpendicular to the meridian , and passes through the east and west points, zenith, and nadir of any place. [ 2 ] A heavenly body is in or on the prime vertical when it bears true east or true west—when it is at right angles to the meridian. When a body is observed on the prime vertical for the purpose of calculating the longitude, a considerable error in the latitude by dead-reckoning (used in the computation) will not appreciably affect the result. By this it will be understood that the best time to observe a longitude sight (be it sun, moon, planet, or star) is when the body is on the prime vertical; but it is to be explained that it is not always possible to obtain such an observation, for a heavenly body can only be true east or true west when its declination is of the same name as the ship's latitude and less than the latter. When the declination of the body is of the same name but greater than the ship's latitude, the body's nearest approach will be some time after it has risen; but when the declination is of a contrary nature to the latitude, the body will be the nearest to the prime vertical at its rising and setting. [ 3 ] This astronomy -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Prime_vertical
In mathematics , a primefree sequence is a sequence of integers that does not contain any prime numbers . More specifically, it usually means a sequence defined by the same recurrence relation as the Fibonacci numbers , but with different initial conditions causing all members of the sequence to be composite numbers that do not all have a common divisor . To put it algebraically, a sequence of this type is defined by an appropriate choice of two composite numbers a 1 and a 2 , such that the greatest common divisor g c d ( a 1 , a 2 ) {\displaystyle \mathrm {gcd} (a_{1},a_{2})} is equal to 1, and such that for n > 2 {\displaystyle n>2} there are no primes in the sequence of numbers calculated from the formula a n = a n − 1 + a n − 2 . {\displaystyle a_{n}=a_{n-1}+a_{n-2}.} The first primefree sequence of this type was published by Ronald Graham in 1964, and another well-known example was found by Herbert Wilf . The proof that every term of such a sequence is composite relies on the periodicity of Fibonacci-like number sequences modulo the members of a finite set of primes. For each prime p {\displaystyle p} , the positions in the sequence where the numbers are divisible by p {\displaystyle p} repeat in a periodic pattern, and different primes in the set have overlapping patterns that result in a covering set for the whole sequence. The requirement that the initial terms of a primefree sequence be coprime is necessary for the question to be non-trivial. If the initial terms share a prime factor p {\displaystyle p} (e.g., set a 1 = x p {\displaystyle a_{1}=xp} and a 2 = y p {\displaystyle a_{2}=yp} for some x {\displaystyle x} and y {\displaystyle y} both greater than 1), then by the distributive property of multiplication a 3 = ( x + y ) p {\displaystyle a_{3}=(x+y)p} and, more generally, all subsequent values in the sequence will be multiples of p {\displaystyle p} . In this case, all the numbers in the sequence will be composite, but for a trivial reason. The order of the initial terms is also important. In Paul Hoffman 's biography of Paul Erdős , The man who loved only numbers , the Wilf sequence is cited but with the initial terms switched. The resulting sequence appears primefree for the first hundred terms or so, but term 138 is the 45-digit prime 439351292910452432574786963588089477522344721 {\displaystyle 439351292910452432574786963588089477522344721} . [ 1 ] Several other primefree sequences are known, including examples with smaller initial terms.
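The covering-set argument itself is not reproduced here, but the primefree property is easy to probe computationally. The sketch below uses the pair of starting values commonly quoted for Wilf's sequence (these particular numbers are an assumption taken from the wider literature, not from the text above), verifies that they are coprime, and tests a few hundred terms of the recurrence for primality.

```python
from math import gcd
from sympy import isprime            # primality test suitable for large integers

# Starting values commonly quoted for Wilf's primefree sequence
# (assumption: taken from the literature, not from the text above).
a1 = 20615674205555510
a2 = 3794765361567513

assert gcd(a1, a2) == 1              # the coprimality requirement discussed above

a, b = a1, a2
found = None
for n in range(3, 501):              # examine terms a_3 .. a_500
    a, b = b, a + b                  # a_n = a_{n-1} + a_{n-2}
    if isprime(b):
        found = n
        break
print("first prime term:", found)    # expected: None for a genuine primefree sequence
```

Running the same loop with the two starting values exchanged is how the prime at term 138 mentioned above can be located.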
https://en.wikipedia.org/wiki/Primefree_sequence
Plymouth Routines In Multivariate Ecological Research (PRIMER) is a statistical package that is a collection of specialist univariate, multivariate, and graphical routines for analyzing species sampling data for community ecology. [ 1 ] Types of data analyzed are typically species abundance , biomass, presence/absence, and percent area cover, among others. It is primarily used in the scientific community for ecological and environmental studies. The package provides a range of multivariate routines, which can be resource intensive due to their non-parametric and permutation-based nature. It is programmed in the VB.Net environment.
https://en.wikipedia.org/wiki/Primer-E_Primer
A primer is a short, single-stranded nucleic acid used by all living organisms in the initiation of DNA synthesis . A synthetic primer may also be referred to as an oligo , short for oligonucleotide. DNA polymerase (responsible for DNA replication) enzymes are only capable of adding nucleotides to the 3’-end of an existing nucleic acid, requiring that a primer be bound to the template before DNA polymerase can begin a complementary strand. [ 1 ] DNA polymerase adds nucleotides after binding to the RNA primer and synthesizes the whole strand. Later, the RNA primers must be accurately removed and replaced with DNA nucleotides; the break that remains in the backbone, known as a nick, is then sealed by an enzyme called ligase. [ 2 ] The removal process of the RNA primer requires several enzymes, such as Fen1, Lig1, and others that work in coordination with DNA polymerase, to ensure the removal of the RNA nucleotides and the addition of DNA nucleotides. Living organisms use solely RNA primers, while laboratory techniques in biochemistry and molecular biology that require in vitro DNA synthesis (such as DNA sequencing and polymerase chain reaction ) usually use DNA primers, since they are more temperature stable. Primers can be designed in the laboratory for specific reactions such as the polymerase chain reaction (PCR). When designing PCR primers, there are specific measures that must be taken into consideration, like the melting temperature of the primers and the annealing temperature of the reaction itself. Moreover, the DNA-binding sequence of the primer has to be specifically chosen; this is commonly done using the basic local alignment search tool (BLAST), which scans the DNA and finds specific and unique regions for the primer to bind. RNA primers are used by living organisms in the initiation of synthesizing a strand of DNA . A class of enzymes called primases add a complementary RNA primer to the reading template de novo on both the leading and lagging strands . Starting from the free 3’-OH of the primer, known as the primer terminus, a DNA polymerase can extend a newly synthesized strand. The leading strand in DNA replication is synthesized in one continuous piece moving with the replication fork , requiring only an initial RNA primer to begin synthesis. In the lagging strand, the template DNA runs in the 5′→3′ direction . Since DNA polymerase cannot add bases in the 3′→5′ direction complementary to the template strand, DNA is synthesized ‘backward’ in short fragments moving away from the replication fork, known as Okazaki fragments . Unlike in the leading strand, this method results in the repeated starting and stopping of DNA synthesis, requiring multiple RNA primers. Along the DNA template, primase intersperses RNA primers that DNA polymerase uses to synthesize DNA in the 5′→3′ direction. [ 1 ] Another example of primers being used to enable DNA synthesis is reverse transcription . Reverse transcriptase is an enzyme that uses a template strand of RNA to synthesize a complementary strand of DNA. The DNA polymerase component of reverse transcriptase requires an existing 3' end to begin synthesis. [ 1 ] After the synthesis of Okazaki fragments , the RNA primers are removed (the mechanism of removal differs between prokaryotes and eukaryotes ) and replaced with new deoxyribonucleotides that fill the gaps where the RNA primer was present. DNA ligase then joins the fragmented strands together, completing the synthesis of the lagging strand.
[ 1 ] In prokaryotes, DNA polymerase I synthesizes the Okazaki fragment until it reaches the previous RNA primer. Then the enzyme simultaneously acts as a 5′→3′ exonuclease , removing primer ribonucleotides in front and adding deoxyribonucleotides behind. Both the polymerization and the excision of the RNA primer occur in the 5′→3′ direction, and polymerase I can carry out these activities simultaneously; this is known as “nick translation”. [ 3 ] Nick translation refers to the synchronized activity of polymerase I in removing the RNA primer and adding deoxyribonucleotides . The single-stranded break that remains between the fragments, called a nick, is then sealed using a DNA ligase . In eukaryotes the removal of RNA primers in the lagging strand is essential for the completion of replication. As the lagging strand is synthesized by DNA polymerase δ in the 5′→3′ direction, Okazaki fragments are formed, which are discontinuous stretches of DNA. When the DNA polymerase reaches the 5’ end of the RNA primer from the previous Okazaki fragment, it displaces the 5′ end of the primer into a single-stranded RNA flap, which is removed by nuclease cleavage. Cleavage of the RNA flaps involves three methods of primer removal. [ 4 ] The first possibility of primer removal is by creating a short flap that is directly removed by flap structure-specific endonuclease 1 (FEN-1), which cleaves the 5’ overhanging flap. This method is known as the short flap pathway of RNA primer removal. [ 5 ] The second way to cleave an RNA primer is by degrading the RNA strand using an RNase ; in eukaryotes this is RNase H2. This enzyme degrades most of the annealed RNA primer, except the nucleotides close to the 5’ end of the primer. The remaining nucleotides are displaced into a flap that is cleaved off using FEN-1. The last possible method of removing the RNA primer is known as the long flap pathway. [ 5 ] In this pathway several enzymes are recruited to elongate the flap and then cleave it off. The flaps are elongated by a 5’ to 3’ helicase , known as Pif1 . After Pif1 has lengthened the flap, the long flap is stabilized by the replication protein A (RPA). The RPA-bound DNA inhibits the activity or recruitment of FEN1, so another nuclease must be recruited to cleave the flap. [ 4 ] This second nuclease is DNA2 nuclease , which has a helicase-nuclease activity; it cleaves the long flap of the RNA primer, leaving behind a couple of nucleotides that are then cleaved by FEN1. Finally, when all the RNA primers have been removed and replaced with deoxyribonucleotides, the nicks that remain between the Okazaki fragments are sealed by an enzyme known as ligase 1 , through a process called ligation . Synthetic primers, sometimes known as oligos, are chemically synthesized oligonucleotides , usually of DNA, which can be customized to anneal to a specific site on the template DNA. In solution, the primer spontaneously hybridizes with the template through Watson-Crick base pairing before being extended by DNA polymerase. The ability to create and customize synthetic primers has proven an invaluable tool in a variety of molecular biological approaches involving the analysis of DNA. Both the Sanger chain termination method and the “ Next-Gen ” method of DNA sequencing require primers to initiate the reaction. [ 1 ] The polymerase chain reaction (PCR) uses a pair of custom primers to direct DNA elongation toward each other at opposite ends of the sequence being amplified.
These primers are typically between 18 and 24 bases in length and must code for only the specific upstream and downstream sites of the sequence being amplified. A primer that can bind to multiple regions along the DNA will amplify them all, eliminating the purpose of PCR. [ 1 ] A few criteria must be brought into consideration when designing a pair of PCR primers. Pairs of primers should have similar melting temperatures since annealing during PCR occurs for both strands simultaneously, and this shared melting temperature must not be either too much higher or lower than the reaction's annealing temperature . A primer with a T m (melting temperature) too much higher than the reaction's annealing temperature may mishybridize and extend at an incorrect location along the DNA sequence. A T m significantly lower than the annealing temperature may fail to anneal and extend at all. Additionally, primer sequences need to be chosen to uniquely select for a region of DNA, avoiding the possibility of hybridization to a similar sequence nearby. A commonly used method for selecting a primer site is BLAST search, whereby all the possible regions to which a primer may bind can be seen. Both the nucleotide sequence as well as the primer itself can be BLAST searched. The free NCBI tool Primer-BLAST integrates primer design and BLAST search into one application, [ 6 ] as do commercial software products such as ePrime and Beacon Designer . Computer simulations of theoretical PCR results ( Electronic PCR ) may be performed to assist in primer design by giving melting and annealing temperatures, etc. [ 7 ] As of 2014, many online tools are freely available for primer design, some of which focus on specific applications of PCR. Primers with high specificity for a subset of DNA templates in the presence of many similar variants can be designed using by some software (e.g. DECIPHER [ 8 ] ) or be developed independently for a specific group of animals. [ 9 ] Selecting a specific region of DNA for primer binding requires some additional considerations. Regions high in mononucleotide and dinucleotide repeats should be avoided, as loop formation can occur and contribute to mishybridization. Primers should not easily anneal with other primers in the mixture; this phenomenon can lead to the production of 'primer dimer' products contaminating the end solution. Primers should also not anneal strongly to themselves, as internal hairpins and loops could hinder the annealing with the template DNA. When designing primers, additional nucleotide bases can be added to the back ends of each primer, resulting in a customized cap sequence on each end of the amplified region. One application for this practice is for use in TA cloning , a special subcloning technique similar to PCR, where efficiency can be increased by adding AG tails to the 5′ and the 3′ ends. [ 10 ] Some situations may call for the use of degenerate primers. These are mixtures of primers that are similar, but not identical. These may be convenient when amplifying the same gene from different organisms , as the sequences are probably similar but not identical. This technique is useful because the genetic code itself is degenerate , meaning several different codons can code for the same amino acid . This allows different organisms to have a significantly different genetic sequence that code for a highly similar protein. 
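As a rough illustration of the melting-temperature criterion, the sketch below computes GC content and a Wallace-rule estimate (roughly 2 °C per A/T base and 4 °C per G/C base) for a hypothetical primer pair. The primer sequences, the function names and the rule of thumb itself are illustrative assumptions; real primer-design tools such as Primer-BLAST rely on more accurate nearest-neighbour thermodynamic models.

```python
def gc_content(primer: str) -> float:
    """Fraction of G and C bases in the primer."""
    p = primer.upper()
    return (p.count("G") + p.count("C")) / len(p)

def wallace_tm(primer: str) -> float:
    """Rough melting temperature by the Wallace rule: 2*(A+T) + 4*(G+C) degrees C.
    Only a rule of thumb for short oligos, not a substitute for thermodynamic models."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

# Hypothetical 20-mer primer pair (illustrative sequences only)
forward = "ATGGCTAGCTCAGTCCTAGG"
reverse = "GGATCCTTAGCGGCCGCTTA"

for name, p in (("forward", forward), ("reverse", reverse)):
    print(f"{name}: length={len(p)} GC={gc_content(p):.0%} Tm~{wallace_tm(p)} C")
# A pair is usually considered workable when the two Tm values are within a few
# degrees of each other and compatible with the reaction's annealing temperature.
```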
For this reason, degenerate primers are also used when primer design is based on protein sequence , as the specific sequence of codons are not known. Therefore, primer sequence corresponding to the amino acid isoleucine might be "ATH", where A stands for adenine , T for thymine , and H for adenine , thymine , or cytosine , according to the genetic code for each codon , using the IUPAC symbols for degenerate bases . Degenerate primers may not perfectly hybridize with a target sequence, which can greatly reduce the specificity of the PCR amplification. Degenerate primers are widely used and extremely useful in the field of microbial ecology . They allow for the amplification of genes from thus far uncultivated microorganisms or allow the recovery of genes from organisms where genomic information is not available. Usually, degenerate primers are designed by aligning gene sequencing found in GenBank . Differences among sequences are accounted for by using IUPAC degeneracies for individual bases. PCR primers are then synthesized as a mixture of primers corresponding to all permutations of the codon sequence.
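The degeneracy in a primer such as "ATH" can be expanded mechanically from the IUPAC ambiguity codes. The sketch below (my own helper, using the standard IUPAC nucleotide table) lists every concrete sequence a degenerate primer encodes, which is essentially the mixture that is synthesized when a degenerate primer is ordered.

```python
from itertools import product

# Standard IUPAC nucleotide ambiguity codes
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand_degenerate(primer: str):
    """All concrete DNA sequences encoded by a degenerate primer."""
    choices = [IUPAC[base] for base in primer.upper()]
    return ["".join(p) for p in product(*choices)]

# "ATH" from the text: the three isoleucine codons
print(expand_degenerate("ATH"))                 # ['ATA', 'ATC', 'ATT']
# The degeneracy of a longer primer is the product of the per-position choices
print(len(expand_degenerate("GARTAYGGNATH")))   # hypothetical primer: 2*2*4*3 = 48 variants
```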
https://en.wikipedia.org/wiki/Primer_(molecular_biology)
A primer dimer ( PD ) is a potential by-product in the polymerase chain reaction (PCR), a common biotechnological method. As its name implies, a PD consists of two primer molecules that have attached ( hybridized ) to each other because of strings of complementary bases in the primers. As a result, the DNA polymerase amplifies the PD, leading to competition for PCR reagents, thus potentially inhibiting amplification of the DNA sequence targeted for PCR amplification. In quantitative PCR , PDs may interfere with accurate quantification. A primer dimer is formed and amplified in three steps. In the first step, two primers anneal at their respective 3' ends (step I in the figure). If this construct is stable enough, the DNA polymerase will bind and extend the primers according to the complementary sequence (step II in the figure). An important factor contributing to the stability of the construct in step I is a high GC-content at the 3' ends and length of the overlap. The third step occurs in the next cycle, when a single strand of the product of step II is used as a template to which fresh primers anneal leading to synthesis of more PD product. [ 1 ] Primer dimers may be visible after gel electrophoresis of the PCR product. PDs in ethidium bromide -stained gels are typically seen as a 30-50 base-pair (bp) band or smear of moderate to high intensity and distinguishable from the band of the target sequence, which is typically longer than 50 bp. In quantitative PCR , PDs may be detected by melting curve analysis with intercalating dyes, such as SYBR Green I , a nonspecific dye for detection of double-stranded DNA. Because they usually consist of short sequences, the PDs denature at a lower temperature than the target sequence and hence can be distinguished by their melting-curve characteristics. One approach to prevent PDs consists of physical-chemical optimization of the PCR system, i.e. changing the concentrations of primers, magnesium chloride , nucleotides , ionic strength and temperature of the reaction. This method is somewhat limited by the physical-chemical characteristics that also determine the efficiency of amplification of the target sequence in the PCR. Therefore, reducing PDs formation may also result in reduced PCR efficiency. To overcome this limitation, other methods aim to reduce the formation of PDs only, including primer design, and use of different PCR enzyme systems or reagents. [ citation needed ] Primer-design software uses algorithms that check for the potential of DNA secondary structure formation and annealing of primers to itself or within primer pairs. Physical parameters that are taken into account by the software are potential self-complementarity and GC content of the primers; similar melting temperatures of the primers; and absence of secondary structures, such as stem-loops , in the DNA target sequence. [ 2 ] Because primers are designed to have low complementarity to each other, they may anneal (step I in the figure) only at low temperature, e.g. room temperature, such as during the preparation of the reaction mixture. Although DNA polymerases used in PCR are most active around 70 °C, they have some polymerizing activity also at lower temperatures, which can cause DNA synthesis from primers after annealing to each other. 
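A very naive version of the step I check, namely whether the 3' ends of two primers can base-pair with each other over a short window, can be written in a few lines. The sketch below is illustrative only (the window length, sequences and function name are my own assumptions); real design software scores candidate dimers thermodynamically rather than requiring a perfect short overlap.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def dimer_prone(primer_a: str, primer_b: str, window: int = 5) -> bool:
    """True if the last `window` bases of the two primers can pair in antiparallel
    orientation, i.e. one 3' end is the reverse complement of the other.
    This mimics step I of the dimer-formation scheme in a deliberately crude way."""
    a_end = primer_a.upper()[-window:]
    b_end = primer_b.upper()[-window:]
    return all(COMPLEMENT[x] == y for x, y in zip(a_end, reversed(b_end)))

# Hypothetical primer pair whose 3' ends are mutually complementary over 5 bases
fwd = "ATGCTAGGACTCCTAGGCAT"
rev = "TTACGCGATCGAACATGCC"
print(dimer_prone(fwd, rev))                     # True  -> prone to primer-dimer formation
print(dimer_prone(fwd, "GGATCCTTAGCGGCCGCTTA"))  # False -> no perfect 5-base 3' overlap
```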
[ 3 ] Several methods have been developed to prevent PD formation until the reaction reaches its working temperature of 60–70 °C; these include initial inhibition of the DNA polymerase, or physical separation of the reaction components, until the reaction mixture reaches the higher temperatures. These methods are referred to as hot-start PCR . Wax : in this method the enzyme is spatially separated from the reaction mixture by wax that melts when the reaction reaches high temperature. [ 4 ] Slow release of magnesium : DNA polymerase requires magnesium ions for activity, [ 5 ] so the magnesium is chemically separated from the reaction by binding to a chemical compound, and is released into the solution only at high temperature. [ 6 ] Non-covalent binding of inhibitor : in this method a peptide , antibody [ 7 ] or aptamer [ 8 ] is non-covalently bound to the enzyme at low temperature and inhibits its activity. After an incubation of 1–5 minutes at 95 °C, the inhibitor is released and the reaction starts. Cold-sensitive Taq polymerase : a modified DNA polymerase with almost no activity at low temperature. [ 9 ] Chemical modification : in this method a small molecule is covalently bound to the side chain of an amino acid in the active site of the DNA polymerase. The small molecule is released from the enzyme by incubation of the reaction mixture for 10–15 minutes at 95 °C. Once the small molecule is released, the enzyme is activated. [ 10 ] Another approach to prevent or reduce PD formation is by modifying the primers so that annealing with themselves or each other does not cause extension. HANDS ( H omo-Tag A ssisted N on- D imer S ystem [ 11 ] ): a nucleotide tail, complementary to the 3' end of the primer, is added to the 5' end of the primer. Because of the close proximity of the 5' tail, it anneals to the 3' end of the primer. The result is a stem-loop primer that excludes annealing involving shorter overlaps, but permits annealing of the primer to its fully complementary sequence in the target. Chimeric primers : some DNA bases in the primer are replaced with RNA bases, creating a chimeric sequence . The melting temperature of a chimeric sequence with another chimeric sequence is lower than that of a chimeric sequence with DNA. This difference enables setting the annealing temperature such that the primer will anneal to its target sequence, but not to other chimeric primers. [ 12 ] Blocked-cleavable primers : a method known as RNase H-dependent PCR (rhPCR) [ 13 ] utilizes a thermostable RNase HII to remove a blocking group from the PCR primers at high temperature. This RNase HII enzyme displays almost no activity at low temperature, so the removal of the block only occurs at high temperature. The enzyme also possesses inherent primer:template mismatch discrimination, resulting in additional selection against primer dimers. Self-avoiding molecular recognition systems : also known as SAMRS, [ 14 ] these eliminate primer dimers by introducing nucleotide analogues T*, A*, G* and C* into the primer. The SAMRS DNA can bind to natural DNA, but not to other members of the same SAMRS species. For example, T* can bind to A but not A*, and A* can bind to T but not T*. Thus, through careful design, [ 15 ] primers built from SAMRS can avoid primer-primer interactions, allowing sensitive SNP detection as well as multiplex PCR.
While the methods above are designed to reduce PD formation, another approach aims to minimize signal generated from PDs in quantitative PCR . This approach is useful as long as there are few PDs formed and their inhibitory effect on product accumulation is minor. Four steps PCR : used when working with nonspecific dyes, such as SYBR Green I. It is based on the different length, and hence, different melting temperature of the PDs and the target sequence. In this method the signal is acquired below the melting temperature of the target sequence, but above the melting temperature of the PDs. [ 16 ] Sequence-specific probes : TaqMan and molecular beacon probes generate signal only in the presence of their target (complementary) sequence, and this enhanced specificity precludes signal acquisition (but not possible inhibitory effects on product accumulation) from PDs.
https://en.wikipedia.org/wiki/Primer_dimer
Primer extension is a technique whereby the 5' ends of RNA can be mapped - that is, they can be sequenced and properly identified. Primer extension can be used to determine the start site of transcription (the end site cannot be determined by this method), provided that the corresponding DNA sequence is known. This technique requires a radiolabelled primer (usually 20 - 50 nucleotides in length) which is complementary to a region near the 3' end of the mRNA. The primer is allowed to anneal to the RNA and reverse transcriptase is used to synthesize cDNA from the RNA until it reaches the 5' end of the RNA. By denaturing the hybrid and using the extended primer cDNA as a marker on an electrophoretic gel, it is possible to determine the transcriptional start site . This is usually done by comparing its location on the gel with a DNA sequencing ladder (e.g. from Sanger sequencing ), preferably generated with the same primer on the DNA template strand. The exact nucleotide at which transcription starts can be pinpointed by matching the labelled extended primer with the marker nucleotide that shares the same migration distance on the gel. Primer extension offers an alternative to a nuclease protection assay (S1 nuclease mapping) for quantifying and mapping RNA transcripts. The hybridization probe for primer extension is a synthesized oligonucleotide, whereas S1 mapping requires isolation of a DNA fragment. Both methods provide information about where an mRNA starts and provide an estimate of the concentration of a transcript by the intensity of the transcript band on the resulting autoradiograph. Unlike S1 mapping, however, primer extension can only be used to locate the 5’-end of an mRNA transcript because the DNA synthesis required for the assay relies on reverse transcriptase (which only polymerizes in the 5’ → 3’ direction). Primer extension is unaffected by splice sites and is thus preferable in situations where intervening splice sites prevent S1 mapping. Finally, primer extension is more accurate than S1 mapping because the S1 nuclease used in S1 mapping can “nibble off” ends of the RNA-DNA hybrid or fail to degrade the single-stranded regions completely, making a transcript appear either shorter or longer than it is. This molecular or cell biology article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Primer_extension
Primer walking is a technique used to clone a gene (e.g., disease gene) from its known closest markers (e.g., known gene). As a result, it is employed in cloning and sequencing efforts in plants, fungi, and mammals with minor alterations. This technique, also known as "directed sequencing," employs a series of Sanger sequencing reactions to either confirm the reference sequence of a known plasmid or PCR product based on the reference sequence (sequence confirmation service) or to discover the unknown sequence of a full plasmid or PCR product by designing primers to sequence overlapping sections (sequence discovery service). [ 1 ] Primer walking is a method to determine the sequence of DNA up to the 1.3–7.0 kb range whereas chromosome walking is used to produce the clones of already known sequences of the gene. [ 2 ] Too long fragments cannot be sequenced in a single sequence read using the chain termination method . This method works by dividing the long sequence into several consecutive short ones. The DNA of interest may be a plasmid insert, a PCR product or a fragment representing a gap when sequencing a genome. The term "primer walking" is used where the main aim is to sequence the genome. The term "chromosome walking" is used instead when the sequence is known but there is no clone of a gene. For example, the gene for a disease may be located near a specific marker such as an RFLP on the sequence. [ 3 ] Chromosome walking is a technique used to clone a gene (e.g., disease gene) from its known closest markers (e.g., known gene) and hence is used in moderate modifications in cloning and sequencing projects in plants, fungi, and animals. To put it another way, it's utilized to find, isolate, and clone a specific sequence existing near the gene to be mapped. Libraries of large fragments, mainly bacterial artificial chromosome libraries, are mostly used in genomic projects. To identify the desired colony and to select a particular clone the library is screened first with a desired probe. After screening, the clone is overlapped with the probe and overlapping fragments are mapped. These fragments are then used as a new probe (short DNA fragments obtained from the 3′ or 5′ ends of clones) to identify other clones. A library approximately consists of 96 clones and each clone contains a different insert. Probe one identifies λ1 and λ2 as it overlaps them . Probe two derived from λ2 clones is used to identify λ3, and so on. Orientation of the clones is determined by restriction mapping of the clones. Thus, new chromosomal regions present in the vicinity of a gene could be identified. Chromosome walking is time-consuming, and chromosome landing is the method of choice for gene identification. This method necessitates the discovery of a marker that is firmly related to the mutant locus. [ 4 ] The fragment is first sequenced as if it were a shorter fragment. Sequencing is performed from each end using either universal primers or specifically designed ones. This should identify the first 1000 or so bases. In order to completely sequence the region of interest, design and synthesis of new primers (complementary to the final 20 bases of the known sequence) is necessary to obtain contiguous sequence information. [ 5 ] Primer walking is an example of directed sequencing because the primer is designed from a known region of DNA to guide the sequencing in a specific direction. In contrast to directed sequencing, shotgun sequencing of DNA is a more rapid sequencing strategy. 
[ 6 ] There is a technique from the "old time" of genome sequencing. The underlying method for sequencing is the Sanger chain termination method which can have read lengths between 100 and 1000 basepairs (depending on the instruments used). This means you have to break down longer DNA molecules, clone and subsequently sequence them. There are two methods possible. [ 7 ] The first is called chromosome (or primer) walking and starts with sequencing the first piece. The next (contiguous) piece of the sequence is then sequenced using a primer which is complementary to the end of the first sequence read and so on. This technique doesn't require much assembling, but you need a lot of primers and it is relatively slow. [ 8 ] To overcome this problem the shotgun sequencing method was developed. Here the DNA is broken into different pieces (not all broken at the same place), cloned and sequenced with primers specific for the vector used for cloning. This leads to overlapping sequences which then have to be assembled into one sequence on the computer. This method allows for the parallelization of the sequencing (you can prepare a lot of sequencing reactions at the same time and run them) which makes the process much faster and also avoids the need for sequence specific primers. The challenge is to organize sequences into their order, as overlaps are not as clear here. To resolve this problem, a first draft is made and then critical regions are resequenced using other techniques such as primer walking. [ 9 ] The overall process is as follows: A primer that matches the beginning of the DNA to sequence is used to synthesize a short DNA strand adjacent to the unknown sequence, starting with the primer (see PCR ). The new short DNA strand is sequenced using the chain termination method. The end of the sequenced strand is used as a primer for the next part of the long DNA sequence, hence the term "walking". The method can be used to sequence entire chromosomes (hence "chromosome walking"). [ 10 ] Primer walking was also the basis for the development of shotgun sequencing , which uses random primers instead of specifically chosen ones.
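The walking loop itself is mechanical enough to be captured in a toy simulation. The sketch below is purely illustrative (the read length, primer length and random template are arbitrary assumptions): each cycle "reads" a stretch downstream of the current primer, and the last 20 bases of the sequence assembled so far become the primer for the next cycle, exactly as described above.

```python
import random

def primer_walk(template: str, first_primer: str, read_length: int = 700, primer_length: int = 20) -> str:
    """Toy model of primer walking over a known template string."""
    primer = first_primer
    assembled = first_primer
    while True:
        pos = template.find(primer)
        if pos == -1:
            raise ValueError("primer does not match the template")
        read_start = pos + len(primer)
        read = template[read_start:read_start + read_length]   # one Sanger-like read
        if not read:
            break                                              # reached the end of the insert
        assembled += read
        primer = assembled[-primer_length:]                    # next primer: last 20 bases read so far
        # In a real experiment this primer would now be synthesized and used
        # in the next sequencing reaction.
    return assembled

random.seed(0)
template = "".join(random.choice("ACGT") for _ in range(5000))  # synthetic 5 kb insert
result = primer_walk(template, template[:20])
assert result == template                                       # the walk recovers the whole insert
```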
https://en.wikipedia.org/wiki/Primer_walking
Priming is the first contact that antigen-specific T helper cell precursors have with an antigen . It is essential to the T helper cells' subsequent interaction with B cells to produce antibodies . [ 1 ] Priming of antigen-specific naive lymphocytes occurs when antigen is presented to them in immunogenic form (capable of inducing an immune response). Subsequently, the primed cells will differentiate either into effector cells or into memory cells that can mount stronger and faster response to second and upcoming immune challenges. [ 2 ] T and B cell priming occurs in the secondary lymphoid organs (lymph nodes and spleen). Priming of naïve T cells requires dendritic cell antigen presentation . Priming of naive CD8 T cells generates cytotoxic T cells capable of directly killing pathogen-infected cells. CD4 cells develop into a diverse array of effector cell types depending on the nature of the signals they receive during priming. CD4 effector activity can include cytotoxicity , but more frequently it involves the secretion of a set of cytokines that directs the target cell to make a particular response. This activation of naive T cell is controlled by a variety of signals: recognition of antigen in the form of a peptide: MHC complex on the surface of a specialized antigen-presenting cell delivers signal 1; interaction of co-stimulatory molecules on antigen-presenting cells with receptors on T cells delivers signal 2 (one notable example includes a B7 ligand complex on antigen-presenting cells binding to the CD28 receptor on T cells); and cytokines that control differentiation into different types of effector cells deliver signal 3. [ 2 ] Cross-priming refers to the stimulation of antigen-specific CD8+ cytotoxic T lymphocytes (CTLs) by dendritic cell presenting an antigen acquired from the outside of the cell. Cross-priming is also called immunogenic cross-presentation . This mechanism is vital for priming of CTLs against viruses and tumours . [ 3 ] Immune priming is a memory-like phenomenon described in invertebrate taxa of animals, first described by Hans G. Boman and colleagues using Drosophila fruit flies. [ 5 ] In vertebrates , immune memory is based on adaptive immune cells called B and T lymphocytes, which provide an enhanced and faster immune response when challenged with the same pathogen for a second time. It is evolutionarily advantageous for an organism to produce a rapid immune response to common pathogens it is likely to be exposed to again. In the 1940s-1960s, the budding field of immunology assumed that invertebrates did not have memory-like immune functions as they do not produce antibodies needed for adaptive immunity . In 1972, Boman and colleagues' experiments overturned this assumption, showing that fruit flies could be "vaccinated" against a repeat infection by the same bacteria if they were first exposed to a freeze-thawed pathogen. Flies previously exposed to freeze-thawed bacteria cleared subsequent infection better than naive flies. [ 5 ] Since then, evidence supporting innate memory-like functions have been found across model invertebrates, including insects and crustaceans. Results of immune priming research commonly find that mechanism conferring defense against a given pathogen is dependent on the kind of insect species and microbe used for given experiment. That could be due to host-pathogen coevolution . For every species is convenient to develop a specialised defense against a pathogen (e.g. bacterial strain) that it encounters the most. 
[ 6 ] In the arthropod model organism the red flour beetle Tribolium castaneum , it has been shown that the route of infection ( cuticular , septic or oral) is important for the defence mechanism that is generated. [ 7 ] Innate immunity in insects is based on non-cellular mechanisms, including production of antimicrobial peptides (AMPs), reactive oxygen species (ROS) or activation of the prophenol oxidase cascade . Cellular parts of insect innate immunity are hemocytes , which can eliminate pathogens by nodulation, encapsulation or phagocytosis . [ 8 ] The innate response during immune priming differs based on the experimental setup, but generally it involves enhancement of humoral innate immune mechanisms and increased levels of hemocytes. There are two hypothetical scenarios of immune induction on which the immune priming mechanism could be based. [ 7 ] [ 9 ] The first is the induction by the priming antigens of long-lasting defences, such as circulating immune molecules, which remain in the host body until the secondary encounter. The second describes a drop after the initial priming response, but a stronger defence upon a secondary challenge. The most probable scenario is a combination of these two mechanisms. [ 7 ] Trans-generational immune priming (TGIP) describes the transfer of a parent's immunological experience to its progeny , which may help the survival of the offspring when challenged with the same pathogen. A similar mechanism of offspring protection against pathogens has long been studied in vertebrates, where the transfer of maternal antibodies helps the newborn's immune system fight an infection before it can function properly on its own. In the last two decades TGIP in invertebrates has been heavily studied. Evidence supporting TGIP has been found in coleopteran , crustacean , hymenopteran , orthopteran and mollusk species, but in some other species the results still remain contradictory. [ 10 ] The experimental outcome can be influenced by the procedure used in a particular investigation; relevant parameters include the infection procedure, the sex of the offspring and of the parent, and the developmental stage. [ 10 ]
https://en.wikipedia.org/wiki/Priming_(immunology)
Priming or a "priming effect" is said to occur when something that is added to soil or compost affects the rate of decomposition occurring on the soil organic matter (SOM), either positively or negatively. Organic matter is made up mostly of carbon and nitrogen , so adding a substrate containing certain ratios of these nutrients to soil may affect the microbes that are mineralizing SOM. Fertilizers , plant litter , detritus , and carbohydrate exudates from living roots , can potentially positively or negatively prime SOM decomposition. [ 1 ] [ 2 ] [ 3 ] This soil science –related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Priming_(microbiology)
In phylogenetics , a primitive (or ancestral ) character, trait, or feature of a lineage or taxon is one that is inherited from the common ancestor of a clade (or clade group) and has undergone little change since. Conversely, a trait that appears within the clade group (that is, is present in any subgroup within the clade but not all) is called advanced or derived . A clade is a group of organisms that consists of a common ancestor and all its lineal descendants. A primitive trait is the original condition of that trait in the common ancestor; advanced indicates a notable change from the original condition. These terms in biology contain no judgement about the sophistication, superiority, value or adaptiveness of the named trait. "Primitive" in biology means only that the character appeared first in the common ancestor of a clade group and has been passed on largely intact to more recent members of the clade. "Advanced" means the character has evolved within a later subgroup of the clade. Phylogenetics is utilized to determine evolutionary relationships and relatedness, to ultimately depict accurate evolutionary lineages. Evolutionary relatedness between living species can be connected by descent from common ancestry. [ 1 ] These evolutionary lineages can thereby be portrayed through a phylogenetic tree, or cladogram, where varying relatedness amongst species is evidently depicted. Through this tree, organisms can be categorized by divergence from the common ancestor, and primitive characters, to clades of organisms with shared derived character states. Furthermore, cladograms allow researchers to view the changes and evolutionary alterations occurring in a species over time as they move from primitive characters to varying derived character states. [ 2 ] Cladograms are important for scientists as they allow them to classify and hypothesize the origin and future of organisms. Cladograms allow scientists to propose their evolutionary scenarios about the lineage from a primitive trait to a derived one. By understanding how the trait came to be, scientists can hypothesize the environment that specific organism was in and how that affected the evolutionary adaptations of the trait that came to be. [ 3 ] Other, more technical, terms for these two conditions—for example, "plesiomorphic" and "synapomorphic"—are frequently encountered; see the table below. At least three other sets of terms are synonymous with the terms "primitive" and "advanced". The technical terms are considered preferable because they are less likely to convey the sense that the trait mentioned is inferior, simpler, or less adaptive (e.g., as in non-vascular ("lower") and vascular ("higher") plants ). [ 4 ] The terms "plesiomorphy" and "apomorphy" are typically used in the technical literature: for example, when a plesiomorphic trait is shared by more than one member of a clade, the trait is called a symplesiomorphy , that is, a shared primitive trait; a shared derived trait is a synapomorphy . The amount of variation of characters can assist in depicting greater relatedness amongst species, and conversely show the lack of relatedness between species. Analysis of character variation also aids in distinguishing primitive characters from derived characters. [ 5 ] The term derived and primitive, or ancestral, is used in reference to characters and character state. 
In doing so, a derived character is depicted as a character procured through evolution from the previous ancestral state, and persisting due to fixation of derived alleles. Whereas, a primitive character is one that is originally present in the ancestral population. [ 5 ] Primitive characters are avoided as they depict the ancestral character state. Conversely, derived characters depict the alteration of characters from the ancestral state because selection favored organisms with that derived trait. [ 6 ] "Primitive" and "advanced" are relative terms. When a trait is called primitive, the determination is based on the perspective from which the trait is viewed. Any trait can be both primitive (ancestral) and advanced (derived) depending on the context. In the clade of vertebrates, legs are an advanced trait since it is a feature that appears in the clade. However, in the clade of tetrapods, legs are primitive since they were inherited from a common ancestor. [ 7 ] The terms "primitive" and "advanced", etc., are not properly used in referring to a species or an organism as any species or organism is a mosaic of primitive and derived traits. Using "primitive" and "advanced" may lead to "ladder thinking" (compare the Latin term scala naturae 'ladder of nature'), [ 8 ] which is the thought that all species are evolving because they are striving toward supremacy. When this form of thinking is used, humans are typically considered perfect and all other organisms are of less quality than them. [ 9 ] This can cause the misconception of one species being an ancestor to another species, when in fact both species are extant. [ 8 ] Homo sapiens , for example have large brains (a derived trait) and five fingers (a primitive trait) in their lineage. [ 10 ] [ 11 ] Species are constantly evolving, so a frog is not biologically more primitive than a human as each has been evolving continuously since each lineage split from their common ancestor.
https://en.wikipedia.org/wiki/Primitive_(phylogenetics)
In algebra, a primitive element of a co-algebra C (over an element g ) is an element x that satisfies μ ( x ) = x ⊗ g + g ⊗ x , {\displaystyle \mu (x)=x\otimes g+g\otimes x,} where μ {\displaystyle \mu } is the co-multiplication and g is an element of C that maps to the multiplicative identity 1 of the base field under the co-unit ( g is called group-like ). If C is a bi-algebra , i.e., a co-algebra that is also an algebra (with certain compatibility conditions satisfied), then one usually takes g to be 1, the multiplicative identity of C . The bi-algebra C is said to be primitively generated if it is generated by primitive elements (as an algebra). If C is a bi-algebra, then the set of primitive elements forms a Lie algebra with the usual commutator bracket [ x , y ] = x y − y x {\displaystyle [x,y]=xy-yx} ( graded commutator if C is graded). If A is a connected graded cocommutative Hopf algebra over a field of characteristic zero, then the Milnor–Moore theorem states the universal enveloping algebra of the graded Lie algebra of primitive elements of A is isomorphic to A . (This also holds under slightly weaker requirements.) This algebra -related article is a stub . You can help Wikipedia by expanding it .
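A standard worked example, with g = 1 and the comultiplication written μ as above, is the polynomial bialgebra k[x] in characteristic 0: the variable x is primitive, while its higher powers are not. The short LaTeX sketch below records this.

```latex
% Example: the polynomial bialgebra C = k[x] over a field k of characteristic 0,
% with comultiplication \mu, counit \epsilon, and group-like element g = 1.
\mu(x) = x \otimes 1 + 1 \otimes x, \qquad \epsilon(x) = 0,
% so x is a primitive element.  Since x \otimes 1 and 1 \otimes x commute,
\mu(x^{n}) = (x \otimes 1 + 1 \otimes x)^{n}
           = \sum_{k=0}^{n} \binom{n}{k}\, x^{k} \otimes x^{\,n-k},
% which is not of the form x^{n} \otimes 1 + 1 \otimes x^{n} for n \ge 2.
% In characteristic 0 the primitive elements of k[x] are exactly the scalar
% multiples of x, forming an abelian Lie algebra ([a,b] = ab - ba = 0).
```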
https://en.wikipedia.org/wiki/Primitive_element_(co-algebra)
In geochemistry , the primitive mantle (also known as the bulk silicate Earth ) is the chemical composition of the Earth's mantle during the developmental stage between core-mantle differentiation and the formation of early continental crust . The chemical composition of the primitive mantle contains characteristics of both the crust and the mantle. [ 2 ] One accepted scientific hypothesis is that the Earth was formed by accretion of material with a chondritic composition through impacts with differentiated planetesimals. During this accretionary phase, planetary differentiation separated the Earth's core , where heavy metallic siderophile elements accumulated, from the surrounding undifferentiated primitive mantle. [ 3 ] Further differentiation would take place later, creating the different chemical reservoirs of crust and mantle material, with incompatible elements accumulating in the crust. [ 4 ] Today, differentiation still continues in the upper mantle , resulting in two types of mantle reservoirs: those depleted in lithophile elements ( depleted reservoirs), and those composed of "fresh" undifferentiated mantle material ( enriched or primitive reservoirs) . [ 5 ] Volcanic rocks from hotspot areas often have a primitive composition, and because the magma at hotspots is supposed to have been taken to the surface from the deepest regions of the mantle by mantle plumes , geochemists assume there must be a relatively closed and very undifferentiated primitive reservoir somewhere in the lower mantle . [ 6 ] One hypothesis to describe this assumption is the existence of the D"-layer at the core-mantle boundary . [ 7 ] Although the chemical composition of the primitive mantle cannot be directly measured at its source, researchers have been able to estimate primitive mantle characteristics using a few methods. One methodology involves the analysis of chondritic meteorites that represent early Earth chemical composition and creating models using the analyzed chemical characteristics and assumptions describing inner-Earth dynamics. This approach is based on the assumption that early planetary bodies in the Solar System formed under similar conditions, giving them comparable chemical compositions. [ 8 ] The more direct methodology is to observe trends in the chemical makeup of upper mantle peridotites and interpret the hypothetical composition of the primitive mantle based on these trends. This is done by matching the peridotite compositional trends to the distribution of refractory lithophile elements (which are not affected by core-mantle differentiation) in chondritic meteorites. Both methods have limitations based on the assumptions made about inner-earth, as well as statistical uncertainties in the models used to quantify the data. [ 2 ] The two approaches detailed above yield weight percentages that follow the same general trends when compared to the depleted (or homogeneous) mantle: the primitive mantle has significantly higher concentrations of SiO 2, Al 2 O 3 , Na 2 O, and CaO, and significantly lower concentrations of MgO. More importantly, both approaches show that the primitive mantle has much greater concentrations of refractory lithophile elements (e.g Al, Ba, Be, Ca, Hf, Nb, Sc, Sr, Ta, Th, Ti, U, Y, Zr, and rare earth elements) . [ 9 ] The exact concentrations of these compounds and refractory lithophile elements depends on the estimation method used. 
Methods using peridotite analysis yield a much smaller primitive mantle weight percentage for SiO 2 and significantly larger primitive mantle weight percentages for MgO and Al 2 O 3 than those estimated using direct chondritic meteorite analysis. The estimated concentrations of refractory lithophile elements obtained from the two methods vary as well, typically by 0.1–5 ppm. [ 10 ]
https://en.wikipedia.org/wiki/Primitive_mantle
In mathematics , logic , philosophy , and formal systems , a primitive notion is a concept that is not defined in terms of previously-defined concepts. It is often motivated informally, usually by an appeal to intuition or taken to be self-evident . In an axiomatic theory , relations between primitive notions are restricted by axioms . [ 1 ] Some authors refer to the latter as "defining" primitive notions by one or more axioms, but this can be misleading. Formal theories cannot dispense with primitive notions, under pain of infinite regress (per the regress problem ). For example, in contemporary geometry, point , line , and contains are some primitive notions. Alfred Tarski explained the role of primitive notions as follows: [ 2 ] An inevitable regress to primitive notions in the theory of knowledge was explained by Gilbert de B. Robinson : The necessity for primitive notions is illustrated in several axiomatic foundations in mathematics: In his book on philosophy of mathematics , The Principles of Mathematics Bertrand Russell used the following notions: for class-calculus ( set theory ), he used relations , taking set membership as a primitive notion. To establish sets, he also establishes propositional functions as primitive, as well as the phrase "such that" as used in set builder notation . (pp 18,9) Regarding relations, Russell takes as primitive notions the converse relation and complementary relation of a given xRy . Furthermore, logical products of relations and relative products of relations are primitive. (p 25) As for denotation of objects by description, Russell acknowledges that a primitive notion is involved. (p 27) The thesis of Russell’s book is "Pure mathematics uses only a few notions, and these are logical constants." (p xxi)
https://en.wikipedia.org/wiki/Primitive_notion
In algebra , the content of a nonzero polynomial with integer coefficients (or, more generally, with coefficients in a unique factorization domain ) is the greatest common divisor of its coefficients. The primitive part of such a polynomial is the quotient of the polynomial by its content. Thus a polynomial is the product of its primitive part and its content, and this factorization is unique up to the multiplication of the content by a unit of the ring of the coefficients (and the multiplication of the primitive part by the inverse of the unit). A polynomial is primitive if its content equals 1. Thus the primitive part of a polynomial is a primitive polynomial. Gauss's lemma for polynomials states that the product of primitive polynomials (with coefficients in the same unique factorization domain) also is primitive. This implies that the content and the primitive part of the product of two polynomials are, respectively, the product of the contents and the product of the primitive parts. As the computation of greatest common divisors is generally much easier than polynomial factorization , the first step of a polynomial factorization algorithm is generally the computation of its primitive part–content factorization (see Factorization of polynomials § Primitive part–content factorization ). Then the factorization problem is reduced to factorize separately the content and the primitive part. Content and primitive part may be generalized to polynomials over the rational numbers , and, more generally, to polynomials over the field of fractions of a unique factorization domain. This makes essentially equivalent the problems of computing greatest common divisors and factorization of polynomials over the integers and of polynomials over the rational numbers. For a polynomial with integer coefficients, the content may be either the greatest common divisor of the coefficients or its additive inverse . The choice is arbitrary, and may depend on a further convention, which is commonly that the leading coefficient of the primitive part be positive. For example, the content of − 12 x 3 + 30 x − 20 {\displaystyle -12x^{3}+30x-20} may be either 2 or −2, since 2 is the greatest common divisor of −12, 30, and −20. If one chooses 2 as the content, the primitive part of this polynomial is and thus the primitive-part-content factorization is For aesthetic reasons, one often prefers choosing a negative content, here −2, giving the primitive-part-content factorization In the remainder of this article, we consider polynomials over a unique factorization domain R , which can typically be the ring of integers , or a polynomial ring over a field . In R , greatest common divisors are well defined, and are unique up to multiplication by a unit of R . The content c ( P ) of a polynomial P with coefficients in R is the greatest common divisor of its coefficients, and, as such, is defined up to multiplication by a unit. The primitive part pp( P ) of P is the quotient P / c ( P ) of P by its content; it is a polynomial with coefficients in R , which is unique up to multiplication by a unit. If the content is changed by multiplication by a unit u , then the primitive part must be changed by dividing it by the same unit, in order to keep the equality P = c ( P ) pp ⁡ ( P ) , {\displaystyle P=c(P)\operatorname {pp} (P),} which is called the primitive-part-content factorization of P . 
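The worked example above, −12x³ + 30x − 20, is easy to reproduce in code. The sketch below is a minimal integer-coefficient implementation; the function names and the sign convention (primitive part with a positive leading coefficient, hence a negative content here) are my own choices, mirroring the convention mentioned in the text.

```python
from math import gcd
from functools import reduce

def content(coeffs):
    """Content of an integer polynomial (coefficients listed from the leading term):
    the gcd of the coefficients, signed so the primitive part has a positive leading coefficient."""
    c = reduce(gcd, (abs(a) for a in coeffs))
    return -c if coeffs[0] < 0 else c

def primitive_part(coeffs):
    """Primitive part: the polynomial divided by its content."""
    c = content(coeffs)
    return [a // c for a in coeffs]

p = [-12, 0, 30, -20]            # -12x^3 + 30x - 20, the example from the text
print(content(p))                # -2
print(primitive_part(p))         # [6, 0, -15, 10], i.e. 6x^3 - 15x + 10
# Choosing +2 as the content instead gives the primitive part -6x^3 + 15x - 10,
# the other convention discussed above; c(P) * pp(P) recovers P either way.
```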
The main properties of the content and the primitive part are consequences of Gauss's lemma, which asserts that the product of two primitive polynomials is primitive, where a polynomial is primitive if 1 is the greatest common divisor of its coefficients. In particular, the content of a product of polynomials is the product of their contents, and the primitive part of the product is the product of the primitive parts. This implies that the computation of the primitive-part-content factorization of a polynomial reduces the computation of its complete factorization to the separate factorization of the content and the primitive part. This is generally useful, because the computation of the primitive-part-content factorization involves only greatest common divisor computation in R, which is usually much easier than factorization. The primitive-part-content factorization may be extended to polynomials with rational coefficients as follows. Given a polynomial P with rational coefficients, by rewriting its coefficients with the same common denominator d, one may rewrite P as P = Q / d, where Q is a polynomial with integer coefficients. The content of P is the quotient by d of the content of Q, that is, c(P) = c(Q) / d, and the primitive part of P is the primitive part of Q: pp(P) = pp(Q). It is easy to show that this definition does not depend on the choice of the common denominator, and that the primitive-part-content factorization remains valid: P = c(P) pp(P). This shows that every polynomial over the rationals is associated with a unique primitive polynomial over the integers, and that the Euclidean algorithm allows the computation of this primitive polynomial. A consequence is that factoring polynomials over the rationals is equivalent to factoring primitive polynomials over the integers. As polynomials with coefficients in a field are more common than polynomials with integer coefficients, it may seem that this equivalence may be used for factoring polynomials with integer coefficients. In fact, the truth is exactly the opposite: every known efficient algorithm for factoring polynomials with rational coefficients uses this equivalence for reducing the problem modulo some prime number p (see Factorization of polynomials). This equivalence is also used for computing greatest common divisors of polynomials, even though the Euclidean algorithm is defined for polynomials with rational coefficients. In fact, in this case, the Euclidean algorithm requires one to compute the reduced form of many fractions, and this makes the Euclidean algorithm less efficient than algorithms which work only with polynomials over the integers (see Polynomial greatest common divisor). The results of the preceding section remain valid if the ring of integers and the field of rationals are respectively replaced by any unique factorization domain R and its field of fractions K. This is typically used for factoring multivariate polynomials, and for proving that a polynomial ring over a unique factorization domain is also a unique factorization domain. A polynomial ring over a field is a unique factorization domain. The same is true for a polynomial ring over a unique factorization domain. To prove this, it suffices to consider the univariate case, as the general case may be deduced by induction on the number of indeterminates. The unique factorization property is a direct consequence of Euclid's lemma: if an irreducible element divides a product, then it divides one of the factors. For univariate polynomials over a field, this results from Bézout's identity, which itself results from the Euclidean algorithm.
So, let R be a unique factorization domain that is not a field, and let R[X] be the univariate polynomial ring over R. An irreducible element r in R[X] is either an irreducible element of R or an irreducible primitive polynomial. If r is in R and divides a product P_1 P_2 of two polynomials, then it divides the content c(P_1 P_2) = c(P_1) c(P_2). Thus, by Euclid's lemma in R, it divides one of the contents, and therefore one of the polynomials. If r is not in R, it is a primitive polynomial (because it is irreducible). Then Euclid's lemma in R[X] results immediately from Euclid's lemma in K[X], where K is the field of fractions of R. For factoring a multivariate polynomial over a field or over the integers, one may consider it as a univariate polynomial with coefficients in a polynomial ring having one fewer indeterminate. Then the factorization is reduced to factoring separately the primitive part and the content. As the content has one fewer indeterminate, it may be factored by applying the method recursively. For factoring the primitive part, the standard method consists of substituting integers for the indeterminates appearing in the coefficients, in a way that does not change the degree in the remaining variable, factoring the resulting univariate polynomial, and lifting the result to a factorization of the primitive part.
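Returning to the extension to rational coefficients described earlier, the following Python sketch (illustrative only; it uses the standard library's Fraction type and an invented helper name) computes the content of a polynomial with rational coefficients by clearing denominators, following the formula c(P) = c(Q) / d.

from fractions import Fraction
from math import gcd, lcm
from functools import reduce

def rational_content(coeffs):
    # Content of a polynomial with rational coefficients (constant term first),
    # computed as c(P) = c(Q) / d, where d is a common denominator and Q = d*P
    # has integer coefficients.  The sign convention is ignored here.
    coeffs = [Fraction(c) for c in coeffs]
    d = reduce(lcm, (c.denominator for c in coeffs))
    q = [int(c * d) for c in coeffs]
    g = reduce(gcd, (abs(a) for a in q))
    return Fraction(g, d)

# P = (3/4)x^2 - (9/10)x : content 3/20, primitive part 5x^2 - 6x
p = [Fraction(0), Fraction(-9, 10), Fraction(3, 4)]
c = rational_content(p)
print(c)                           # 3/20
print([coeff / c for coeff in p])  # [0, -6, 5]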
https://en.wikipedia.org/wiki/Primitive_part_and_content
In finite field theory, a branch of mathematics, a primitive polynomial is the minimal polynomial of a primitive element of the finite field GF(p^m). This means that a polynomial F(X) of degree m with coefficients in GF(p) = Z/pZ is a primitive polynomial if it is monic and has a root α in GF(p^m) such that {0, 1, α, α^2, α^3, …, α^(p^m − 2)} is the entire field GF(p^m). This implies that α is a primitive (p^m − 1)th root of unity in GF(p^m). Over GF(3), the polynomial x^2 + 1 is irreducible but not primitive, because it divides x^4 − 1: its roots generate a cyclic group of order 4, while the multiplicative group of GF(3^2) is a cyclic group of order 8. The polynomial x^2 + 2x + 2, on the other hand, is primitive. Denote one of its roots by α. Then, because the natural numbers less than and relatively prime to 3^2 − 1 = 8 are 1, 3, 5, and 7, the four primitive roots in GF(3^2) are α, α^3 = 2α + 1, α^5 = 2α, and α^7 = α + 2. The primitive roots α and α^3 are algebraically conjugate. Indeed, x^2 + 2x + 2 = (x − α)(x − (2α + 1)). The remaining primitive roots α^5 and α^7 = (α^5)^3 are also algebraically conjugate and produce the second primitive polynomial: x^2 + x + 2 = (x − 2α)(x − (α + 2)). For degree 3, GF(3^3) has φ(3^3 − 1) = φ(26) = 12 primitive elements. As each primitive polynomial of degree 3 has three roots, all necessarily primitive, there are 12 / 3 = 4 primitive polynomials of degree 3. One primitive polynomial is x^3 + 2x + 1. Denoting one of its roots by γ, the algebraically conjugate elements are γ^3 and γ^9. The other primitive polynomials are associated with algebraically conjugate sets built on other primitive elements γ^r with r relatively prime to 26. Primitive polynomials can be used to represent the elements of a finite field. If α in GF(p^m) is a root of a primitive polynomial F(x), then the nonzero elements of GF(p^m) are represented as successive powers of α: 1 = α^0, α, α^2, …, α^(p^m − 2). This allows an economical representation in a computer of the nonzero elements of the finite field, by representing an element by the corresponding exponent of α. This representation makes multiplication easy, as it corresponds to addition of exponents modulo p^m − 1. Primitive polynomials over GF(2), the field with two elements, can be used for pseudorandom bit generation. In fact, every linear-feedback shift register with maximum cycle length (which is 2^n − 1, where n is the length of the linear-feedback shift register) may be built from a primitive polynomial. [ 2 ] In general, for a primitive polynomial of degree m over GF(2), this process will generate 2^m − 1 pseudo-random bits before repeating the same sequence. The cyclic redundancy check (CRC) is an error-detection code that operates by interpreting the message bitstring as the coefficients of a polynomial over GF(2) and dividing it by a fixed generator polynomial also over GF(2); see Mathematics of CRC. Primitive polynomials, or multiples of them, are sometimes a good choice for generator polynomials because they can reliably detect two bit errors that occur far apart in the message bitstring, up to a distance of 2^n − 1 for a degree-n primitive polynomial. A useful class of primitive polynomials is the primitive trinomials, those having only three nonzero terms: x^r + x^k + 1.
Their simplicity makes for particularly small and fast linear-feedback shift registers. [ 3 ] A number of results give techniques for locating and testing primitiveness of trinomials. [ 4 ] For polynomials over GF(2), where 2^r − 1 is a Mersenne prime, a polynomial of degree r is primitive if and only if it is irreducible. (Given an irreducible polynomial, it is not primitive only if the period of x is a non-trivial factor of 2^r − 1. Primes have no non-trivial factors.) Although the Mersenne Twister pseudo-random number generator does not use a trinomial, it does take advantage of this. Richard Brent has been tabulating primitive trinomials of this form, such as x^74207281 + x^30684570 + 1. [ 5 ] [ 6 ] This can be used to create a pseudo-random number generator of the huge period 2^74207281 − 1 ≈ 3 × 10^22338617.
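As a concrete illustration of the definitions above, the following Python sketch (illustrative only; the helper names are not from any library) checks primitivity by brute force: it computes the multiplicative order of the class of x modulo a given polynomial, reproducing the GF(3) examples discussed earlier.

def polymulmod(a, b, modulus, p):
    # Multiply polynomials a, b over GF(p) (coefficient lists, constant term
    # first) and reduce modulo the monic polynomial `modulus`.
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    deg = len(modulus) - 1
    for k in range(len(prod) - 1, deg - 1, -1):   # replace x^deg by lower terms
        c = prod[k]
        if c:
            prod[k] = 0
            for j in range(deg):
                prod[k - deg + j] = (prod[k - deg + j] - c * modulus[j]) % p
    return (prod + [0] * deg)[:deg]

def order_of_x(modulus, p):
    # Multiplicative order of the class of x modulo `modulus` (degree >= 2).
    deg = len(modulus) - 1
    x = [0, 1] + [0] * (deg - 2)
    one = [1] + [0] * (deg - 1)
    acc, n = x[:], 1
    while acc != one:
        acc = polymulmod(acc, x, modulus, p)
        n += 1
    return n

print(order_of_x([2, 2, 1], 3))  # 8 = 3^2 - 1: x^2 + 2x + 2 is primitive over GF(3)
print(order_of_x([1, 0, 1], 3))  # 4: x^2 + 1 is irreducible but not primitive

A second sketch shows the pseudorandom-bit application: a Fibonacci linear-feedback shift register over GF(2) whose taps are chosen here to realise the primitive trinomial x^3 + x + 1 (up to the usual reciprocal-polynomial ambiguity in tap conventions), so that any nonzero 3-bit seed cycles through all 2^3 − 1 = 7 states before repeating.

def lfsr_bits(seed, taps, nbits, count):
    # Fibonacci LFSR over GF(2): yield `count` output bits; `taps` are the
    # state positions XORed to form the feedback bit.
    state = seed
    for _ in range(count):
        yield state & 1
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = (state >> 1) | (feedback << (nbits - 1))

bits = list(lfsr_bits(seed=0b001, taps=(0, 1), nbits=3, count=14))
print(bits)  # [1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1] - the period is 7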
https://en.wikipedia.org/wiki/Primitive_polynomial_(field_theory)
Primitive recursive arithmetic (PRA) is a quantifier-free formalization of the natural numbers. It was first proposed by the Norwegian mathematician Thoralf Skolem (1923), [ 1 ] as a formalization of his finitistic conception of the foundations of arithmetic, and it is widely agreed that all reasoning of PRA is finitistic. Many also believe that all of finitism is captured by PRA, [ 2 ] but others believe finitism can be extended to forms of recursion beyond primitive recursion, up to ε_0, [ 3 ] which is the proof-theoretic ordinal of Peano arithmetic. [ 4 ] PRA's proof-theoretic ordinal is ω^ω, where ω is the smallest transfinite ordinal. PRA is sometimes called Skolem arithmetic, although that name also refers to a different theory (see Skolem arithmetic). The language of PRA can express arithmetic propositions involving natural numbers and any primitive recursive function, including the operations of addition, multiplication, and exponentiation. PRA cannot explicitly quantify over the domain of natural numbers. PRA is often taken as the basic metamathematical formal system for proof theory, in particular for consistency proofs such as Gentzen's consistency proof of first-order arithmetic. The language of PRA consists of a countably infinite supply of variables, the propositional connectives, the equality symbol, the constant symbol 0, the successor symbol S, and a symbol for each primitive recursive function. The logical axioms of PRA are the tautologies of the propositional calculus together with the usual axioms for equality; the logical rules of PRA are modus ponens and variable substitution. The non-logical axioms are, firstly, S(x) ≠ 0 and S(x) = S(y) → x = y, where x ≠ y always denotes the negation of x = y, so that, for example, S(0) ≠ 0 is a negated proposition. Further, recursive defining equations for every primitive recursive function may be adopted as axioms as desired. For instance, the most common characterization of the primitive recursive functions is as the constant 0 and the successor function, closed under projection, composition and primitive recursion. So for an (n+1)-place function f defined by primitive recursion over an n-place base function g and an (n+2)-place iteration function h, there would be the defining equations f(0, y_1, …, y_n) = g(y_1, …, y_n) and f(S(x), y_1, …, y_n) = h(x, f(x, y_1, …, y_n), y_1, …, y_n). In particular, addition and multiplication are defined by the equations x + 0 = x, x + S(y) = S(x + y), x · 0 = 0, and x · S(y) = (x · y) + x. PRA replaces the axiom schema of induction for first-order arithmetic with the rule of (quantifier-free) induction: from φ(0) and φ(x) → φ(S(x)), one may infer φ(y), for any quantifier-free formula φ. In first-order arithmetic, the only primitive recursive functions that need to be explicitly axiomatized are addition and multiplication. All other primitive recursive predicates can be defined using these two primitive recursive functions and quantification over all natural numbers. Defining primitive recursive functions in this manner is not possible in PRA, because it lacks quantifiers. It is possible to formalise PRA in such a way that it has no logical connectives at all—a sentence of PRA is just an equation between two terms. In this setting a term is a primitive recursive function of zero or more variables. Curry (1941) gave the first such system. The rule of induction in Curry's system was unusual. A later refinement was given by Goodstein (1954). The rule of induction in Goodstein's system is: from F(0) = G(0), F(S(x)) = H(x, F(x)), and G(S(x)) = H(x, G(x)), infer F(x) = G(x). Here x is a variable, S is the successor operation, and F, G, and H are any primitive recursive functions which may have parameters other than the ones shown. The only other inference rules of Goodstein's system are substitution rules, in which A, B, and C stand for arbitrary terms (primitive recursive functions of zero or more variables). Finally, there are symbols for any primitive recursive functions with corresponding defining equations, as in Skolem's system above. In this way the propositional calculus can be discarded entirely.
Logical operators can be expressed entirely arithmetically. For instance, the absolute value of the difference of two numbers can be defined by primitive recursion: P(0) = 0 and P(S(x)) = x define the predecessor P; x ∸ 0 = x and x ∸ S(y) = P(x ∸ y) define truncated subtraction; and |x − y| = (x ∸ y) + (y ∸ x). Thus, the equations x = y and |x − y| = 0 are equivalent. Therefore, the equations |x − y| + |u − v| = 0 and |x − y| · |u − v| = 0 express the logical conjunction and disjunction, respectively, of the equations x = y and u = v. Negation can be expressed as 1 ∸ |x − y| = 0.
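The following Python sketch is only an illustration of the two schemas just described (the names primrec, monus and absdiff are ad hoc, not part of any formal system): a generic primitive-recursion combinator is used to recover addition and multiplication, and the truncated-subtraction encoding shows how equations can play the role of logical connectives.

def primrec(g, h):
    # f(0, *ys) = g(*ys);  f(S(x), *ys) = h(x, f(x, *ys), *ys)
    def f(x, *ys):
        acc = g(*ys)
        for i in range(x):
            acc = h(i, acc, *ys)
        return acc
    return f

succ = lambda x: x + 1
add = primrec(lambda y: y, lambda i, acc, y: succ(acc))    # add(0, y) = y; add(S(x), y) = S(add(x, y))
mul = primrec(lambda y: 0, lambda i, acc, y: add(acc, y))  # mul(0, y) = 0; mul(S(x), y) = add(mul(x, y), y)

pred = primrec(lambda: 0, lambda i, acc: i)                # P(0) = 0, P(S(x)) = x
monus_rec = primrec(lambda x: x, lambda i, acc, x: pred(acc))
monus = lambda x, y: monus_rec(y, x)                       # truncated subtraction x -. y
absdiff = lambda x, y: add(monus(x, y), monus(y, x))       # |x - y|

# x = y                 iff  absdiff(x, y) == 0
# (x = y) and (u = v)   iff  absdiff(x, y) + absdiff(u, v) == 0
# (x = y) or  (u = v)   iff  absdiff(x, y) * absdiff(u, v) == 0
# not (x = y)           iff  monus(1, absdiff(x, y)) == 0
print(add(3, 4), mul(3, 4))          # 7 12
print(absdiff(5, 5), absdiff(5, 2))  # 0 3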
https://en.wikipedia.org/wiki/Primitive_recursive_arithmetic
In mathematical logic, the primitive recursive functionals are a generalization of primitive recursive functions into higher type theory. They consist of a collection of functions in all pure finite types. The primitive recursive functionals are important in proof theory and constructive mathematics. They are a central part of the Dialectica interpretation of intuitionistic arithmetic developed by Kurt Gödel. In recursion theory, the primitive recursive functionals are an example of higher-type computability, as primitive recursive functions are examples of Turing computability. Every primitive recursive functional has a type, which says what kind of inputs it takes and what kind of output it produces. An object of type 0 is simply a natural number; it can also be viewed as a constant function that takes no input and returns an output in the set N of natural numbers. For any two types σ and τ, the type σ→τ represents a function that takes an input of type σ and returns an output of type τ. Thus the function f(n) = n + 1 is of type 0→0. The types (0→0)→0 and 0→(0→0) are different; by convention, the notation 0→0→0 refers to 0→(0→0). In the jargon of type theory, objects of type 0→0 are called functions and objects that take inputs of type other than 0 are called functionals. For any two types σ and τ, the type σ×τ represents an ordered pair, the first element of which has type σ and the second element of which has type τ. For example, consider the functional A defined by A(f, n) = f(n). A takes as inputs a function f from N to N and a natural number n, and returns f(n). Then A has type (0 × (0→0))→0. This type can also be written as 0→(0→0)→0, by currying. The set of (pure) finite types is the smallest collection of types that includes 0 and is closed under the operations of × and →. A superscript is used to indicate that a variable x^τ is assumed to have a certain type τ; the superscript may be omitted when the type is clear from context. The primitive recursive functionals are the smallest collection of objects of finite type that contains the natural numbers with zero and successor and is closed under explicit definition (such as composition and projection) and under primitive recursion at every finite type.
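The following Python sketch is a loose illustration of these ideas (the names A, recursor and iterate are invented for this example, and Python is untyped, so the finite types are only indicated in comments): it shows a functional of type (0 × (0→0))→0 and a primitive recursion whose values are themselves functions rather than numbers.

def A(f, n):
    # the functional A of the text: A(f, n) = f(n); type (0 x (0 -> 0)) -> 0
    return f(n)

def recursor(base, step):
    # primitive recursion: R(0) = base, R(n+1) = step(n, R(n)); `base` and the
    # values of `step` may be higher-type objects (functions), not just numbers
    def R(n):
        acc = base
        for i in range(n):
            acc = step(i, acc)
        return acc
    return R

print(A(lambda n: n + 1, 4))           # 5

# iterate(n)(f) applies f to its argument n times; it is built by recursion at
# a higher type, since the value of the recursion has type (0 -> 0) -> (0 -> 0)
iter_zero = lambda f: (lambda x: x)
iter_step = lambda i, g: (lambda f: (lambda x: f(g(f)(x))))
iterate = recursor(iter_zero, iter_step)
print(iterate(3)(lambda x: x + 2)(0))  # 6: applies "add 2" three times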
https://en.wikipedia.org/wiki/Primitive_recursive_functional
Primodos was a hormone-based pregnancy test , produced by Schering AG , and used in the 1960s and 1970s that consisted of two pills that contained norethisterone (as acetate) and ethinylestradiol . [ 1 ] [ 2 ] It detected pregnancy by inducing menstruation in women who were not pregnant. The presence or absence of menstrual bleeding was then used to determine whether the user was pregnant. [ 1 ] In South Korea it was also used, "perhaps as a double dose" to induce abortions . [ 3 ] While first made available for sale in the UK in 1959, it was withdrawn from sale in the UK in 1978. [ 4 ] Primodos was produced by Schering AG , a German company taken over by Bayer AG in 2006. Another hormonal pregnancy test called Duogynon was in use in Germany during the same general time period. [ 1 ] In the 1960s, Dr. Isabel Gal conducted research at Queen Mary's Hospital for Children that showed a link between use of the drug and severe birth defects. [ 5 ] A review by the Committee on Safety of Medicines in the 1970s concluded that the product should not be used by pregnant women. [ 2 ] Litigation in the 1980s regarding these claims ended inconclusively, with proceedings being discontinued, with the court's approval. [ 1 ] [ 6 ] A review of the matter by the Medicines and Healthcare products Regulatory Agency in 2014 assessed the studies performed to date, and concluded that it found the evidence for adverse effects to be inconclusive. [ 1 ] [ 7 ] The report of an expert working group of the UK Commission on Human Medicines published in November 2017 concluded there was no "causal association" between Primodos and severe disabilities in babies. The expert group recommended that families who took a hormone pregnancy test and experienced "an adverse pregnancy outcome" should be offered genetic testing to establish whether there was a different underlying cause. [ 8 ] An independent review by Baroness Cumberlege , the Independent Medicines and Medical Devices Safety Review, reported in 2020 that "avoidable harm" resulted from the use of Primodos. It recommended that "the Government should immediately issue a fulsome apology on behalf of the healthcare system to the families affected by Primodos". [ 9 ] Dr Bill Inman , of the Committee on Safety of Medicines , who had investigated Primodos was referenced in a Schering memo stating "he has destroyed all the material on which his investigation is based, or made it unrecognizable, which makes it impossible to trace the individual cases taken into the investigation. I understood Dr. Inman that he did this to prevent individual claims from using this material." [ 9 ] Baroness Cumberlege said, in relation to Bayer, "I think they should not only apologise; they should recognise what has happened and give ex-gratia payments to these people who have suffered." [ 10 ]
https://en.wikipedia.org/wiki/Primodos
In inorganic chemistry, the primogenic effect describes the change in excited-state manifolds for first-row versus second- and third-row metal complexes. The effect is used to rationalize the ability or inability of certain metal complexes to function as photosensitizers, which in turn is relevant to photocatalysis. [ 1 ] Complexes of the type [M( 2,2'-bipyridine ) 3 ] 2+ are low spin for M 2+ = Fe(II), Ru(II), and Os(II). These species have similar ground-state properties: they are diamagnetic and undergo reversible oxidation to the trications. As a consequence of the primogenic effect, the first excited state for [Fe(bipy) 3 ] 2+ is a ligand-field (LF) state with a high-spin configuration. Such LF states characteristically decay to the ground state rapidly (on a femtosecond timescale). By contrast, for [Ru(bipy) 3 ] 2+ and [Os(bipy) 3 ] 2+ , the first excited state is charge-transfer in character. Bonding in this kind of excited state can be described as [M III (bipy − )(bipy) 2 ] 2+ , i.e. an oxidized metal ion bound to one bipy radical anion as well as two ordinary bipy ligands. Such charge-separated states have relatively long lifetimes of 900 (Ru) and 25 (Os) nanoseconds. Nanosecond lifetimes are sufficiently long that these excited states can participate in bimolecular reactions, i.e. they can photosensitize. One consequence of the primogenic effect is that first-row metals are usually incapable of serving as photosensitizers. This limitation is unfortunate because first-row metals are far cheaper than second- and third-row metals. The origin of the primogenic effect is traced to the presence (second- and third-row metals) or absence (first-row metals) of a radial node in the wave functions of the valence d orbitals. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Primogenic_Effect
In mathematical physics, the primon gas or Riemann gas [ 1 ] discovered by Bernard Julia [ 2 ] is a model illustrating correspondences between number theory and methods in quantum field theory, statistical mechanics and dynamical systems such as the Lee–Yang theorem. It is a quantum field theory of a set of non-interacting particles, the primons; it is called a gas or a free model because the particles are non-interacting. The idea of the primon gas was independently discovered by Donald Spector. [ 3 ] Later works by Ioannis Bakas and Mark Bowick, [ 4 ] and Spector [ 5 ] explored the connection of such systems to string theory. Consider a Hilbert space H with an orthonormal basis of states |p⟩ labelled by the prime numbers p. Second quantization gives a new Hilbert space K, the bosonic Fock space on H, where states describe collections of primes - which we can call primons if we think of them as analogous to particles in quantum field theory. This Fock space has an orthonormal basis given by finite multisets of primes. In other words, to specify one of these basis elements we can list the number k_p = 0, 1, 2, … of primons for each prime p, giving a state |k_2, k_3, k_5, …⟩ where the total Σ_p k_p is finite. Since any positive natural number n has a unique factorization into primes, n = 2^(k_2) · 3^(k_3) · 5^(k_5) ⋯, we can also denote the basis elements of the Fock space as simply |n⟩ where n = 1, 2, 3, …. In short, the Fock space for primons has an orthonormal basis given by the positive natural numbers, but we think of each such number n as a collection of primons: its prime factors, counted with multiplicity. Given the state x_n = n, we may use the Koopman operator [ 6 ] Φ to lift dynamics from the space of states to the space of observables; here log denotes an algorithm for integer factorisation, analogous to the discrete logarithm, and F is the successor function. A precise motivation for defining the Koopman operator Φ is that it represents a global linearisation of F, which views linear combinations of eigenstates as integer partitions. In fact, the successor function is not a linear function; hence, Φ is canonical. If we take a simple quantum Hamiltonian H to have eigenvalues proportional to log p, that is, with E_p = E log p for some positive constant E, we are naturally led to an energy E_n = E log n for the state |n⟩. Suppose we would like to know the average time, suitably normalised, that the Riemann gas spends in a particular subspace. How might this frequency be related to the dimension of this subspace? If we characterize distinct linear subspaces as Erdős–Kac data, which have the form of sparse binary vectors, then using the Erdős–Kac theorem we may demonstrate that this frequency depends upon nothing more than the dimension of the subspace. In fact, if ω(n) counts the number of distinct prime divisors of n ∈ N, then the Erdős–Kac law tells us that for large n the quantity (ω(n) − log log n) / √(log log n) has approximately the standard normal distribution.
What is even more remarkable is that although the Erdős–Kac theorem has the form of a statistical observation, it could not have been discovered using statistical methods. [ 7 ] Indeed, for X ∼ U([1, N]) the normal order of ω(X) only begins to emerge for N ≥ 10^100. The partition function Z of the primon gas is given by the Riemann zeta function: Z = Σ_n exp(−E_n / k_B T) = Σ_n n^(−s) = ζ(s), with s = E / (k_B T), where k_B is the Boltzmann constant and T is the absolute temperature. The divergence of the zeta function at s = 1 corresponds to the divergence of the partition function at a Hagedorn temperature of T_H = E / k_B. The above second-quantized model takes the particles to be bosons. If the particles are taken to be fermions, then the Pauli exclusion principle prohibits multi-particle states which contain the same prime twice, that is, states |n⟩ in which n is divisible by the square of a prime. By the spin–statistics theorem, field states with an even number of particles are bosons, while those with an odd number of particles are fermions. The fermion operator (−1)^F has a very concrete realization in this model as the Möbius function μ(n), in that the Möbius function is positive for bosons, negative for fermions, and zero on exclusion-principle-prohibited states. The connections between number theory and quantum field theory can be somewhat further extended into connections between topological field theory and K-theory, where, corresponding to the example above, the spectrum of a ring takes the role of the spectrum of energy eigenvalues, the prime ideals take the role of the prime numbers, the group representations take the role of integers, and group characters take the place of the Dirichlet characters, and so on.
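The following Python sketch (purely illustrative; the function names are invented for this example) pulls these pieces together: it reads off the primon occupation numbers of a state |n⟩ from the prime factorization, checks that the energy E_n = E log n is additive over primons, approximates the partition function by a truncated sum, and evaluates the Möbius function as the boson/fermion/excluded sign.

from math import log, isclose, pi

def occupation_numbers(n):
    # prime factorization of n as {p: k_p}, the occupation numbers of |n>
    occ, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            occ[p] = occ.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        occ[n] = occ.get(n, 0) + 1
    return occ

def mobius(n):
    # +1 for bosonic states, -1 for fermionic states, 0 for states excluded by
    # the Pauli principle (a repeated primon, i.e. a square factor of n)
    occ = occupation_numbers(n)
    if any(k > 1 for k in occ.values()):
        return 0
    return -1 if len(occ) % 2 else 1

E = 1.0
n = 360                                     # 2^3 * 3^2 * 5
occ = occupation_numbers(n)
energy = E * sum(k * log(p) for p, k in occ.items())
print(occ, isclose(energy, E * log(n)))     # {2: 3, 3: 2, 5: 1} True

s = 2.0                                     # s = E / (k_B T)
Z = sum(m ** -s for m in range(1, 200000))  # truncated partition sum
print(Z, pi ** 2 / 6)                       # the truncated sum approaches zeta(2)

print([mobius(m) for m in range(1, 11)])    # [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]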
https://en.wikipedia.org/wiki/Primon_gas
Primordial germ cell (PGC) migration is the process of distribution of primordial germ cells throughout the embryo during embryogenesis . Primordial germ cells are among the first lineages that are established in development [ 1 ] and they are the precursors for gametes . [ 2 ] It is thought that the process of primordial germ cell migration itself has been conserved rather than the specific mechanisms within it, as chemoattraction and repulsion seem to have been borrowed from blood cells , neurones , and the mesoderm . [ 1 ] For most organisms, PGC migration starts in the posterior (back end) of the embryo . This process is in most cases distinct from PGC proliferation , with the exception of mammals in which both processes occur at the same time. In most mammals, specification occurs first, followed by migration , and then the proliferation process begins in the gonads . [ 1 ] PGCs interact with a wide range of cell types as they move from the epiblast to the gonads. [ 1 ] The PGCs move passively (without the need for energy) with underlying somatic cells , cross epithelial barriers, and respond to cues from their environment during active migration. [ 3 ] An epithelium must be crossed in many species during germ cell migration, and changes in adhesion are observed in PGCs during their exit from the endoderm and during the initiation of active migration. [ 3 ] Active migration takes place as PGCs move towards the developing somatic gonad. [ 3 ] Effective migration requires cell elongation and polarity . [ 1 ] Environmental guidance cues are required for the PGCs to initiate and sustain their mobility. [ 3 ] Specific molecular pathways are activated to give PGCs motility. [ 2 ] One of the functions of PGC migration is to allow them to reach the gonad, where they will go on to form sperm or oocytes . [ 1 ] However, an additional function that this migration is thought to serve is as quality control for PGCs. [ 1 ] Migration occurs early in gametogenesis, but PGCs could contain defects that could have a negative impact on later development - genetic mutations may be acquired because of proliferation in the blastocyst. [ 1 ] This is done via a negative selection process – PGCs that are unable to complete migration are removed and those that are able to correctly respond to migration cues are preferred. [ 1 ] PGCs that are able to migrate the fastest and reach the gonad are more likely to colonise it and give rise to future gametes. [ 2 ] The PGCs that go off route or don’t reach the gonad undergo programmed cell death ( apoptosis ). It is thought that every step after specification may function as a selective mechanism to ensure germ cells are of the highest quality. [ 1 ] The selective mechanisms may also be important for removing PGCs with abnormal epigenetic marks and in doing so preserving the germline . [ 1 ] In Drosophila , the whole migration process has been estimated to take 10 hours. [ 4 ] It begins with the formation of PGCs; from dividing nuclei becoming encircled by cell membranes , occurring at the posterior pole of the embryo . [ 5 ] Division of the nuclei stops once they have a cell membrane. [ 3 ] PGCs’ transcription process is also thought to be actively subdued once formed. [ 3 ] In Drosophila, PGC migration begins with passive movement along the dorsal side of the embryo, during gastrulation . 
[ 4 ] This is followed by more passive movement, due to the invagination of the posterior midgut primordium, which leads to the PGCs in the centre of the embryo , surrounded by epithelial cells that have been folded back on themselves. [ 4 ] There is then a split into two groups, left and right respectively, as they actively migrate laterally across the epithelium to exit the gut, facilitated by fibroblast growth factor (FGF) signalling and a repulsion-based mechanism using enzymes encoded by the Wunen gene . [ 3 ] [ 4 ] [ 6 ] This is followed by active movement dorsally along the basal side of the embryo. [ 4 ] Through directional migration - which requires multiple genes to work, one being the Columbus (clb) gene, which codes for Drosophila HMG CoA reductase - the germ cells move towards the somatic gonadal precursor cells and associate with them. [ 3 ] [ 6 ] These two associated cell types then migrate together anteriorly , until they coalesce into the embryonic gonad at the future site of the mature gonad. [ 4 ] In vertebrate development, the location where primordial germ cells are specified and the subsequent migratory paths that they take differs among species. [ 1 ] Chicken primordial germ cells are initially specified in the area pellucida (a one-cell thick layer of epiblast lying above the sub-germinal space). [ 1 ] [ 7 ] Following the formation of the primitive streak, the germ cells are carried to the germinal crescent region. [ 1 ] Unlike most model organisms where germ cell migration is predominantly via the gut epithelium, chicken PGCs migrate through the embryonic vascular epithelium. [ 3 ] Once they have exited the capillary vessels, the final stage of migration is along the dorsal mesentery to the developing gonad. [ 1 ] In mice , PGCs are specified in the proximal epiblast and subsequently migrate through the primitive streak towards the endoderm . [ 3 ] The PGCs then embed themselves within the epithelium of the hind-gut and from there will migrate towards the mesoderm via the dorsal mesentery. [ 1 ] [ 3 ] There is then bilateral migration of the PGCs to the developing gonadal ridges which follows a pattern very similar to that found in Drosophila. [ 1 ] Zebrafish PGCs are specified at four different locations within the early embryo via inheritance of germ plasm (a mixture of RNA and protein often associated with mitochondria). [ 8 ] [ 3 ] Germ cells from these four locations will then migrate dorsally after down-regulation of the rgs14a G-protein which regulates E-cadherin . [ 1 ] Down-regulation will result in reduced cell-cell adhesion which allows the germ cells to separate and begin the migration process. Migration of the PGCs then continues towards the developing somites 1-3. [ 9 ] This movement is coordinated by the expression of the chemo-attractant SDF1A (stromal derived factor 1a). [ 3 ] The final migration towards the developing gonad occurs 13 hours-post-fertilisation after which point the germ cells coalesce with the somatic gonadal precursor cells. [ 3 ] The entire process takes around 24 hours. [ 3 ] PGCs are described as the dedicated cells in early embryonic development , responsible for reaching the developing gonad. [ 3 ] [ 9 ] During their migration however, heterogeneity of cellular behaviour is observed due to change in cellular morphology from the time of specification to colonization. [ 3 ] By the end of PGC migration, around 5% of migratory cells remain outside the gonad and later undergo apoptosis. 
[ 10 ] Apoptosis during the migratory period occurs via an intrinsic pathway; nonetheless, the elimination of PGCs can be unsuccessful and result in tumours known as teratomas, derivatives of the three germ layers. [ 1 ] [ 11 ] Mutations in the Pten, CyclinD1, Dmrt1 and Dnd1 genes in mice have resulted in testicular teratomas, and variants of these genes are associated with the same tumours in humans. [ 1 ] Tumour formation (neoplasm) from foetal gonocytes suggests that they are incapable of maintaining proliferative arrest and resistance to further differentiation. [ 1 ] Even so, the origin of these teratomas could be distinct from PGCs that have failed to migrate. [ 12 ] Extragonadal germ cell tumours (GCTs) evolve due to a lesion along the midline of the body, prior to the migratory PGCs' movement through the hindgut and the medial mesentery to the gonads. [ 3 ] Therefore, human GCTs originate from early embryo stem cells and the germ line, and unlike most tumours they seldom have somatic mutations, but instead are driven by unsuccessful control of their developmental potential, resulting in their reprogramming. [ 3 ]
https://en.wikipedia.org/wiki/Primordial_germ_cell_migration
In geochemistry , geophysics and nuclear physics , primordial nuclides , also known as primordial isotopes , are nuclides found on Earth that have existed in their current form since before Earth was formed . Primordial nuclides were present in the interstellar medium from which the Solar System was formed, and were formed in, or after, the Big Bang , by nucleosynthesis in stars and supernovae followed by mass ejection, by cosmic ray spallation , and potentially from other processes. They are the stable nuclides plus the long-lived fraction of radionuclides surviving in the primordial solar nebula through planet accretion until the present; 286 such nuclides are known. All of the known 251 stable nuclides , plus another 35 nuclides that have half-lives long enough to have survived from the formation of the Earth, occur as primordial nuclides. These 35 primordial radionuclides represent isotopes of 28 separate elements . Cadmium , tellurium , xenon , neodymium , samarium , osmium , and uranium each have two primordial radioisotopes ( 113 Cd , 116 Cd ; 128 Te , 130 Te ; 124 Xe , 136 Xe ; 144 Nd , 150 Nd ; 147 Sm , 148 Sm ; 184 Os , 186 Os ; and 235 U , 238 U ). Because the age of the Earth is 4.58 × 10 9 years (4.58 billion years), the half-life of the given nuclides must be greater than about 10 8 years (100 million years) for practical considerations. For example, for a nuclide with half-life 6 × 10 7 years (60 million years), this means 77 half-lives have elapsed, meaning that for each mole ( 6.02 × 10 23 atoms ) of that nuclide being present at the formation of Earth, only 4 atoms remain today. The seven shortest-lived primordial nuclides (i.e., the nuclides with the shortest half-lives) to have been experimentally verified are 87 Rb ( 4.92 × 10 10 years ), 187 Re ( 4.12 × 10 10 years ), 176 Lu ( 3.70 × 10 10 years ), 232 Th ( 1.41 × 10 10 years ), 238 U ( 4.47 × 10 9 years ), 40 K ( 1.25 × 10 9 years ), and 235 U ( 7.04 × 10 8 years ). These are the seven nuclides with half-lives comparable to, or somewhat less than, the estimated age of the universe . ( 87 Rb, 187 Re, 176 Lu, and 232 Th have half-lives somewhat longer than the age of the universe.) For a complete list of the 35 known primordial radionuclides, including the next 28 with half-lives much longer than the age of the universe, see the complete list below. For practical purposes, nuclides with half-lives much longer than the age of the universe may be treated as if they were stable. 87 Rb, 187 Re, 176 Lu, 232 Th, and 238 U have half-lives long enough that their decay is limited over geological time scales; 40 K and 235 U have shorter half-lives and are hence severely depleted, but are still long-lived enough to persist significantly in nature. The longest-lived isotope not proven to be primordial [ 1 ] is 146 Sm , which has a half-life of 9.20 × 10 7 years , followed by 244 Pu ( 8.13 × 10 7 years ) and 92 Nb ( 3.47 × 10 7 years ). 244 Pu was reported to exist in nature as a primordial nuclide in 1971, [ 2 ] but this detection could not be confirmed by further studies in 2012 and 2022. [ 3 ] [ 4 ] Taking into account that all these nuclides must exist for at least 4.58 × 10 9 years , 146 Sm must survive 50 half-lives (and hence be reduced by 2 50 ≈ 1 × 10 15 ), 244 Pu must survive 57 (and be reduced by a factor of 2 57 ≈ 1 × 10 17 ), and 92 Nb must survive 130 (and be reduced by 2 130 ≈ 1 × 10 39 ). 
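The back-of-envelope arithmetic in this paragraph is easy to reproduce. The short Python sketch below (an illustration only, with the half-life values taken from the text) computes the surviving fraction of a nuclide after the age of the Earth and the corresponding reduction factors.

AVOGADRO = 6.022e23
AGE_OF_EARTH = 4.58e9          # years, as used in the text

def surviving_fraction(half_life_years, elapsed_years=AGE_OF_EARTH):
    # fraction of the original atoms left after `elapsed_years`
    return 0.5 ** (elapsed_years / half_life_years)

# A nuclide with a 60-million-year half-life: about 76-77 half-lives have
# elapsed, so of one mole only a handful of atoms would remain (the figure of
# about 4 atoms quoted above uses 77 whole half-lives).
print(AGE_OF_EARTH / 6e7)                  # ~76.3 half-lives
print(AVOGADRO * surviving_fraction(6e7))  # a few atoms

# Reduction factors for the longest-lived nuclides not confirmed as primordial;
# roughly 1e15, 1e17 and 1e39-1e40, matching the factors quoted above.
for name, half_life in [("Sm-146", 9.20e7), ("Pu-244", 8.13e7), ("Nb-92", 3.47e7)]:
    print(name, f"{1 / surviving_fraction(half_life):.1e}")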
Mathematically, considering the likely initial abundances of these nuclides, primordial 146 Sm and 244 Pu should persist somewhere within the Earth today, even if they are not identifiable in the relatively minor portion of the Earth's crust available to human assays, while 92 Nb and all shorter-lived nuclides should not. Nuclides such as 92 Nb that were present in the primordial solar nebula but have long since decayed away completely are termed extinct radionuclides if they have no other means of being regenerated. [ 5 ] As for 244 Pu, calculations suggest that as of 2022, sensitivity limits were about one order of magnitude away from detecting it as a primordial nuclide. [ 4 ] Because primordial chemical elements often consist of more than one primordial isotope, there are only 83 distinct primordial chemical elements . Of these, 80 have at least one observationally stable isotope and three additional primordial elements have only radioactive isotopes ( bismuth , thorium , and uranium). Some unstable isotopes which occur naturally (such as 14 C , 3 H , and 239 Pu ) are not primordial, as they must be constantly regenerated. This occurs by cosmic radiation (in the case of cosmogenic nuclides such as 14 C and 3 H ), or (rarely) by such processes as geonuclear transmutation ( neutron capture of uranium in the case of 237 Np and 239 Pu ). Other examples of common naturally occurring but non-primordial nuclides are isotopes of radon , polonium , and radium , which are all radiogenic nuclide daughters of uranium decay and are found in uranium ores. The stable argon isotope 40 Ar is actually more common as a radiogenic nuclide than as a primordial nuclide, forming almost 1% of the Earth's atmosphere , which is regenerated by the beta decay of the extremely long-lived radioactive primordial isotope 40 K , whose half-life is on the order of a billion years and thus has been generating argon since early in the Earth's existence. (Primordial argon was dominated by the alpha process nuclide 36 Ar, which is significantly rarer than 40 Ar on Earth.) A similar radiogenic series is derived from the long-lived radioactive primordial nuclide 232 Th . These nuclides are described as geogenic , meaning that they are decay or fission products of uranium or other actinides in subsurface rocks. [ 6 ] All such nuclides have shorter half-lives than their parent radioactive primordial nuclides. Some other geogenic nuclides do not occur in the decay chains of 232 Th, 235 U, or 238 U but can still fleetingly occur naturally as products of the spontaneous fission of one of these three long-lived nuclides, such as 126 Sn , which makes up about 10 −14 of all natural tin . [ 7 ] Another, 99 Tc , has also been detected. [ 8 ] There are five other long-lived fission products known. A primordial element is a chemical element with at least one primordial nuclide. There are 251 stable primordial nuclides and 35 radioactive primordial nuclides, but only 80 primordial stable elements —hydrogen through lead, atomic numbers 1 to 82, except for technetium (43) and promethium (61)—and three radioactive primordial elements —bismuth (83), thorium (90), and uranium (92). If plutonium (94) turns out to be primordial (specifically, the long-lived isotope 244 Pu), then it would be a fourth radioactive primordial, though practically speaking it would still be more convenient to produce synthetically. 
Bismuth's half-life is so long that it is often classed with the 80 stable elements instead, since its radioactivity is not a cause for concern. The number of elements is smaller than the number of nuclides, because many of the primordial elements are represented by multiple isotopes. See chemical element for more information. As noted, these number about 251. For a list, see the article list of elements by stability of isotopes. For a complete list noting which of the "stable" 251 nuclides may be in some respect unstable, see list of nuclides and stable nuclide. These questions do not affect whether a nuclide is primordial, since all "nearly stable" nuclides, with half-lives longer than the age of the universe, are also primordial. Though it is estimated that about 35 primordial nuclides are radioactive (list below), it is very hard to determine the exact total number of radioactive primordials, because the total number of stable nuclides is uncertain. There are many extremely long-lived nuclides whose half-lives are still unknown; in fact, all nuclides heavier than dysprosium-164 are theoretically radioactive. For example, it is predicted theoretically that all isotopes of tungsten, including those indicated by even the most modern empirical methods to be stable, must be radioactive and can alpha decay, but as of 2013 this had only been measured experimentally for 180 W. [ 9 ] Likewise, all four primordial isotopes of lead are expected to decay to mercury, but the predicted half-lives are so long (some exceeding 10^100 years) that such decays could hardly be observed in the near future. Nevertheless, the number of nuclides with half-lives so long that they cannot be measured with present instruments—and are considered from this viewpoint to be stable nuclides—is limited. Even when a "stable" nuclide is found to be radioactive, it merely moves from the stable to the unstable list of primordials, and the total number of primordial nuclides remains unchanged. These nuclides may be considered stable for all purposes outside specialized research. These 35 primordial radionuclides are isotopes of 28 elements (cadmium, neodymium, osmium, samarium, tellurium, uranium, and xenon each have two primordial radioisotopes). These nuclides are listed in order of decreasing stability. Many of them are so nearly stable that they compete for abundance with stable isotopes of their respective elements. For three elements (indium, tellurium, and rhenium) a very long-lived radioactive primordial nuclide is more abundant than a stable nuclide. The longest-lived radionuclide known, 128 Te, has a half-life of 2.2 × 10^24 years, about 1.6 × 10^14 times the age of the Universe. Only four of these 35 nuclides have half-lives shorter than, or equal to, the age of the universe. Most of the others have half-lives much longer. The shortest-lived primordial, 235 U, has a half-life of 703.8 million years, about 1/6 the age of the Earth and Solar System. Many of these nuclides decay by double beta decay, though some, like 209 Bi, decay by other means such as alpha decay. At the end of the list are two more nuclides: 146 Sm and 244 Pu. They have not been confirmed as primordial, but their half-lives are long enough that minute quantities should persist today.
https://en.wikipedia.org/wiki/Primordial_nuclide
Primordial soup , also known as prebiotic soup and Haldane soup , is the hypothetical set of conditions present on the Earth around 3.7 to 4.0 billion years ago. It is an aspect of the heterotrophic theory (also known as the Oparin–Haldane hypothesis ) concerning the origin of life , first proposed by Alexander Oparin in 1924, and J. B. S. Haldane in 1929. [ 1 ] [ 2 ] As formulated by Oparin, in the primitive Earth's surface layers, carbon , hydrogen , water vapour, and ammonia reacted to form the first organic compounds . The concept of a primordial soup gained credence in 1953 when the " Miller–Urey experiment " used a highly reduced mixture of gases— methane , ammonia and hydrogen—to form basic organic monomers, such as amino acids . [ 3 ] The notion that living beings originated from inanimate materials comes from the Ancient Greeks—the theory known as spontaneous generation . Aristotle in the 4th century BCE gave a proper explanation, writing: So with animals, some spring from parent animals according to their kind, whilst others grow spontaneously and not from kindred stock; and of these instances of spontaneous generation some come from putrefying earth or vegetable matter, as is the case with a number of insects, while others are spontaneously generated in the inside of animals out of the secretions of their several organs. [ 4 ] Aristotle also states that it is not only that animals originate from other similar animals, but also that living things do arise and always have arisen from lifeless matter. His theory remained the dominant idea on origin of life (outside that of deity as a causal agent) from the ancient philosophers to the Renaissance thinkers in various forms. [ 5 ] With the birth of modern science, experimental refutations emerged. Italian physician Francesco Redi demonstrated in 1668 that maggots developed from rotten meat only in a jar where flies could enter, but not in a closed-lid jar. He concluded that: omne vivum ex vivo (All life comes from life). [ 6 ] The experiment of French chemist Louis Pasteur in 1859 is regarded as the death blow to spontaneous generation. He experimentally showed that organisms (microbes) can not grow in sterilised water, unless it is exposed to air. The experiment won him the Alhumbert Prize in 1862 from the French Academy of Sciences , and he concluded: "Never will the doctrine of spontaneous generation recover from the mortal blow of this simple experiment." [ 7 ] Evolutionary biologists believed that a kind of spontaneous generation, but different from the simple Aristotelian doctrine, must have worked for the emergence of life. French biologist Jean-Baptiste de Lamarck had speculated that the first life form started from non-living materials. "Nature, by means of heat, light, electricity and moisture", he wrote in 1809 in Philosophie Zoologique ( The Philosophy of Zoology ), "forms direct or spontaneous generation at that extremity of each kingdom of living bodies, where the simplest of these bodies are found". [ 8 ] When English naturalist Charles Darwin introduced the theory of natural selection in his 1859 book On the Origin of Species , his supporters, such as the German zoologist Ernst Haeckel , criticised him for not using his theory to explain the origin of life. Haeckel wrote in 1862: "The chief defect of the Darwinian theory is that it throws no light on the origin of the primitive organism—probably a simple cell—from which all the others have descended. 
When Darwin assumes a special creative act for this first species, he is not consistent, and, I think, not quite sincere." [ 9 ] Although Darwin did not speak explicitly about the origin of life in On the Origin of Species , he did mention a "warm little pond" in a letter to Joseph Dalton Hooker dated February 1, 1871: [ 10 ] It is often said that all the conditions for the first production of a living being are now present, which could ever have been present. But if (and oh what a big if) we could conceive in some warm little pond with all sort of ammonia and phosphoric salts,—light, heat, electricity present, that a protein compound was chemically formed, ready to undergo still more complex changes, at the present such matter would be instantly devoured, or absorbed, which would not have been the case before living creatures were formed [...]. A coherent scientific argument was introduced by Soviet biochemist Alexander Oparin in 1924. According to Oparin, in the primitive Earth's surface, carbon , hydrogen , water vapour, and ammonia reacted to form the first organic compounds. Unbeknownst to Oparin, whose writing was circulated only in Russian, an English scientist J. B. S. Haldane independently arrived at a similar conclusion in 1929. [ 11 ] [ 12 ] It was Haldane who first used the term "soup" to describe the accumulation of organic material and water in the primitive Earth [ 2 ] [ 8 ] When ultra-violet light acts on a mixture of water, carbon dioxide , and ammonia, a vast variety of organic substances are made, including sugars and apparently some of the materials from which proteins are built up. [...] before the origin of life they must have accumulated till the primitive oceans reached the consistency of hot dilute soup. According to the theory, organic compounds essential for life forms were synthesized in the primitive Earth under prebiotic conditions. The mixture of inorganic and organic compounds with water on the primitive Earth became the prebiotic or primordial soup. There, life originated and the first forms of life were able to use the organic molecules to survive and reproduce. Today the theory is variously known as the heterotrophic theory, heterotrophic origin of life theory, or the Oparin-Haldane hypothesis. [ 13 ] Biochemist Robert Shapiro has summarized the basic points of the theory in its "mature form" as follows: [ 14 ] Alexander Oparin first postulated his theory in Russia in 1924 in a small pamphlet titled Proiskhozhdenie Zhizny ( The Origin of Life ). [ 15 ] According to Oparin, the primitive Earth's surface had a thick red-hot liquid, composed of heavy elements such as carbon (in the form of iron carbide ). This nucleus was surrounded by the lightest elements, i.e. gases, such as hydrogen. In the presence of water vapour, carbides reacted with hydrogen to form hydrocarbons . Such hydrocarbons were the first organic molecules. These further combined with oxygen and ammonia to produce hydroxy- and amino-derivatives, such as carbohydrates and proteins. These molecules accumulated on the ocean's surface, becoming gel-like substances and growing in size. They gave rise to primitive organisms (cells), which he called coacervates . [ 8 ] In his original theory, Oparin considered oxygen as one of the primordial gases; thus the primordial atmosphere was an oxidising one. 
However, when he elaborated his theory in 1936 (in a book by the same title, and translated into English in 1938), [ 16 ] he modified the chemical composition of the primordial environment as strictly reducing, consisting of methane, ammonia, free hydrogen and water vapour—excluding oxygen. [ 13 ] In his 1936 work, impregnated by a Darwinian thought that involved a slow and gradual evolution from the simple to the complex, Oparin proposed a heterotrophic origin, result of a long process of chemical and pre-biological evolution, where the first forms of life should have been microorganisms dependent on the molecules and organic substances present in their external environment. [ 8 ] That external environment was the primordial soup. The idea of a heterotrophic origin was based, in part, on the universality of fermentative reactions, which, according to Oparin, should have first appeared in evolution due to its simplicity. This was opposed to the idea, widely accepted at that time, that the first organisms emerged endowed with an autotrophic metabolism, which included photosynthetic pigments , enzymes and the ability to synthesize organic compounds from CO 2 and H 2 O; for Oparin it was impossible to reconcile the original photosynthetic organisms with the ideas of Darwinian evolution. From the detailed analysis of the geochemical and astronomical data known at that date, Oparin also proposed a primitive atmosphere devoid of O 2 and composed of CH 4 , NH 3 and H 2 O; under these conditions it was pointed out that the origin of life had been preceded by a period of abiotic synthesis and subsequent accumulation of various organic compounds in the seas of primitive Earth. [ 11 ] This accumulation resulted in the formation of a primordial broth containing a wide variety of molecules. There, according to Oparin, a particular type of colloid, the coacervates, were formed due to the conglomeration of organic molecules and other polymers with positive and negative charges. Oparin suggested that the first living beings had been preceded by pre-cellular structures similar to those coacervates, whose gradual evolution gave rise to the appearance of the first organisms. [ 11 ] Like the coacervates, several of Oparin's original ideas have been reformulated and replaced; this includes, for example, the reducing character of the atmosphere on primitive Earth, the coacervates as a pre-cellular model and the primitive nature of glycolysis. In the same way, we now understand that the gradual processes are not necessarily slow, and we even know, thanks to the fossil record, that the origin and early evolution of life occurred in short geologic time lapses. However, the general approach of Oparin's theory had great implications for biology, since his work achieved the transformation of the study of the origin of life from a purely speculative field to a structured and broad research program. [ 8 ] Thus, since the second half of the twentieth century, Oparin's theory of the origin and early evolution of life has undergone a restructuring that accommodates the experimental findings of molecular biology, as well as the theoretical contributions of evolutionary biology. A point of convergence between these two branches of biology and that has been perfectly incorporated into the heterotrophic origin theory is found in the RNA world hypothesis . This links to the Soda Ocean Hypothesis, characterizing the primitive ocean with a higher carbonate mineral supersaturation. 
[ 17 ] Soda lakes are considered environments that conserve and/or mimic ancient life conditions [ 18 ] and "a recreated model of late Precambrian ocean chemistry" [ 19 ] — that is, the "soda lake" environment that prepared the great explosion of life during the Cambrian. J.B.S. Haldane independently postulated his primordial soup theory in 1929 in an eight-page article, "The origin of life", in The Rationalist Annual. [ 8 ] According to Haldane, the primitive Earth's atmosphere was essentially reducing, with little or no oxygen. Ultraviolet rays from the Sun induced reactions in a mixture of water, carbon dioxide, and ammonia. Organic substances such as sugars and protein components (amino acids) were synthesised. These molecules "accumulated till the primitive oceans reached the consistency of hot dilute soup." The first reproducing things were created from this soup. [ 20 ] As to priority over the theory, Haldane accepted that Oparin came first, saying, "I have very little doubt that Professor Oparin has the priority over me." [ 21 ] Though Oparin and Haldane presented a convincing theory for the origin of life, there are some natural phenomena that their work fails to explain. Based on the heterotrophic theory, it is assumed that at the time life arose, the atmosphere was strongly reducing. [ 22 ] [ 23 ] However, evidence suggests that the atmosphere was likely not nearly reducing enough to support this. [ 24 ] The availability of highly reduced compounds such as NH 3 and CH 4 was limited; there were likely not enough of them to support heterotrophic redox chemistry and life. [ 25 ] Another complication for the heterotrophic theory arises from the selective chirality of biological molecules. Chirality refers to the lack of mirror symmetry in biological molecules and to the orientation they prefer. For instance, amino acids exist predominantly in the L configuration and sugars prefer the D configuration. Biological molecules are highly specific in which enantiomer they prefer. [ 26 ] Because of this, many scientists hold that a correct theory of the origin of life should explain this selective chirality. [ 25 ] The heterotrophic theory fails to do this. [ 27 ] The heterotrophic theory is highly specific and includes details about the conditions of early metabolism. [ 28 ] However, in doing this, it is unable to provide a basis for the evolution of, and the distinctions among, bacteria, archaea, and eucarya. How did organisms that utilize the same type of metabolism become so highly differentiated? [ 29 ] This is another question left unanswered if the heterotrophic theory is true. Finally, as the name implies, the heterotrophic theory holds that early life on Earth consisted entirely of heterotrophs. A condition of heterotrophic metabolism is that the energetic substrate is not produced by the same organism that consumes it. Because of this, heterotrophy works well in tandem with other species that replenish the depleted substrate. [ 30 ] However, if all early life was heterotrophic, there would be no way to regenerate the metabolite needed for energy production. [ 31 ] The heterotrophic theory does not resolve this key difficulty. Though the heterotrophic theory is compelling, and could describe elements of early life on Earth, it is likely not the whole picture. It must be built upon and developed further to fully explain the niches of early metabolism. One of the most important pieces of experimental support for the "soup" theory came in 1953.
A graduate student, Stanley Miller , and his professor, Harold Urey , performed an experiment that demonstrated how organic molecules could have spontaneously formed from inorganic precursors, under conditions like those posited by the Oparin–Haldane hypothesis. The now-famous " Miller–Urey experiment " used a highly reduced mixture of gases—methane, ammonia and hydrogen—to form basic organic monomers, such as amino acids . [ 3 ] This provided direct experimental support for the second point of the "soup" theory, and it is one of the remaining two points of the theory that much of the debate now centers. Apart from the Miller–Urey experiment, the next most important step in research on prebiotic organic synthesis was the demonstration by Joan Oró that the nucleic acid purine base, adenine, was formed by heating aqueous ammonium cyanide solutions. [ 32 ] In support of abiogenesis in eutectic ice, more recent work demonstrated the formation of s- triazines (alternative nucleobases ), pyrimidines (including cytosine and uracil), and adenine from urea solutions subjected to freeze-thaw cycles under a reductive atmosphere (with spark discharges as an energy source). [ 33 ] The evolution of living systems by natural selection that presumably emerged in the primordial soup, and certain nonliving physical order-generating systems, were proposed to obey a common fundamental principle that was termed the Darwinian dynamic. [ 34 ] The basic conditions necessary for natural selection to operate as conceived by Darwin are variation of type, heritability and competition for limited resources. These conditions can apply to short replicating RNA molecules that were presumably present in the primordial soup, and such RNA molecules have been proposed to have preceded the emergence of more complex life (see RNA world ). [ 35 ] The basic processes of natural selection applicable to short replicating RNA molecules were shown to have the same form and content as equations that govern the emergence of macroscopic order in nonliving systems maintained far from thermodynamic equilibrium. [ 34 ] However, currently, the extent to which Darwinian principles apply to the presumed prebiotic and protocellular phases of life, as well as to non-biological systems, remains an unresolved issue in efforts to understand the emergence of life . [ 36 ] [ 37 ]
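The three conditions listed above for natural selection, variation, heritability, and competition for limited resources, can be made concrete with a deliberately minimal simulation. The following sketch is a toy model invented purely for illustration; it does not represent any specific prebiotic chemistry or the replicator models cited in the text. Sequences copy themselves with small random errors in their replication rate, and a fixed population cap supplies the competition.

```python
import random

random.seed(0)

# Toy illustration of the three conditions named above: variation (copying
# errors), heritability (offspring inherit the parent's replication rate),
# and competition for limited resources (a fixed population cap).
population = [{"rate": 1.0} for _ in range(10)]
POP_CAP = 500

for generation in range(40):
    offspring = []
    for mol in population:
        n_copies = int(mol["rate"]) + (random.random() < mol["rate"] % 1)
        for _ in range(n_copies):
            child_rate = max(0.1, mol["rate"] + random.gauss(0, 0.05))  # small copying error
            offspring.append({"rate": child_rate})
    population = population + offspring
    if len(population) > POP_CAP:            # limited resources: random culling to the cap
        population = random.sample(population, POP_CAP)

mean_rate = sum(m["rate"] for m in population) / len(population)
print(f"mean replication rate after 40 generations: {mean_rate:.2f}")
# Faster replicators leave more copies, so the mean rate tends to climb above its initial 1.0.
```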
https://en.wikipedia.org/wiki/Primordial_soup
In molecular biology, a primosome is a protein complex responsible for creating RNA primers on single-stranded DNA during DNA replication . The primosome consists of seven proteins: DnaG primase , DnaB helicase , DnaC helicase assistant, DnaT , PriA , PriB , and PriC . At each replication fork , the primosome is utilized once on the leading strand of DNA and repeatedly, initiating each Okazaki fragment , on the lagging DNA strand. Initially the complex formed by PriA, PriB, and PriC binds to DNA. Then the DnaB-DnaC helicase complex attaches along with DnaT. This structure is referred to as the pre-primosome. Finally, DnaG binds to the pre-primosome, forming the complete primosome. The primosome attaches 1-10 RNA nucleotides to the single-stranded DNA, creating a DNA-RNA hybrid. This sequence of RNA is used as a primer to initiate DNA polymerase III . The RNA bases are ultimately replaced with DNA bases by RNase H nuclease (eukaryotes) or DNA polymerase I nuclease (prokaryotes). DNA ligase then acts to join the two ends together. Assembly of the Escherichia coli primosome requires six proteins, PriA, PriB, PriC, DnaB, DnaC, and DnaT, acting at a primosome assembly site (pas) on SSB-coated single-stranded (ss) DNA. Assembly is initiated by interactions of PriA and PriB with ssDNA and the pas. PriC, DnaB, DnaC, and DnaT then act on the PriA-PriB-DNA complex to yield the primosome. [ 1 ] Primosomes are nucleoprotein assemblies that activate DNA replication forks. Their primary role is to recruit the replicative helicase onto single-stranded DNA. The "replication restart" primosome, defined in Escherichia coli , is involved in the reactivation of arrested replication forks. Binding of the PriA protein to forked DNA triggers its assembly. PriA is conserved in bacteria, but its primosomal partners are not. In Bacillus subtilis, genetic analysis has revealed three primosomal proteins, DnaB , DnaD , and DnaI , that have no obvious homologues in E. coli . They are involved in primosome function both at arrested replication forks and at the chromosomal origin. Biochemical analysis of the DnaB and DnaD proteins has clarified their role in primosome assembly. They are both multimeric and bind individually to DNA. Furthermore, DnaD stimulates DnaB binding activities. DnaD alone and the DnaD/DnaB pair interact specifically with PriA of B. subtilis on several DNA substrates. This suggests that the nucleoprotein assembly is sequential, in the order PriA, DnaD, DnaB. The preferred DNA substrate mimics an arrested DNA replication fork with an unreplicated lagging strand, structurally identical to a product of recombinational repair of a stalled replication fork.
https://en.wikipedia.org/wiki/Primosome
Prim–Read theory , or Prim–Read defense , was an important development in game theory that led to radical changes in the United States ' views on the value of anti-ballistic missile (ABM) systems. The theory assigns a certain cost to deploying defensive missiles and suggests a way to maximize their value in terms of the amount of damage they could reduce. By comparing the cost of various deployments, one can determine the relative amount of money needed to provide a defense against a certain number of ICBMs . The theory was first introduced in the late 1950s and might have been lost to history had it not been picked up during the debate on the Nike-X ABM. Nike-X called for the deployment of a heavy defensive system around major US cities with the intent of seriously blunting the effect of any Soviet strike. A number of operations researchers , notably US Air Force General Glenn Kent, used Prim–Read to conclusively demonstrate that the cost of reducing damage back to a given level was always more than the cost of causing additional damage by building more ICBMs. The outcome of these studies suggested that any US deployment of an ABM system would result in the USSR building a small number of additional missiles to defeat it. Assuming the Soviets would come to the same conclusion, Robert McNamara became highly critical of any large-scale ABM system, and began efforts that would ultimately lead to the ABM treaty in 1972. The underlying concept became known as the cost-exchange ratio . The US Army began studying the anti-ballistic missile in a serious fashion in 1955. Working with Bell Labs , who had delivered the successful Nike and Nike B surface-to-air missiles (SAM), they began by considering what was essentially a direct update of the Nike concepts to the ABM mission. Bell returned a report suggesting that minor upgrades to the Hercules missile, along with much more powerful radars and computers, would do the trick. This was initially known as Nike II, but renamed Nike Zeus in 1956. [ 1 ] Early in the Zeus effort the US Air Force attempted to derail the project by pointing out that if Zeus cost the same as an ICBM, and the Soviets were building them as quickly as Nikita Khrushchev claimed, then they could simply build a few more to "soak up" any Zeus' the Army deployed. But in fact, it seemed the ICBMs were actually cheaper than Zeus, perhaps significantly, which meant the US would lose the resulting arms race . This basic concept became known as the cost exchange ratio . [ 2 ] President Eisenhower 's Secretary of Defense Neil McElroy identified the Air Force complaints as an example of sour grapes , having lost funding for their own ABM efforts, Project Wizard , in favor of Zeus. But the math appeared to be correct, so he asked for a second opinion from the President's Science Advisory Committee (PSAC). They largely agreed with the Air Force's take, and then added several additional concerns of their own. [ 2 ] By the late 1950s, several new problems became evident. One was that the newly discovered nuclear blackout effect would allow an enemy to blanket an area hundreds of miles wide with a radar-opaque layer for the cost of one warhead. This would render Zeus blind to anything above the layer; following warheads would not become visible until too close to the base to attack. Another issue was the addition of decoys to the ICBMs, which presented radar targets that looked the same as the warhead. 
These cleared away due to drag as they reentered the atmosphere, but once again, this occurred at too low an altitude to attack. [ 3 ] At the suggestion of ARPA , the Army responded by redesigning the system as Nike-X . Nike-X used a short-range but extremely high-speed interceptor known as Sprint that was optimized for interceptions under 60 kilometres (200,000 ft) and combined that with an extremely high-speed radar and computer system. The plan was to wait until the warhead cleared any blackout and the decoys were slowing, allowing the radar to pick out the warhead and attack it with the Sprint. The entire engagement would last only a few seconds. [ 4 ] The Army produced a study that considered a real deployment scenario and then estimated the number of lives it would save. They started by assuming that the Soviets would want to launch two warheads at every target, to ensure at least one would go off. In order to confuse the defense, they would add nine credible decoys to each ICBM. This would present each base with 20 radar targets in total. For the same redundancy reasons, they would launch two Sprint missiles at each one, so a total of 40 Sprints would be needed to protect every target. Given the relative costs of the Sprint and an ICBM, the Army demonstrated that the Sprint system would save a considerable number of civilian lives for less than the cost of an ICBM. [ 5 ] When this was presented as a part of a PSAC study of the Nike-X system, one member of the group immediately noted a problem. Air Force Brigadier General Glenn Kent had been taught to always consider who had the last move in any plan, and in this case, he concluded that the Soviets had that advantage. Facing a Nike-X deployment, they could change their ICBM targets without the US having any idea what those were. [ 6 ] For instance, one response would be to ignore the defended targets entirely, and use the missiles to attack the next cities on their target list. Since those targets would be smaller, they could be assigned one missile each. Although this would increase the number of targets that were not destroyed due to failures, the total number of targets hit would be greater. [ 6 ] Another solution would be to ignore targets further down the list and reassign those warheads to ones further up the list, ensuring the defenses at those targets would be overwhelmed through sheer numbers. Although the targets further down the list would no longer be attacked, they had smaller populations so their value was less. [ 6 ] In either case, the attacker could once again cause enormous damage without spending a single extra dollar on the attack . Worse, the US has no idea which strategy the Soviets picked, and therefore have no idea how to respond. The question, then, was how does one plan a defensive layout when there is no clear answer what the enemy's response will be? [ 6 ] When Kent pointed this issue out to Director of Defense Research and Engineering (DDR&E) Harold Brown , Brown immediately grasped the problem and recalled the Army group to explain why their analysis was essentially useless. [ 6 ] He then tasked Kent with coming up with a way to analyze the problem that would not be dependent on knowing the Soviet attack allocations. [ 7 ] Kent learned that two researchers at Bell Labs had considered this exact issue in a 1957 paper. Robert Prim and Thornton Read solved the problem by developing a simple mathematical formula that maximized the damage reduction in terms of any given expenditure on the defense. 
Prim visited Kent at the Pentagon to explain the idea, which was extremely simple in conceptual terms. [ 8 ] The basic idea was a reflection of the targeting priorities the Soviets would use. Against "soft targets" like cities, a single warhead will effectively destroy it, so launching additional warheads at the same target will not cause a corresponding doubling of damage inflicted. However, the missiles have a certain probability of successfully reaching the target and detonating, the probability of kill , or P k . If the P k is 50%, for instance, the Soviets will want to launch more than one ICBM at a target to increase the chances of destroying it. Two warheads improve this to 75%, and three to 87.5%, but in that case, if the first one does work the following two are wasted. They have to balance the desire to guarantee destruction of certain targets with the knowledge that other targets would then be skipped entirely. [ 9 ] The Prim–Read concept used the same basic logic but applied it to the chance of successfully destroying an enemy missile. For instance, if a city is expecting to be attacked by two warheads, then its chance of being destroyed is 75%. Assigning a single interceptor to defend that city means one of the two warheads will be shot down 50% of the time. This means the chance of not getting hit is now 50%, a 25% improvement. Critically, adding a second interceptor means a 50% of hitting either, a 75% chance of hitting both. The chance you hit the one that would go off is 50-50, so now the chance the target does not get hit is 62.5%. Thus adding the second interceptor only improves the survival rate by 12.5%. [ 9 ] The key point here is that instead of applying the second interceptor to improve the survival rate of that target 12.5%, it might be better to instead put that interceptor over some other target that formerly had no protection, and improve its survival rate by 50%. Of course, this requires one to put a value of some sort on the targets so one can calculate if 50% of one target is worth more than 12.5% of another. [ 9 ] Consider a real-world example in which New York is considered to have twice the "value" of Los Angeles. In this case, a naive arrangement would be to assign twice as many interceptors to New York. However, due to the P k considerations, this does not provide twice the defensive capability, but a fractional addition. In the case of large numbers of interceptors and enemy warheads, additional missiles may provide only a tiny benefit. In contrast, assigning those missiles to Los Angeles may dramatically improve its survival if it otherwise had only a few. Improving Los Angeles' survival by 25% is likely "better" than improving New York's by 12.5%. [ 8 ] The paper goes on to explain how to arrange the overall deployment. Each target is assigned a worth , W, and the price of the defense assigned to protect it is P. The ratio of W to P is λ. If one were to assign a single missile to all potential targets, then the list of resulting λ values would mirror the W values. [ 8 ] If λ is less than 1, that means the cost of defense is more than the worth of the target. [ 8 ] In this case, that target's missile is much better off being assigned to another target, the one with the highest λ. When that happens, the λ of that target drops because more P is being spent on it. As a result, another target becomes the highest on the list of λ. 
One then continues this process of reassigning missiles until the resulting list of targets that are protected have the same λ, or as close to that as possible. λ, in effect, represents the damage percent you are willing to accept. [ 8 ] One can make real-world calculations by selecting the population of the urban area to be a proxy for W. In this case New York has the highest W and initial λ, and it is naturally assigned the largest number of interceptors. One might be inclined to move a missile from Los Angeles to New York to offer higher protection, but the brilliance of Prim–Read is that demonstrates that while doing so would improve New York's survival rate a tiny bit, it would lower Los Angeles' even more. [ 8 ] One outcome of the Prim–Read deployment is that it is based entirely on the number of ABMs constructed and the total worth of the targets they protect. It does not matter what the Soviet response to the deployment is; if they choose to reduce the number of missiles assigned to one target to ensure they penetrate the defenses of another, that will always increase the overall survival rate of the defenders. It is possible for the Soviets to overwhelm the entire system, but even in that case the Prim–Read deployment will reduce whatever damage will be caused by the maximal amount possible. [ 8 ] With Prim–Read, one can construct a mathematically maximal defense for any given expenditure. Because that defense is probabilistic, it means that it assumes some damage even when the defense is overwhelming, and at the same time it means there will be some reduction in damage even if the attack is overwhelming. The question then becomes whether or not the amount of damage reduction desired can be achieved for a reasonable total expenditure, given various estimates of the Soviet fleet. [ 10 ] Kent began developing Prim–Read deployments of various numbers of ABMs to determine their effectiveness against various numbers of ICBMs. The results were clear. Limited amounts of protection could be offered with small expenditures even if the Soviets built huge numbers of ICBMs; by pure chance, some of the targets would not be hit and ABMs would improve those numbers. The opposite was also true; if the US built an enormous fleet of ABMs, some enemy warheads would still hit their targets purely by chance. [ 10 ] If one wanted to save 90% of the US population, one required huge numbers of ABMs, and the relative cost of the defense compared to the offense was about 1.7 times. In other words, if the Soviets spent $10 billion producing ICBMs in a given year, the US would have to spend $17 billion on ABMs. However, when they found the official exchange rate between the US Dollar and Ruble was a fiction, and the actual value was very different, the ratio inflated to 6-to-1. In this sort of regime, the USSR could easily afford to build enough missiles to overwhelm any defense the US could afford. [ 10 ] Kent presented his results to Brown, who began to have serious questions about any sort of active defense. [ 11 ] While this had no immediate effect on Nike-X planning, this was all taking place while another group was forming to consider the entire issue of the nuclear age under the direction of Frank Trinkl, part of Alain Enthoven 's group at RAND . Kent was put into the group and noted that of the twenty items they had been tasked to consider, eight of those were purely defensive and he suggested grouping them together under the topic of damage limitation. 
Trinkl disagreed, and when Kent continued to pester him about it, Trinkl fired him from the group. [ 11 ] Brown then tasked Kent with going ahead and considering the eight issues in his own report, and this time assigned members of the Joint Chiefs of Staff to help. The report, on the topic of "damage limitation", immediately caught the eye of Robert McNamara who "bought it lock, stock, and barrel." McNamara put his feelings on the matter succinctly, stating to Kent that "At 70 percent surviving, you say 70 percent surviving, General, that sounds pretty good. Do you know what our detractors will say? 'Only 60 million dead.'" [ 12 ] From that point on, McNamara was against any sort of large scale Nike-X deployment, and the system was ultimately canceled. The basic concept, which became known as the cost-exchange ratio , ultimately ended any large-scale ABM deployment in the United States, and led directly to the 1972 ABM Treaty . [ 12 ] This did not end well for Kent, who was blamed for this situation, with one detractor stating "There’s the man that was the genesis of the ABM Treaty, the worst of our greatest strategic disasters, the ABM Treaty of 1972." [ 12 ]
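The allocation rule described earlier, giving each additional interceptor to whichever target currently offers the best improvement in worth protected until the λ values are as equal as possible, is in essence a greedy marginal-return computation. The sketch below is an illustrative reconstruction rather than the original Prim–Read formulation: every city faces a single warhead that detonates with probability 0.5 unless intercepted, every interceptor kills its target with probability 0.5 and costs the same, and the city "worth" figures are hypothetical population proxies.

```python
def worth_saved(worth, interceptors, p_work=0.5, p_kill=0.5):
    """Expected worth preserved for a target attacked by one warhead that
    detonates with probability p_work unless one of `interceptors`
    independent interceptors (each with kill probability p_kill) stops it."""
    p_destroyed = p_work * (1 - p_kill) ** interceptors
    return worth * (1 - p_destroyed)

def prim_read_allocation(worths, budget):
    """Hand out `budget` identical interceptors one at a time, always to the
    target whose next interceptor buys the largest gain in expected worth
    saved; this greedy rule drives the lambda values toward equality."""
    alloc = [0] * len(worths)
    for _ in range(budget):
        gains = [worth_saved(w, a + 1) - worth_saved(w, a) for w, a in zip(worths, alloc)]
        alloc[gains.index(max(gains))] += 1
    return alloc

# Hypothetical city worths (population proxies), not historical figures.
cities = {"New York": 20, "Los Angeles": 10, "Chicago": 8, "Houston": 6}
allocation = prim_read_allocation(list(cities.values()), budget=12)
print(dict(zip(cities, allocation)))
# With these toy numbers: {'New York': 4, 'Los Angeles': 3, 'Chicago': 3, 'Houston': 2}
```

With these toy numbers New York receives four interceptors and Los Angeles three, not a 2:1 split, because stacking a fourth interceptor on New York buys less expected worth than adding one to a smaller city with fewer interceptors.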
https://en.wikipedia.org/wiki/Prim–Read_theory
The Prince Albert I Medal was established by Prince Rainier of Monaco [ 1 ] in partnership with the International Association for the Physical Sciences of the Oceans . The medal was named for Prince Albert I and is given for significant work in the physical and chemical sciences of the oceans. The medal is awarded biannually by IAPSO at its Assemblies.
https://en.wikipedia.org/wiki/Prince_Albert_I_Medal
Prince Earl Rouse, Jr. (October 12, 1917 – August 10, 2003) [ 1 ] [ 2 ] was an American physical chemist . He received his PhD from the University of Illinois in 1941. Rouse is most famous for a 1953 publication [ 3 ] in which he introduced what is now known as the Rouse model of polymer dynamics. He was awarded the Bingham Medal in 1966 by the Society of Rheology . [ 4 ]
https://en.wikipedia.org/wiki/Prince_E._Rouse
Princes Gate Spring Water is a brand of Welsh mineral water distributed across the United Kingdom. It is owned by Nestlé . The water is sourced from a spring near the hamlet of Princes Gate near Narberth in Pembrokeshire , southwest Wales . Princes Gate was launched in August 1991, when David and Glyn Jones began bottling spring water in a micro plant on their farm in Princes Gate, Pembrokeshire, in a bid to diversify their business in the face of government quotas on milk production . [ 1 ] The initial machinery was capable of filling just 36 bottles a day; this was expanded until most of the outbuildings on the farm were being used. A bespoke bottling plant opened in 2004, and a high-speed bottling line in 2007, capable of filling 22,000 bottles an hour. A second production line capable of bottling a further 8,000 per hour was opened to meet the increasing requirements. [ 1 ] In 2013 the company produced 60 million bottles of water a year. [ 2 ] In 2013 Princes Gate employed 60 staff. [ 3 ] The company has taken steps to reduce the environmental impact of its operation, including building facilities to manufacture their own bottles on site to reduce transportation costs and installing solar panels on the roof of their factory. [ 4 ] One Enercon E53 wind turbine was built on the site near Ludchurch to supply power to their bottling plant. In 2018, Nestlé purchased a majority stake in the company, which is the UK's eighth-largest producer of bottled water. [ 5 ]
https://en.wikipedia.org/wiki/Princes_Gate_Spring_Water
The Princeton Sound Lab is a research laboratory in the Department of Computer Science at Princeton University , in collaboration with the Department of Music. The Sound Lab conducts research in a variety of areas in computer music , including physical modeling , audio analysis , audio synthesis , programming languages for audio and multimedia, interactive controller design, psychoacoustics , and real-time systems for composition and performance. The lab has had support from the SONY Corporation . [ 1 ] The facility has utilised an anechoic (echo-less) chamber for research. [ 2 ] The dedicated Princeton lab was created following separation of joint research activities with Columbia University in the 1980s. [ 3 ]
https://en.wikipedia.org/wiki/Princeton_Sound_Lab
A principal in computer security is an entity that can be authenticated by a computer system or network . It is referred to as a security principal in Java and Microsoft literature. [ 1 ] Principals can be individual people, computers, services, computational entities such as processes and threads, or any group of such things. [ 1 ] They need to be identified and authenticated before they can be assigned rights and privileges over resources in the network. A principal typically has an associated identifier (such as a security identifier ) that allows it to be referenced for identification or assignment of properties and permissions. A principal often becomes synonymous with the credentials used to act as that principal, such as a password or (for service principals) an access token or other secrets. [ 2 ]
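The relationship between a principal, the identifier that references it, and the rights assigned to it can be sketched in a few lines. The example below is purely illustrative; the class, the SID strings, and the rights are hypothetical and are not tied to Windows, Java, or any other real security API.

```python
from dataclasses import dataclass, field

@dataclass
class Principal:
    """Any authenticatable entity: a person, computer, service, or group."""
    name: str
    sid: str                              # identifier used to reference the principal
    rights: set = field(default_factory=set)

def authorize(principal: Principal, required_right: str) -> bool:
    # Rights are attached to the principal's identifier, not to whatever
    # credential (password, token) was used to authenticate it.
    return required_right in principal.rights

alice = Principal("alice", sid="S-1-5-21-0000-1004", rights={"read", "write"})
backup_svc = Principal("backup-service", sid="S-1-5-80-0001", rights={"read"})

print(authorize(alice, "write"))        # True
print(authorize(backup_svc, "write"))   # False
```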
https://en.wikipedia.org/wiki/Principal_(computer_security)
In geometry and linear algebra , a principal axis is a certain line in a Euclidean space associated with an ellipsoid or hyperboloid , generalizing the major and minor axes of an ellipse or hyperbola . The principal axis theorem states that the principal axes are perpendicular , and gives a constructive procedure for finding them. Mathematically, the principal axis theorem is a generalization of the method of completing the square from elementary algebra . In linear algebra and functional analysis , the principal axis theorem is a geometrical counterpart of the spectral theorem . It has applications to the statistics of principal components analysis and the singular value decomposition . In physics , the theorem is fundamental to the studies of angular momentum and birefringence . The equations in the Cartesian plane $\mathbb{R}^2$
$$\frac{x^2}{9} + \frac{y^2}{25} = 1, \qquad \frac{x^2}{9} - \frac{y^2}{25} = 1$$
define, respectively, an ellipse and a hyperbola. In each case, the x and y axes are the principal axes. This is easily seen, given that there are no cross-terms involving products xy in either expression. However, the situation is more complicated for equations like
$$5x^2 + 8xy + 5y^2 = 1.$$
Here some method is required to determine whether this is an ellipse or a hyperbola . The basic observation is that if, by completing the square , the quadratic expression can be reduced to a sum of two squares then the equation defines an ellipse, whereas if it reduces to a difference of two squares then the equation represents a hyperbola:
$$u(x,y)^2 + v(x,y)^2 = 1 \quad \text{(ellipse)}, \qquad u(x,y)^2 - v(x,y)^2 = 1 \quad \text{(hyperbola)}.$$
Thus, in our example expression, the problem is how to absorb the coefficient of the cross-term 8xy into the functions u and v. Formally, this problem is similar to the problem of matrix diagonalization , where one tries to find a suitable coordinate system in which the matrix of a linear transformation is diagonal . The first step is to find a matrix in which the technique of diagonalization can be applied. The trick is to write the quadratic form as
$$5x^2 + 8xy + 5y^2 = \begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 5 & 4 \\ 4 & 5 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \mathbf{x}^\mathsf{T} \mathbf{A} \mathbf{x},$$
where the cross-term has been split into two equal parts. The matrix A in the above decomposition is a symmetric matrix . In particular, by the spectral theorem , it has real eigenvalues and is diagonalizable by an orthogonal matrix ( orthogonally diagonalizable ). To orthogonally diagonalize A , one must first find its eigenvalues, and then find an orthonormal eigenbasis . Calculation reveals that the eigenvalues of A are
$$\lambda_1 = 1, \qquad \lambda_2 = 9,$$
with corresponding eigenvectors
$$\mathbf{v}_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \qquad \mathbf{v}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
Dividing these by their respective lengths yields an orthonormal eigenbasis:
$$\mathbf{u}_1 = \begin{bmatrix} \tfrac{1}{\sqrt{2}} \\ -\tfrac{1}{\sqrt{2}} \end{bmatrix}, \qquad \mathbf{u}_2 = \begin{bmatrix} \tfrac{1}{\sqrt{2}} \\ \tfrac{1}{\sqrt{2}} \end{bmatrix}.$$
Now the matrix $S = [\mathbf{u}_1\ \mathbf{u}_2]$ is an orthogonal matrix, since it has orthonormal columns, and A is diagonalized by
$$\mathbf{A} = \mathbf{S}\mathbf{D}\mathbf{S}^{-1} = \mathbf{S}\mathbf{D}\mathbf{S}^\mathsf{T} = \begin{bmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\ -\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 9 \end{bmatrix} \begin{bmatrix} \tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} \\ \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \end{bmatrix}.$$
This applies to the present problem of "diagonalizing" the quadratic form through the observation that
$$5x^2 + 8xy + 5y^2 = \mathbf{x}^\mathsf{T}\mathbf{A}\mathbf{x} = \mathbf{x}^\mathsf{T}\left(\mathbf{S}\mathbf{D}\mathbf{S}^\mathsf{T}\right)\mathbf{x} = \left(\mathbf{S}^\mathsf{T}\mathbf{x}\right)^\mathsf{T}\mathbf{D}\left(\mathbf{S}^\mathsf{T}\mathbf{x}\right) = 1\left(\frac{x-y}{\sqrt{2}}\right)^2 + 9\left(\frac{x+y}{\sqrt{2}}\right)^2.$$
Thus, the equation $5x^2 + 8xy + 5y^2 = 1$ is that of an ellipse, since the left side can be written as the sum of two squares. It is tempting to simplify this expression by pulling out factors of 2. However, it is important not to do this. The quantities
$$c_1 = \frac{x-y}{\sqrt{2}}, \qquad c_2 = \frac{x+y}{\sqrt{2}}$$
have a geometrical meaning. They determine an orthonormal coordinate system on $\mathbb{R}^2$. In other words, they are obtained from the original coordinates by the application of a rotation (and possibly a reflection). Consequently, one may use the $c_1$ and $c_2$ coordinates to make statements about length and angles (particularly length), which would otherwise be more difficult in a different choice of coordinates (by rescaling them, for instance). For example, the maximum distance from the origin on the ellipse $c_1^2 + 9c_2^2 = 1$ occurs when $c_2 = 0$, so at the points $c_1 = \pm 1$. Similarly, the minimum distance is where $c_2 = \pm 1/3$. It is possible now to read off the major and minor axes of this ellipse. These are precisely the individual eigenspaces of the matrix A, since these are where $c_2 = 0$ or $c_1 = 0$. Symbolically, the principal axes are
$$E_1 = \operatorname{span}\left(\begin{bmatrix} \tfrac{1}{\sqrt{2}} \\ -\tfrac{1}{\sqrt{2}} \end{bmatrix}\right), \qquad E_2 = \operatorname{span}\left(\begin{bmatrix} \tfrac{1}{\sqrt{2}} \\ \tfrac{1}{\sqrt{2}} \end{bmatrix}\right).$$
To summarize: Using this information, it is possible to attain a clear geometrical picture of the ellipse: to graph it, for instance. The principal axis theorem concerns quadratic forms in $\mathbb{R}^n$, which are homogeneous polynomials of degree 2. Any quadratic form may be represented as
$$Q(\mathbf{x}) = \mathbf{x}^\mathsf{T}\mathbf{A}\mathbf{x},$$
where A is a symmetric matrix. The first part of the theorem is contained in the following statements guaranteed by the spectral theorem: In particular, A is orthogonally diagonalizable , since one may take a basis of each eigenspace and apply the Gram–Schmidt process separately within the eigenspace to obtain an orthonormal eigenbasis. For the second part, suppose that the eigenvalues of A are $\lambda_1, \ldots, \lambda_n$ (possibly repeated according to their algebraic multiplicities ) and the corresponding orthonormal eigenbasis is $\mathbf{u}_1, \ldots, \mathbf{u}_n$. Then
$$\mathbf{c} = [\mathbf{u}_1, \ldots, \mathbf{u}_n]^\mathsf{T} \mathbf{x},$$
and
$$Q(\mathbf{x}) = \lambda_1 c_1^2 + \lambda_2 c_2^2 + \cdots + \lambda_n c_n^2,$$
where $c_i$ is the i-th entry of c.
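The worked example above is easy to verify numerically. The following sketch (using NumPy, as one possible tool) diagonalizes the matrix of the quadratic form 5x² + 8xy + 5y², recovering the eigenvalues 1 and 9 and the principal axes along (1, −1) and (1, 1), and checks the identity Q(x) = λ₁c₁² + λ₂c₂².

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])          # matrix of the quadratic form 5x^2 + 8xy + 5y^2

eigenvalues, eigenvectors = np.linalg.eigh(A)   # symmetric matrix -> orthonormal eigenvectors
print(eigenvalues)                  # [1. 9.]
print(eigenvectors)                 # columns span the principal axes

# Verify the diagonalization A = S D S^T used in the text.
S, D = eigenvectors, np.diag(eigenvalues)
print(np.allclose(A, S @ D @ S.T))  # True

# Evaluate the form in the rotated coordinates c = S^T x for a sample point.
x = np.array([0.3, -0.2])
c = S.T @ x
print(np.isclose(x @ A @ x, eigenvalues @ c**2))  # True: Q(x) = sum(lambda_i * c_i^2)
```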
https://en.wikipedia.org/wiki/Principal_axis_theorem
In mathematics and, more specifically, in the theory of equations , the principal form of an irreducible polynomial of degree at least three is a polynomial of the same degree n without terms of degrees n−1 and n−2, such that each root of either polynomial is a rational function of a root of the other polynomial. The principal form of a polynomial can be found by applying a suitable Tschirnhaus transformation to the given polynomial. Let f be an irreducible polynomial of degree at least three. Its principal form is a polynomial g together with a Tschirnhaus transformation $\phi$ of degree two such that, if r is a root of f, $\phi(r)$ is a root of g. [ 1 ] [ 2 ] Expressing that g does not have terms in $y^{n-1}$ and $y^{n-2}$ leads to a system of two equations in $\alpha$ and $\beta$, one of degree one and one of degree two. In general, this system has two solutions, giving two principal forms involving a square root. One passes from one principal form to the second by changing the sign of the square root. [ 3 ] [ 4 ] A Tschirnhaus transformation always transforms one polynomial into another polynomial of the same degree but in a different unknown variable. The mathematical relation of the new variable to the old variable is here called the Tschirnhaus key. This key is a polynomial whose coefficients have to satisfy special criteria. To fulfill these criteria, a separate system of equations in several unknowns has to be solved. The individual equations of that system are collected in the following sections: This is the given cubic equation: The following quadratic equation system shall be solved: This yields the following Tschirnhaus transformation: The solutions of this system, that is, the expressions of u, v and w in terms of a, b and c, can be found by the substitution method. For instance, the first of the three equations can be solved for the unknown v, and the result can be inserted into the second equation, so that a quadratic equation in the unknown u appears. In this way, only one of the three unknowns remains to be found, and it can be solved for directly. Once the first unknown is found, the remaining unknowns can be obtained by back-substitution. Once all these coefficients are determined, the Tschirnhaus key and the new polynomial resulting from the transformation can be constructed. In this way the Tschirnhaus transformation [ 5 ] is carried out. The quadratic radical components [ 6 ] of the coefficients are identical to the square-root terms appearing in the Cardano formula, and therefore the cubic Tschirnhaus transformation can even be used to derive the general Cardano formula itself.
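The quadratic Tschirnhaus step described above can be carried out mechanically with a computer algebra system. The sketch below is an illustrative computation rather than the article's own derivation: for an arbitrarily chosen cubic it forms the key y = x² + αx + β, eliminates x with a resultant, and solves for α and β so that the y² and y coefficients vanish, which is exactly the condition defining the principal form.

```python
import sympy as sp

x, y, al, be = sp.symbols('x y alpha beta')
f = x**3 + x**2 - 2*x + 3                 # an arbitrary example cubic
key = y - (x**2 + al*x + be)              # quadratic Tschirnhaus key: y = x^2 + alpha*x + beta

# Eliminating x with a resultant yields a cubic in y whose roots are the images of the roots of f.
g = sp.resultant(f, key, x)

# Require the y^2 and y coefficients to vanish and solve for alpha, beta.
gp = sp.Poly(g, y)
eqs = [gp.coeff_monomial(y**2), gp.coeff_monomial(y)]
sols = sp.solve(eqs, [al, be], dict=True)
print(len(sols))                          # generally two solutions, differing by the sign of a square root

coeffs = [sp.simplify(cf) for cf in sp.Poly(g.subs(sols[0]), y).all_coeffs()]
print(coeffs)                             # the y^2 and y coefficients reduce to 0: a principal form
```

The two solutions returned by the solver differ by the sign of a square root, matching the two principal forms mentioned above.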
Plastic constant: Supergolden constant: Tribonacci constant: The direct solving of the mentioned system of three clues leads to the Cardano formula for the mentioned case: This is the given quartic equation: Now this quadratic equation system shall be solved: And so accurately that Tschirnhaus transformation appears: The Tschirnhaus transformation of the equation for the Tetranacci constant contains only rational coefficients: In this way following expression can be made about the Tetranacci constant: That calculation example however does contain the element of the square root in the Tschirnhaus transformation: In the following we solve a special equation pattern that is easily solvable by using elliptic functions: These are important additional informations about the elliptic nome and the mentioned Jacobi theta function: Computation rule for the mentioned theta quotient: Accurately the Jacobi theta function is used for solving that equation. Now we create a Tschirnhaus transformation on that: Given principal quartic equation: If this equation pattern is given, the modulus tangent duplication value S can be determined in this way: The solution of the now mentioned formula always is in pure biquadratic radical relation to psi and omega and therefore it is a useful tool to solve principal quartic equations. And this can be solved in that way: Now this solving pattern shall be used for solving some principal quartic equations: First calculation example: Second calculation example: Third calculation example: This is the given quintic equation: That quadratic equation system leads to the coefficients of the quadratic Tschirnhaus key: By polynomial division that Tschirnhaus transformation can be made: This is the first example: And this is the second example: The mathematicians Victor Adamchik and David Jeffrey found out how to solve every principal quintic equation. In their essay [ 7 ] Polynomial Transformations of Tschirnhaus, Bring and Jerrard they wrote this way down. These two mathematicians solved this principal form by transforming it into the Bring Jerrard [ 8 ] form. Their method contains the construction of a quartic Tschirnhaus transformation key. Also in this case that key is a polynome in relation to the unknown variable of the given principal equation y that results in the unknown variable z of the transformed Bring Jerrard final equation. For the construction of the mentioned Tschirnhaus transformation key they executed a disjunction of the linear term key coefficient in order to get a system that solves all other terms in a quadratic radical way and to only solve a further cubic equation [ 9 ] to get the coefficient of the linear term key coefficient. In their essay they constructed the quartic Tschirnhaus key in this way: In order to do the transformation the mathematicians Adamchik and Jeffrey constructed a special equation system that generates the coefficients of the cubic, quadratic and absolute term coefficients of the Tschirnhaus key. Along with their essay of polynomial transformations, these coefficients can be found out by combining the expressions of the quartic and cubic term of the final Bring Jerrard form that are equal to zero because in this way the Bring Jerrard equation form is defined. By combining these expressions of the zero valued quartic and cubic term of the Bring Jerrard final form, an equation system for the unknown Tschirnhaus key coefficients can be constructed. 
And this resulting equation system can be simplified by combining the equation clues in the essay into each other. In this way the following simplified equation system of two unknown key coefficients can be set up: On the basis of the essay by Adamchik and Jeffrey, the just mentioned equation system of two unknowns results from setting the zero valued quartic coefficient of the Bring Jerrard final form into the zero valued cubic coefficient and eliminating all terms of the linear key coefficients and absolute key coefficients. In other words, eliminating all gamma and delta terms. In this way you get the red colored cubic term coefficient and the green colored quadratic term coefficient of the Tschirnhaus key. The mentioned zero valued quartic coefficient of the Bring Jerrard final form is accurately this one here: Solving the zero valued quartic coefficient of the Bring Jerrard final form leads directly to the blue colored absolute term coefficient of the Tschirnhaus key. And for receiving the orange colored linear term coefficient of the Tschirnhaus key, the zero valued quadratic coefficient of the Bring Jerrard final form must be solved after the mentioned linear term coefficient of the Tschirnhaus key. And accurate that is done by solving this cubic equation: The solution of that system then has to be entered in the already mentioned key to get the mentioned final form: The coefficients Lambda and My ofthe Bring Jerrard final form can be found out by doing a polynomial division of z^5 divided by the initial principal polynome and reading the resulting remainder rest. So a Bring Jerrard equation appears that contains only the quintic, the linear and the absolute term. Along with the Abel Ruffini theorem the following equations are examples that can not be solved by elementary expressions, but can be reduced [ 10 ] to the Bring Jerrard form by only using cubic radical elements. This shall be demonstrated here. To do this on the given principal quintics, we solve the equations for the coefficients of the cubic, quadratic and absolute term of the quartic Tschirnhaus key after the shown pattern. So this Tschirnhaus key can be determinded. By doing a polynomial division on the fifth power of the quartic Tschirnhaus transformation key and analyzing the remainder rest the coefficients of the mold can be determined too. And so the solutions of following given principal quintic equations can be computed: This is a further example for that algorithm: That Bring Jerrard equation can be solved by an elliptic Jacobi theta quotient that contains the fifth powers and the fifth roots of the corresponding elliptic nome in the theta function terms. For doing this, following elliptic modulus or numeric eccentricity and their Pythagorean counterparts and corresponding elliptic nome should be used in relation to Lambda and My after the essay Sulla risoluzione delle equazioni del quinto grado from Charles Hermite and Francesco Brioschi and the recipe on page 258 accurately: These are the elliptic moduli and thus the numeric eccentricities: With the abbreviations ctlh abd tlh the Hyperbolic Lemniscatic functions are represented. The abbreviation aclh is the Hyperbolic Lemniscate Areacosine accurately.
https://en.wikipedia.org/wiki/Principal_form_of_a_polynomial
In geometric data analysis and statistical shape analysis , principal geodesic analysis is a generalization of principal component analysis to a non-Euclidean , non-linear setting of manifolds suitable for use with shape descriptors such as medial representations .
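As a rough illustration of the idea, the sketch below performs the tangent-space approximation of principal geodesic analysis on the unit sphere: data are mapped into the tangent space at an intrinsic (Fréchet) mean with the logarithm map, and ordinary principal component analysis is run there. The helper functions and the synthetic data are written for this example only and simplify the full procedure.

```python
import numpy as np

def exp_map(p, v):
    """Exponential map on the unit sphere: move from p along tangent vector v."""
    n = np.linalg.norm(v)
    return p if n < 1e-12 else np.cos(n) * p + np.sin(n) * v / n

def log_map(p, q):
    """Log map on the unit sphere: tangent vector at p pointing toward q."""
    w = q - np.dot(p, q) * p
    n = np.linalg.norm(w)
    return np.zeros_like(p) if n < 1e-12 else np.arccos(np.clip(np.dot(p, q), -1, 1)) * w / n

def frechet_mean(points, iters=50):
    mu = points[0]
    for _ in range(iters):
        mu = exp_map(mu, np.mean([log_map(mu, x) for x in points], axis=0))
    return mu

# Synthetic data clustered near the north pole.
rng = np.random.default_rng(0)
pts = rng.normal([0, 0, 1], 0.15, size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

mu = frechet_mean(pts)
tangent = np.array([log_map(mu, x) for x in pts])            # data in the tangent space at mu
_, s, vt = np.linalg.svd(tangent - tangent.mean(axis=0), full_matrices=False)
print("first principal geodesic direction:", vt[0])
print("variance explained by first two components:", (s**2 / np.sum(s**2))[:2])
```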
https://en.wikipedia.org/wiki/Principal_geodesic_analysis
Principal interacting orbital ( PIO ), based on quantum chemical calculations, provides chemists with visualization of a set of semi-localized dominant interacting orbitals. [ 1 ] The method offers additional perspective to molecular orbitals (MO) obtained from quantum chemical calculations ( DFT for instance), which often provide extensively delocalized orbitals that are hard to interpret and relate with chemists' intuition on electronic structures and orbital interactions. Several other efforts have been made to help visualize semi-localized dominant interacting orbitals that represents well chemists' intuition, while maintaining the mathematical rigorosity. Notable examples include the natural atomic orbitals (NAO), natural bond orbitals (NBO), [ 2 ] charge decomposition analysis (CDA), [ 3 ] and adaptive natural density partitioning (AdNDP). [ 4 ] PIO analysis uniquely provides semi-localized MOs that are chemically accurate (i.e., not always produces 2-center-2-electron localized orbitals, continuous evolution of PIOs along potential energy surface, etc.) and easy to interpret. A typical workflow is summarized here. For details, please refer to the reference [ 1 ] or consult the website. [ 5 ] The PIO analysis is based on the statistical method principal component analysis (PCA). [ 1 ] [ 6 ] The Diels-Alder reaction of hexadeca-1,3,5,7,9,11,13,15-octaene and ethylene can be thought of as a [4+2] reaction between a substituted diene and a dienophile. The frontier molecular orbitals produced by a typical structural optimization are as follows: the HOMO and LUMO of the dienophile "ethylene" are two-centered, while the HOMO and the LUMO of the substituted diene "hexadeca-1,3,5,7,9,11,13,15-octaene" are delocalized over the entire molecule. This is different from chemists' traditional depiction of the Diels-Alder reaction: the HOMO (two-centered) of the dienophile interacts with the LUMO of the diene (four-centered), and the LUMO (two-centered) of the dienophile interacts with the HOMO of the diene (four-centered). The computed delocalized HOMO and LUMO in hexadeca-1,3,5,7,9,11,13,15-octaene makes it hard for chemists to make useful interpretations. On the other hand, the dominant PIOs from PIO analysis resemble the HOMO/LUMO (four-centered) of an unsubstituted butadiene. This highlights an advantage of PIO calculation—it localizes the orbitals to the reactive part and preserves the multi-centered feature. Another feature of PIO calculation that must be highlighted is that the first two principal orbital interactions—which resembles the interaction of the HOMO of the diene and the LUMO of the dienophile, and the interaction of the LUMO of the diene and the HOMO of the dienophile—sums to over 95% of the total orbital interaction between the two fragments. PIO analysis with intrinsic reaction coordinate (IRC) calculation gives continuous results. The continuality extends to the evolution of the shape of the PIOs and their percentage of contribution to the overall orbital interaction. This is another advantage of PIO analysis over other methods to obtain localized electronic structures such as NBO and AdNDP. The other methods require predefined parameters and often lead to ambiguous chemical structures and unphysical discontinuity. For instance, when the Diels-Alder reaction is analyzed with IRC and NBO, (1) the orbitals on the diene are described as two-center-two-electron bonds, and (2) the result is not continuous—three pi bonds would suddenly switch to three newly formed bonds. 
Further, PIO tracing of the reaction coordinate can reveal other properties such as the electronic demand of a Diels-Alder reaction. For a normal demand DA reaction (EDG on diene and EWG on dienophile), PIO analysis shows that the reaction is dominated by the contribution from the HOMO of the diene and the LUMO of the dienophile. For a reverse demand DA (EWG on diene and EDG on dienophile), PIO analysis shows that the reaction is dominated by the contribution from the LUMO of the diene and the HOMO of the dienophile. On the other hand, for a neutral demand DA, contributions from the diene-HOMO/dienophile-LUMO and diene-LUMO/dienophile-HOMO pairs are roughly equal. PIO can also be used to describe transition metal compounds, which are often more complicated to analyze than main group compounds due to more possible bonding patterns. [ 6 ] A classic example is Zeise's salt , which is usually described with the Dewar-Chatt-Duncanson (DCD) model. [ 7 ] [ 8 ] C 2 H 4 donates its pi electrons to the empty orbital of Pt, while its π* orbital accepts electrons from Pt. The semilocalized bonding cannot be adequately described with methods such as NBO (localized two-center-two-electron) and CMO (delocalized over the entire molecule). On the other hand, PIO analysis produces a model that is in best agreement with chemical intuition. The top two PIOs sum to over 90% of the overall orbital contribution. The first PIO pair is between the d z2 orbital of the metal and the pi orbital of ethylene. The second PIO pair is between the d xz orbital of the metal and the π* orbital of ethylene. PIO analysis of [Re 2 Cl 8 ] 2- reveals four primary orbital interactions, which correspond to the quadruple bond (one σ, two π, and one δ). [ 9 ] [ 10 ]
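The core numerical step, distilling a large fragment-fragment interaction block into a few dominant orbital pairs, can be illustrated with a small linear-algebra sketch. The block below uses a random matrix as a stand-in and is only a schematic analogy to the actual PIO program: the principal interacting pairs are taken from the singular value decomposition of the coupling block between the two fragments, and the normalized squared singular values play the role of the percentage contributions quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the coupling block between the orbitals of fragment A (8 of them)
# and fragment B (6 of them); in a real PIO analysis this block would come from
# the density/bond-order matrix of a quantum-chemical calculation.
coupling = rng.normal(size=(8, 6))

U, s, Vt = np.linalg.svd(coupling, full_matrices=False)

# Each singular triplet defines one principal interacting orbital pair:
# a combination of fragment-A orbitals (column of U) paired with a combination
# of fragment-B orbitals (row of Vt), weighted by its singular value.
contrib = s**2 / np.sum(s**2)
for i, c in enumerate(contrib[:3]):
    print(f"PIO pair {i + 1}: {c:.1%} of the total interaction")
```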
https://en.wikipedia.org/wiki/Principal_interacting_orbital
In quantum mechanics , the principal quantum number ( n ) of an electron in an atom indicates which electron shell or energy level it is in. Its values are natural numbers (1, 2, 3, ...). Hydrogen and Helium , at their lowest energies, have just one electron shell. Lithium through Neon (see periodic table ) have two shells: two electrons in the first shell, and up to 8 in the second shell. Larger atoms have more shells. The principal quantum number is one of four quantum numbers assigned to each electron in an atom to describe the quantum state of the electron. The other quantum numbers for bound electrons are the total angular momentum of the orbit ℓ , the angular momentum in the z direction ℓ z , and the spin of the electron s . As n increases, the electron is also at a higher energy and is, therefore, less tightly bound to the nucleus. For higher n , the electron is farther from the nucleus, on average . For each value of n , there are n accepted ℓ (azimuthal) values ranging from 0 to n − 1 {\displaystyle n-1} inclusively; hence, higher- n electron states are more numerous. [ citation needed ] Accounting for two states of spin, each n - shell can accommodate up to 2 n 2 electrons. In a simplistic one-electron model described below, the total energy of an electron is a negative inverse quadratic function of the principal quantum number n , leading to degenerate energy levels for each n > 1. [ 1 ] In more complex systems—those having forces other than the nucleus–electron Coulomb force —these levels split . For multielectron atoms this splitting results in "subshells" parametrized by ℓ . Description of energy levels based on n alone gradually becomes inadequate for atomic numbers starting from 5 ( boron ) and fails completely on potassium ( Z = 19) and afterwards. The principal quantum number was first created for use in the semiclassical Bohr model of the atom , distinguishing between different energy levels. With the development of modern quantum mechanics, the simple Bohr model was replaced with a more complex theory of atomic orbitals . However, the modern theory still requires the principal quantum number. There is a set of quantum numbers associated with the energy states of the atom. The four quantum numbers n , ℓ , m , and s specify the complete and unique quantum state of a single electron in an atom, called its wave function or orbital . Two electrons belonging to the same atom cannot have the same values for all four quantum numbers, due to the Pauli exclusion principle . The Schrödinger wave equation reduces to the three equations that when solved lead to the first three quantum numbers. Therefore, the equations for the first three quantum numbers are all interrelated. The principal quantum number arose in the solution of the radial part of the wave equation as shown below. The Schrödinger wave equation describes energy eigenstates with corresponding real numbers E n and a definite total energy, the value of E n . The bound state energies of the electron in the hydrogen atom are given by: E n = E 1 n 2 = − 13.6 eV n 2 , n = 1 , 2 , 3 , … {\displaystyle E_{n}={\frac {E_{1}}{n^{2}}}={\frac {-13.6{\text{ eV}}}{n^{2}}},\quad n=1,2,3,\ldots } The parameter n can take only positive integer values. The concept of energy levels and notation were taken from the earlier Bohr model of the atom . Schrödinger's equation developed the idea from a flat two-dimensional Bohr atom to the three-dimensional wavefunction model. 
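The two quantitative statements in this section, the 2n² shell capacity and the 1/n² dependence of the energy, are easy to tabulate. The snippet below is a simple illustration using the −13.6 eV hydrogen ground-state value quoted above.

```python
E1 = -13.6  # eV, hydrogen ground-state energy used in the formula above

for n in range(1, 6):
    energy = E1 / n**2             # E_n = E_1 / n^2
    capacity = 2 * n**2            # electrons that fit in shell n
    l_values = list(range(n))      # allowed azimuthal quantum numbers 0 .. n-1
    print(f"n={n}: E_n = {energy:7.3f} eV, shell capacity = {capacity:3d}, l = {l_values}")
```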
In the Bohr model, the allowed orbits were derived from quantized (discrete) values of orbital angular momentum , L according to the equation L = n ⋅ ℏ = n ⋅ h 2 π {\displaystyle L=n\cdot \hbar =n\cdot {h \over 2\pi }} where n = 1, 2, 3, ... and is called the principal quantum number, and h is the Planck constant . This formula is not correct in quantum mechanics as the angular momentum magnitude is described by the azimuthal quantum number , but the energy levels are accurate and classically they correspond to the sum of potential and kinetic energy of the electron. The principal quantum number n represents the relative overall energy of each orbital. The energy level of each orbital increases as its distance from the nucleus increases. The sets of orbitals with the same n value are often referred to as an electron shell. The minimum energy exchanged during any wave–matter interaction is the product of the wave frequency multiplied by the Planck constant . This causes the wave to display particle-like packets of energy called quanta . The difference between energy levels that have different n determine the emission spectrum of the element. In the notation of the periodic table, the main shells of electrons are labeled: based on the principal quantum number. The principal quantum number is related to the radial quantum number, n r , by: n = n r + ℓ + 1 {\displaystyle n=n_{r}+\ell +1} where ℓ is the azimuthal quantum number and n r is equal to the number of nodes in the radial wavefunction. The definite total energy for a particle motion in a common Coulomb field and with a discrete spectrum , is given by: E n = − Z 2 ℏ 2 2 m 0 a 0 2 n 2 = − Z 2 e 4 m 0 2 ℏ 2 n 2 , {\displaystyle E_{n}=-{\frac {Z^{2}\hbar ^{2}}{2m_{0}a_{0}^{2}n^{2}}}=-{\frac {Z^{2}e^{4}m_{0}}{2\hbar ^{2}n^{2}}},} where a 0 {\displaystyle a_{0}} is the Bohr radius . This discrete energy spectrum resulted from the solution of the quantum mechanical problem on the electron motion in the Coulomb field, coincides with the spectrum that was obtained with the help application of the Bohr–Sommerfeld quantization rules to the classical equations. The radial quantum number determines the number of nodes of the radial wave function R ( r ). [ 2 ] In chemistry , values n = 1, 2, 3, 4, 5, 6, 7 are used in relation to the electron shell theory, with expected inclusion of n = 8 (and possibly 9) for yet-undiscovered period 8 elements . In atomic physics , higher n sometimes occur for description of excited states . Observations of the interstellar medium reveal atomic hydrogen spectral lines involving n on order of hundreds; values up to 766 [ 3 ] were detected.
https://en.wikipedia.org/wiki/Principal_quantum_number
In mathematics, a principal n-th root of unity (where n is a positive integer) of a ring is an element $\alpha$ satisfying the equations
$$\alpha^n = 1 \qquad \text{and} \qquad \sum_{j=0}^{n-1} \alpha^{jk} = 0 \quad \text{for } 1 \le k < n.$$
In an integral domain , every primitive n-th root of unity is also a principal n-th root of unity. In any ring, if n is a power of 2 , then any n/2-th root of −1 is a principal n-th root of unity. A non-example is 3 in the ring of integers modulo 26; while $3^3 \equiv 1 \pmod{26}$ and thus 3 is a cube root of unity , $1 + 3 + 3^2 \equiv 13 \pmod{26}$, meaning that it is not a principal cube root of unity. The significance of a root of unity being principal is that it is a necessary condition for the theory of the discrete Fourier transform to work out correctly.
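The defining conditions can be checked directly in small rings. The sketch below tests a candidate in the integers modulo m and reproduces the counterexample above: 3 is a cube root of unity modulo 26 but fails the sum condition, while 4 is a principal 4th root of unity modulo 17 (the kind of element used in a number-theoretic transform).

```python
def is_principal_root(alpha, n, m):
    """Check whether alpha is a principal n-th root of unity in Z/mZ:
    alpha**n == 1 and sum_{j=0}^{n-1} alpha**(j*k) == 0 for every 1 <= k < n."""
    if pow(alpha, n, m) != 1:
        return False
    return all(sum(pow(alpha, j * k, m) for j in range(n)) % m == 0
               for k in range(1, n))

print(is_principal_root(3, 3, 26))   # False: 1 + 3 + 9 = 13, not 0 mod 26
print(is_principal_root(4, 4, 17))   # True: 4 is a principal 4th root of unity mod 17
```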
https://en.wikipedia.org/wiki/Principal_root_of_unity
In mathematics, a principal subalgebra of a complex simple Lie algebra is a 3-dimensional simple subalgebra whose non-zero elements are regular . A finite-dimensional complex simple Lie algebra has a unique conjugacy class of principal subalgebras, each of which is the span of an sl 2 -triple .
https://en.wikipedia.org/wiki/Principal_subalgebra
Principle , in chemistry, refers to a historical concept of the constituents of a substance, specifically those that produce a certain quality or effect in the substance, such as a bitter principle , which is any one of the numerous compounds having a bitter taste. The idea of chemical principles developed out of the classical elements . Paracelsus identified the tria prima as principles in his approach to medicine . In his book The Sceptical Chymist of 1661, Robert Boyle criticized the traditional understanding of the composition of materials and initiated the modern understanding of chemical elements . Nevertheless, the concept of chemical principles continued to be used. Georg Ernst Stahl published Philosophical Principles of Universal Chemistry in 1730 as an early effort to distinguish between mixtures and compounds . He writes, "the simple are Principles , or the first material causes of Mixts ;..." [ 1 ] : 3 To define a Principle, he wrote Stahl recounts theories of chemical principles according to Helmont and J. J. Becher . He says Helmont took Water to be the "first and only material Principle of all things." According to Becher, Water and Earth are principles, where Earth is distinguished into three kinds. [ 1 ] : 5 Stahl also ascribes to Earth the "principle of rest and aggregation ." [ 1 ] : 65 Historians have described how early analysts used Principles to classify substances: Guillaume-François Rouelle "attributed two functions to principles: that of forming mixts and that of being an agent or instrument of chemical principles." [ 2 ] : 61
https://en.wikipedia.org/wiki/Principle_(chemistry)
In mathematical statistics , the Kullback–Leibler ( KL ) divergence (also called relative entropy and I-divergence [ 1 ] ), denoted D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} , is a type of statistical distance : a measure of how much a model probability distribution Q is different from a true probability distribution P . [ 2 ] [ 3 ] Mathematically, it is defined as D KL ( P ∥ Q ) = ∑ x ∈ X P ( x ) log ⁡ P ( x ) Q ( x ) . {\displaystyle D_{\text{KL}}(P\parallel Q)=\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {P(x)}{Q(x)}}.} A simple interpretation of the KL divergence of P from Q is the expected excess surprise from using Q as a model instead of P when the actual distribution is P . While it is a measure of how different two distributions are and is thus a distance in some sense, it is not actually a metric , which is the most familiar and formal type of distance. In particular, it is not symmetric in the two distributions (in contrast to variation of information ), and does not satisfy the triangle inequality . Instead, in terms of information geometry , it is a type of divergence , [ 4 ] a generalization of squared distance , and for certain classes of distributions (notably an exponential family ), it satisfies a generalized Pythagorean theorem (which applies to squared distances). [ 5 ] Relative entropy is always a non-negative real number , with value 0 if and only if the two distributions in question are identical. It has diverse applications, both theoretical, such as characterizing the relative (Shannon) entropy in information systems, randomness in continuous time-series , and information gain when comparing statistical models of inference ; and practical, such as applied statistics, fluid mechanics , neuroscience , bioinformatics , and machine learning . Consider two probability distributions P and Q . Usually, P represents the data, the observations, or a measured probability distribution. Distribution Q represents instead a theory, a model, a description or an approximation of P . The Kullback–Leibler divergence D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} is then interpreted as the average difference of the number of bits required for encoding samples of P using a code optimized for Q rather than one optimized for P . Note that the roles of P and Q can be reversed in some situations where that is easier to compute, such as with the expectation–maximization algorithm (EM) and evidence lower bound (ELBO) computations. The relative entropy was introduced by Solomon Kullback and Richard Leibler in Kullback & Leibler (1951) as "the mean information for discrimination between H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} per observation from μ 1 {\displaystyle \mu _{1}} ", [ 6 ] where one is comparing two probability measures μ 1 , μ 2 {\displaystyle \mu _{1},\mu _{2}} , and H 1 , H 2 {\displaystyle H_{1},H_{2}} are the hypotheses that one is selecting from measure μ 1 , μ 2 {\displaystyle \mu _{1},\mu _{2}} (respectively). They denoted this by I ( 1 : 2 ) {\displaystyle I(1:2)} , and defined the "'divergence' between μ 1 {\displaystyle \mu _{1}} and μ 2 {\displaystyle \mu _{2}} " as the symmetrized quantity J ( 1 , 2 ) = I ( 1 : 2 ) + I ( 2 : 1 ) {\displaystyle J(1,2)=I(1:2)+I(2:1)} , which had already been defined and used by Harold Jeffreys in 1948. 
[ 7 ] In Kullback (1959) , the symmetrized form is again referred to as the "divergence", and the relative entropies in each direction are referred to as a "directed divergences" between two distributions; [ 8 ] Kullback preferred the term discrimination information . [ 9 ] The term "divergence" is in contrast to a distance (metric), since the symmetrized divergence does not satisfy the triangle inequality. [ 10 ] Numerous references to earlier uses of the symmetrized divergence and to other statistical distances are given in Kullback (1959 , pp. 6–7, §1.3 Divergence). The asymmetric "directed divergence" has come to be known as the Kullback–Leibler divergence, while the symmetrized "divergence" is now referred to as the Jeffreys divergence . For discrete probability distributions P and Q defined on the same sample space , X {\displaystyle {\mathcal {X}}} , the relative entropy from Q to P is defined [ 11 ] to be D KL ( P ∥ Q ) = ∑ x ∈ X P ( x ) log ⁡ P ( x ) Q ( x ) , {\displaystyle D_{\text{KL}}(P\parallel Q)=\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {P(x)}{Q(x)}}\,,} which is equivalent to D KL ( P ∥ Q ) = − ∑ x ∈ X P ( x ) log ⁡ Q ( x ) P ( x ) . {\displaystyle D_{\text{KL}}(P\parallel Q)=-\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {Q(x)}{P(x)}}\,.} In other words, it is the expectation of the logarithmic difference between the probabilities P and Q , where the expectation is taken using the probabilities P . Relative entropy is only defined in this way if, for all x , Q ( x ) = 0 {\displaystyle Q(x)=0} implies P ( x ) = 0 {\displaystyle P(x)=0} ( absolute continuity ). Otherwise, it is often defined as + ∞ {\displaystyle +\infty } , [ 1 ] but the value + ∞ {\displaystyle \ +\infty \ } is possible even if Q ( x ) ≠ 0 {\displaystyle Q(x)\neq 0} everywhere, [ 12 ] [ 13 ] provided that X {\displaystyle {\mathcal {X}}} is infinite in extent. Analogous comments apply to the continuous and general measure cases defined below. Whenever P ( x ) {\displaystyle P(x)} is zero the contribution of the corresponding term is interpreted as zero because lim x → 0 + x log ⁡ ( x ) = 0 . {\displaystyle \lim _{x\to 0^{+}}x\,\log(x)=0\,.} For distributions P and Q of a continuous random variable , relative entropy is defined to be the integral [ 14 ] D KL ( P ∥ Q ) = ∫ − ∞ ∞ p ( x ) log ⁡ p ( x ) q ( x ) d x , {\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{-\infty }^{\infty }p(x)\,\log {\frac {p(x)}{q(x)}}\,dx\,,} where p and q denote the probability densities of P and Q . More generally, if P and Q are probability measures on a measurable space X , {\displaystyle {\mathcal {X}}\,,} and P is absolutely continuous with respect to Q , then the relative entropy from Q to P is defined as D KL ( P ∥ Q ) = ∫ x ∈ X log ⁡ P ( d x ) Q ( d x ) P ( d x ) , {\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}\log {\frac {P(dx)}{Q(dx)}}\,P(dx)\,,} where P ( d x ) Q ( d x ) {\displaystyle {\frac {P(dx)}{Q(dx)}}} is the Radon–Nikodym derivative of P with respect to Q , i.e. the unique Q almost everywhere defined function r on X {\displaystyle {\mathcal {X}}} such that P ( d x ) = r ( x ) Q ( d x ) {\displaystyle P(dx)=r(x)Q(dx)} which exists because P is absolutely continuous with respect to Q . Also we assume the expression on the right-hand side exists. 
Equivalently (by the chain rule ), this can be written as D KL ( P ∥ Q ) = ∫ x ∈ X P ( d x ) Q ( d x ) log ⁡ P ( d x ) Q ( d x ) Q ( d x ) , {\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}{\frac {P(dx)}{Q(dx)}}\ \log {\frac {P(dx)}{Q(dx)}}\ Q(dx)\,,} which is the entropy of P relative to Q . Continuing in this case, if μ {\displaystyle \mu } is any measure on X {\displaystyle {\mathcal {X}}} for which densities p and q with P ( d x ) = p ( x ) μ ( d x ) {\displaystyle P(dx)=p(x)\mu (dx)} and Q ( d x ) = q ( x ) μ ( d x ) {\displaystyle Q(dx)=q(x)\mu (dx)} exist (meaning that P and Q are both absolutely continuous with respect to μ {\displaystyle \mu } ), then the relative entropy from Q to P is given as D KL ( P ∥ Q ) = ∫ x ∈ X p ( x ) log ⁡ p ( x ) q ( x ) μ ( d x ) . {\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}p(x)\,\log {\frac {p(x)}{q(x)}}\ \mu (dx)\,.} Note that such a measure μ {\displaystyle \mu } for which densities can be defined always exists, since one can take μ = 1 2 ( P + Q ) {\textstyle \mu ={\frac {1}{2}}\left(P+Q\right)} although in practice it will usually be one that applies in the context like counting measure for discrete distributions, or Lebesgue measure or a convenient variant thereof like Gaussian measure or the uniform measure on the sphere , Haar measure on a Lie group etc. for continuous distributions. The logarithms in these formulae are usually taken to base 2 if information is measured in units of bits , or to base e if information is measured in nats . Most formulas involving relative entropy hold regardless of the base of the logarithm. Various conventions exist for referring to D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} in words. Often it is referred to as the divergence between P and Q , but this fails to convey the fundamental asymmetry in the relation. Sometimes, as in this article, it may be described as the divergence of P from Q or as the divergence from Q to P . This reflects the asymmetry in Bayesian inference , which starts from a prior Q and updates to the posterior P . Another common way to refer to D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} is as the relative entropy of P with respect to Q or the information gain from P over Q . Kullback [ 3 ] gives the following example (Table 2.1, Example 2.1). Let P and Q be the distributions shown in the table and figure. P is the distribution on the left side of the figure, a binomial distribution with N = 2 {\displaystyle N=2} and p = 0.4 {\displaystyle p=0.4} . Q is the distribution on the right side of the figure, a discrete uniform distribution with the three possible outcomes x = 0 , 1 , 2 (i.e. X = { 0 , 1 , 2 } {\displaystyle {\mathcal {X}}=\{0,1,2\}} ), each with probability p = 1 / 3 {\displaystyle p=1/3} . Relative entropies D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} and D KL ( Q ∥ P ) {\displaystyle D_{\text{KL}}(Q\parallel P)} are calculated as follows. 
This example uses the natural log with base e , designated ln to get results in nats (see units of information ): D KL ( P ∥ Q ) = ∑ x ∈ X P ( x ) ln ⁡ P ( x ) Q ( x ) = 9 25 ln ⁡ 9 / 25 1 / 3 + 12 25 ln ⁡ 12 / 25 1 / 3 + 4 25 ln ⁡ 4 / 25 1 / 3 = 1 25 ( 32 ln ⁡ 2 + 55 ln ⁡ 3 − 50 ln ⁡ 5 ) ≈ 0.0852996 , {\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{x\in {\mathcal {X}}}P(x)\,\ln {\frac {P(x)}{Q(x)}}\\&={\frac {9}{25}}\ln {\frac {9/25}{1/3}}+{\frac {12}{25}}\ln {\frac {12/25}{1/3}}+{\frac {4}{25}}\ln {\frac {4/25}{1/3}}\\&={\frac {1}{25}}\left(32\ln 2+55\ln 3-50\ln 5\right)\\&\approx 0.0852996,\end{aligned}}} D KL ( Q ∥ P ) = ∑ x ∈ X Q ( x ) ln ⁡ Q ( x ) P ( x ) = 1 3 ln ⁡ 1 / 3 9 / 25 + 1 3 ln ⁡ 1 / 3 12 / 25 + 1 3 ln ⁡ 1 / 3 4 / 25 = 1 3 ( − 4 ln ⁡ 2 − 6 ln ⁡ 3 + 6 ln ⁡ 5 ) ≈ 0.097455. {\displaystyle {\begin{aligned}D_{\text{KL}}(Q\parallel P)&=\sum _{x\in {\mathcal {X}}}Q(x)\,\ln {\frac {Q(x)}{P(x)}}\\&={\frac {1}{3}}\,\ln {\frac {1/3}{9/25}}+{\frac {1}{3}}\,\ln {\frac {1/3}{12/25}}+{\frac {1}{3}}\,\ln {\frac {1/3}{4/25}}\\&={\frac {1}{3}}\left(-4\ln 2-6\ln 3+6\ln 5\right)\\&\approx 0.097455.\end{aligned}}} In the field of statistics, the Neyman–Pearson lemma states that the most powerful way to distinguish between the two distributions P and Q based on an observation Y (drawn from one of them) is through the log of the ratio of their likelihoods: log ⁡ P ( Y ) − log ⁡ Q ( Y ) {\displaystyle \log P(Y)-\log Q(Y)} . The KL divergence is the expected value of this statistic if Y is actually drawn from P . Kullback motivated the statistic as an expected log likelihood ratio. [ 15 ] In the context of coding theory , D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} can be constructed by measuring the expected number of extra bits required to code samples from P using a code optimized for Q rather than the code optimized for P . In the context of machine learning , D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} is often called the information gain achieved if P would be used instead of Q which is currently used. By analogy with information theory, it is called the relative entropy of P with respect to Q . Expressed in the language of Bayesian inference , D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} is a measure of the information gained by revising one's beliefs from the prior probability distribution Q to the posterior probability distribution P . In other words, it is the amount of information lost when Q is used to approximate P . [ 16 ] In applications, P typically represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution, while Q typically represents a theory, model, description, or approximation of P . In order to find a distribution Q that is closest to P , we can minimize the KL divergence and compute an information projection . While it is a statistical distance , it is not a metric , the most familiar type of distance, but instead it is a divergence . [ 4 ] While metrics are symmetric and generalize linear distance, satisfying the triangle inequality , divergences are asymmetric and generalize squared distance, in some cases satisfying a generalized Pythagorean theorem . In general D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} does not equal D KL ( Q ∥ P ) {\displaystyle D_{\text{KL}}(Q\parallel P)} , and the asymmetry is an important part of the geometry. 
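The two values computed above can be reproduced in a few lines; scipy.stats.entropy(p, q) returns the relative entropy in nats when given two distributions, which also makes the asymmetry D_KL(P∥Q) ≠ D_KL(Q∥P) explicit. A minimal sketch:

# Reproduce the worked example above: P = binomial(N=2, p=0.4) on {0,1,2},
# Q = uniform on {0,1,2}. scipy.stats.entropy(p, q) gives D_KL(p || q) in nats.
from scipy.stats import entropy

P = [9/25, 12/25, 4/25]       # 0.36, 0.48, 0.16
Q = [1/3, 1/3, 1/3]

print(entropy(P, Q))          # ~0.0852996 nats, D_KL(P || Q)
print(entropy(Q, P))          # ~0.0974550 nats, D_KL(Q || P); note the asymmetry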
[ 4 ] The infinitesimal form of relative entropy, specifically its Hessian , gives a metric tensor that equals the Fisher information metric ; see § Fisher information metric . Fisher information metric on the certain probability distribution let determine the natural gradient for information-geometric optimization algorithms. [ 17 ] Its quantum version is Fubini-study metric. [ 18 ] Relative entropy satisfies a generalized Pythagorean theorem for exponential families (geometrically interpreted as dually flat manifolds ), and this allows one to minimize relative entropy by geometric means, for example by information projection and in maximum likelihood estimation . [ 5 ] The relative entropy is the Bregman divergence generated by the negative entropy, but it is also of the form of an f -divergence . For probabilities over a finite alphabet , it is unique in being a member of both of these classes of statistical divergences . The application of Bregman divergence can be found in mirror descent. [ 19 ] Consider a growth-optimizing investor in a fair game with mutually exclusive outcomes (e.g. a “horse race” in which the official odds add up to one). The rate of return expected by such an investor is equal to the relative entropy between the investor's believed probabilities and the official odds. [ 20 ] This is a special case of a much more general connection between financial returns and divergence measures. [ 21 ] Financial risks are connected to D KL {\displaystyle D_{\text{KL}}} via information geometry. [ 22 ] Investors' views, the prevailing market view, and risky scenarios form triangles on the relevant manifold of probability distributions. The shape of the triangles determines key financial risks (both qualitatively and quantitatively). For instance, obtuse triangles in which investors' views and risk scenarios appear on “opposite sides” relative to the market describe negative risks, acute triangles describe positive exposure, and the right-angled situation in the middle corresponds to zero risk. Extending this concept, relative entropy can be hypothetically utilised to identify the behaviour of informed investors, if one takes this to be represented by the magnitude and deviations away from the prior expectations of fund flows, for example. [ 23 ] In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value x i {\displaystyle x_{i}} out of a set of possibilities X can be seen as representing an implicit probability distribution q ( x i ) = 2 − ℓ i {\displaystyle q(x_{i})=2^{-\ell _{i}}} over X , where ℓ i {\displaystyle \ell _{i}} is the length of the code for x i {\displaystyle x_{i}} in bits. Therefore, relative entropy can be interpreted as the expected extra message-length per datum that must be communicated if a code that is optimal for a given (wrong) distribution Q is used, compared to using a code based on the true distribution P : it is the excess entropy. 
D KL ( P ∥ Q ) = ∑ x ∈ X p ( x ) log ⁡ 1 q ( x ) − ∑ x ∈ X p ( x ) log ⁡ 1 p ( x ) = H ( P , Q ) − H ( P ) {\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{x\in {\mathcal {X}}}p(x)\log {\frac {1}{q(x)}}-\sum _{x\in {\mathcal {X}}}p(x)\log {\frac {1}{p(x)}}\\[5pt]&=\mathrm {H} (P,Q)-\mathrm {H} (P)\end{aligned}}} where H ( P , Q ) {\displaystyle \mathrm {H} (P,Q)} is the cross entropy of Q relative to P and H ( P ) {\displaystyle \mathrm {H} (P)} is the entropy of P (which is the same as the cross-entropy of P with itself). The relative entropy D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} can be thought of geometrically as a statistical distance , a measure of how far the distribution Q is from the distribution P . Geometrically it is a divergence : an asymmetric, generalized form of squared distance. The cross-entropy H ( P , Q ) {\displaystyle H(P,Q)} is itself such a measurement (formally a loss function ), but it cannot be thought of as a distance, since H ( P , P ) =: H ( P ) {\displaystyle H(P,P)=:H(P)} is not zero. This can be fixed by subtracting H ( P ) {\displaystyle H(P)} to make D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} agree more closely with our notion of distance, as the excess loss. The resulting function is asymmetric, and while this can be symmetrized (see § Symmetrised divergence ), the asymmetric form is more useful. See § Interpretations for more on the geometric interpretation. Relative entropy relates to " rate function " in the theory of large deviations . [ 24 ] [ 25 ] Arthur Hobson proved that relative entropy is the only measure of difference between probability distributions that satisfies some desired properties, which are the canonical extension to those appearing in a commonly used characterization of entropy . [ 26 ] Consequently, mutual information is the only measure of mutual dependence that obeys certain related conditions, since it can be defined in terms of Kullback–Leibler divergence . In particular, if P ( d x ) = p ( x ) μ ( d x ) {\displaystyle P(dx)=p(x)\mu (dx)} and Q ( d x ) = q ( x ) μ ( d x ) {\displaystyle Q(dx)=q(x)\mu (dx)} , then p ( x ) = q ( x ) {\displaystyle p(x)=q(x)} μ {\displaystyle \mu } - almost everywhere . The entropy H ( P ) {\displaystyle \mathrm {H} (P)} thus sets a minimum value for the cross-entropy H ( P , Q ) {\displaystyle \mathrm {H} (P,Q)} , the expected number of bits required when using a code based on Q rather than P ; and the Kullback–Leibler divergence therefore represents the expected number of extra bits that must be transmitted to identify a value x drawn from X , if a code is used corresponding to the probability distribution Q , rather than the "true" distribution P . Denote f ( α ) := D KL ( ( 1 − α ) Q + α P ∥ Q ) {\displaystyle f(\alpha ):=D_{\text{KL}}((1-\alpha )Q+\alpha P\parallel Q)} and note that D KL ( P ∥ Q ) = f ( 1 ) {\displaystyle D_{\text{KL}}(P\parallel Q)=f(1)} . 
The first derivative of f {\displaystyle f} may be derived and evaluated as follows f ′ ( α ) = ∑ x ∈ X ( P ( x ) − Q ( x ) ) ( log ⁡ ( ( 1 − α ) Q ( x ) + α P ( x ) Q ( x ) ) + 1 ) = ∑ x ∈ X ( P ( x ) − Q ( x ) ) log ⁡ ( ( 1 − α ) Q ( x ) + α P ( x ) Q ( x ) ) f ′ ( 0 ) = 0 {\displaystyle {\begin{aligned}f'(\alpha )&=\sum _{x\in {\mathcal {X}}}(P(x)-Q(x))\left(\log \left({\frac {(1-\alpha )Q(x)+\alpha P(x)}{Q(x)}}\right)+1\right)\\&=\sum _{x\in {\mathcal {X}}}(P(x)-Q(x))\log \left({\frac {(1-\alpha )Q(x)+\alpha P(x)}{Q(x)}}\right)\\f'(0)&=0\end{aligned}}} Further derivatives may be derived and evaluated as follows f ″ ( α ) = ∑ x ∈ X ( P ( x ) − Q ( x ) ) 2 ( 1 − α ) Q ( x ) + α P ( x ) f ″ ( 0 ) = ∑ x ∈ X ( P ( x ) − Q ( x ) ) 2 Q ( x ) f ( n ) ( α ) = ( − 1 ) n ( n − 2 ) ! ∑ x ∈ X ( P ( x ) − Q ( x ) ) n ( ( 1 − α ) Q ( x ) + α P ( x ) ) n − 1 f ( n ) ( 0 ) = ( − 1 ) n ( n − 2 ) ! ∑ x ∈ X ( P ( x ) − Q ( x ) ) n Q ( x ) n − 1 {\displaystyle {\begin{aligned}f''(\alpha )&=\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{2}}{(1-\alpha )Q(x)+\alpha P(x)}}\\f''(0)&=\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{2}}{Q(x)}}\\f^{(n)}(\alpha )&=(-1)^{n}(n-2)!\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{n}}{\left((1-\alpha )Q(x)+\alpha P(x)\right)^{n-1}}}\\f^{(n)}(0)&=(-1)^{n}(n-2)!\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{n}}{Q(x)^{n-1}}}\end{aligned}}} Hence solving for D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} via the Taylor expansion of f {\displaystyle f} about 0 {\displaystyle 0} evaluated at α = 1 {\displaystyle \alpha =1} yields D KL ( P ∥ Q ) = ∑ n = 0 ∞ f ( n ) ( 0 ) n ! = ∑ n = 2 ∞ 1 n ( n − 1 ) ∑ x ∈ X ( Q ( x ) − P ( x ) ) n Q ( x ) n − 1 {\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{n=0}^{\infty }{\frac {f^{(n)}(0)}{n!}}\\&=\sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}{\frac {(Q(x)-P(x))^{n}}{Q(x)^{n-1}}}\end{aligned}}} P ≤ 2 Q {\displaystyle P\leq 2Q} a.s. is a sufficient condition for convergence of the series by the following absolute convergence argument ∑ n = 2 ∞ | 1 n ( n − 1 ) ∑ x ∈ X ( Q ( x ) − P ( x ) ) n Q ( x ) n − 1 | = ∑ n = 2 ∞ 1 n ( n − 1 ) ∑ x ∈ X | Q ( x ) − P ( x ) | | 1 − P ( x ) Q ( x ) | n − 1 ≤ ∑ n = 2 ∞ 1 n ( n − 1 ) ∑ x ∈ X | Q ( x ) − P ( x ) | ≤ ∑ n = 2 ∞ 1 n ( n − 1 ) = 1 {\displaystyle {\begin{aligned}\sum _{n=2}^{\infty }\left\vert {\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}{\frac {(Q(x)-P(x))^{n}}{Q(x)^{n-1}}}\right\vert &=\sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}\left\vert Q(x)-P(x)\right\vert \left\vert 1-{\frac {P(x)}{Q(x)}}\right\vert ^{n-1}\\&\leq \sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}\left\vert Q(x)-P(x)\right\vert \\&\leq \sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\\&=1\end{aligned}}} P ≤ 2 Q {\displaystyle P\leq 2Q} a.s. is also a necessary condition for convergence of the series by the following proof by contradiction. Assume that P > 2 Q {\displaystyle P>2Q} with measure strictly greater than 0 {\displaystyle 0} . It then follows that there must exist some values ε > 0 {\displaystyle \varepsilon >0} , ρ > 0 {\displaystyle \rho >0} , and U < ∞ {\displaystyle U<\infty } such that P ≥ 2 Q + ε {\displaystyle P\geq 2Q+\varepsilon } and Q ≤ U {\displaystyle Q\leq U} with measure ρ {\displaystyle \rho } . 
The previous proof of sufficiency demonstrated that the measure 1 − ρ {\displaystyle 1-\rho } component of the series where P ≤ 2 Q {\displaystyle P\leq 2Q} is bounded, so we need only concern ourselves with the behavior of the measure ρ {\displaystyle \rho } component of the series where P ≥ 2 Q + ε {\displaystyle P\geq 2Q+\varepsilon } . The absolute value of the n {\displaystyle n} th term of this component of the series is then lower bounded by 1 n ( n − 1 ) ρ ( 1 + ε U ) n {\displaystyle {\frac {1}{n(n-1)}}\rho \left(1+{\frac {\varepsilon }{U}}\right)^{n}} , which is unbounded as n → ∞ {\displaystyle n\to \infty } , so the series diverges. The following result, due to Donsker and Varadhan, [ 29 ] is known as Donsker and Varadhan's variational formula . Theorem [Duality Formula for Variational Inference] — Let Θ {\displaystyle \Theta } be a set endowed with an appropriate σ {\displaystyle \sigma } -field F {\displaystyle {\mathcal {F}}} , and two probability measures P and Q , which formulate two probability spaces ( Θ , F , P ) {\displaystyle (\Theta ,{\mathcal {F}},P)} and ( Θ , F , Q ) {\displaystyle (\Theta ,{\mathcal {F}},Q)} , with Q ≪ P {\displaystyle Q\ll P} . ( Q ≪ P {\displaystyle Q\ll P} indicates that Q is absolutely continuous with respect to P .) Let h be a real-valued integrable random variable on ( Θ , F , P ) {\displaystyle (\Theta ,{\mathcal {F}},P)} . Then the following equality holds log ⁡ E P [ exp ⁡ h ] = sup Q ≪ P ⁡ { E Q [ h ] − D KL ( Q ∥ P ) } . {\displaystyle \log E_{P}[\exp h]=\operatorname {sup} _{Q\ll P}\{E_{Q}[h]-D_{\text{KL}}(Q\parallel P)\}.} Further, the supremum on the right-hand side is attained if and only if it holds Q ( d θ ) P ( d θ ) = exp ⁡ h ( θ ) E P [ exp ⁡ h ] , {\displaystyle {\frac {Q(d\theta )}{P(d\theta )}}={\frac {\exp h(\theta )}{E_{P}[\exp h]}},} almost surely with respect to probability measure P , where Q ( d θ ) P ( d θ ) {\displaystyle {\frac {Q(d\theta )}{P(d\theta )}}} denotes the Radon-Nikodym derivative of Q with respect to P . For a short proof assuming integrability of exp ⁡ ( h ) {\displaystyle \exp(h)} with respect to P , let Q ∗ {\displaystyle Q^{*}} have P -density exp ⁡ h ( θ ) E P [ exp ⁡ h ] {\displaystyle {\frac {\exp h(\theta )}{E_{P}[\exp h]}}} , i.e. Q ∗ ( d θ ) = exp ⁡ h ( θ ) E P [ exp ⁡ h ] P ( d θ ) {\displaystyle Q^{*}(d\theta )={\frac {\exp h(\theta )}{E_{P}[\exp h]}}P(d\theta )} Then D KL ( Q ∥ Q ∗ ) − D KL ( Q ∥ P ) = − E Q [ h ] + log ⁡ E P [ exp ⁡ h ] . {\displaystyle D_{\text{KL}}(Q\parallel Q^{*})-D_{\text{KL}}(Q\parallel P)=-E_{Q}[h]+\log E_{P}[\exp h].} Therefore, E Q [ h ] − D KL ( Q ∥ P ) = log ⁡ E P [ exp ⁡ h ] − D KL ( Q ∥ Q ∗ ) ≤ log ⁡ E P [ exp ⁡ h ] , {\displaystyle E_{Q}[h]-D_{\text{KL}}(Q\parallel P)=\log E_{P}[\exp h]-D_{\text{KL}}(Q\parallel Q^{*})\leq \log E_{P}[\exp h],} where the last inequality follows from D KL ( Q ∥ Q ∗ ) ≥ 0 {\displaystyle D_{\text{KL}}(Q\parallel Q^{*})\geq 0} , for which equality occurs if and only if Q = Q ∗ {\displaystyle Q=Q^{*}} . The conclusion follows. Suppose that we have two multivariate normal distributions , with means μ 0 , μ 1 {\displaystyle \mu _{0},\mu _{1}} and with (non-singular) covariance matrices Σ 0 , Σ 1 . {\displaystyle \Sigma _{0},\Sigma _{1}.} If the two distributions have the same dimension, k , then the relative entropy between the distributions is as follows: [ 30 ] D KL ( N 0 ∥ N 1 ) = 1 2 [ tr ⁡ ( Σ 1 − 1 Σ 0 ) − k + ( μ 1 − μ 0 ) T Σ 1 − 1 ( μ 1 − μ 0 ) + ln ⁡ det Σ 1 det Σ 0 ] . 
{\displaystyle D_{\text{KL}}\left({\mathcal {N}}_{0}\parallel {\mathcal {N}}_{1}\right)={\frac {1}{2}}\left[\operatorname {tr} \left(\Sigma _{1}^{-1}\Sigma _{0}\right)-k+\left(\mu _{1}-\mu _{0}\right)^{\mathsf {T}}\Sigma _{1}^{-1}\left(\mu _{1}-\mu _{0}\right)+\ln {\frac {\det \Sigma _{1}}{\det \Sigma _{0}}}\right].} The logarithm in the last term must be taken to base e since all terms apart from the last are base- e logarithms of expressions that are either factors of the density function or otherwise arise naturally. The equation therefore gives a result measured in nats . Dividing the entire expression above by ln ⁡ ( 2 ) {\displaystyle \ln(2)} yields the divergence in bits . In a numerical implementation, it is helpful to express the result in terms of the Cholesky decompositions L 0 , L 1 {\displaystyle L_{0},L_{1}} such that Σ 0 = L 0 L 0 T {\displaystyle \Sigma _{0}=L_{0}L_{0}^{T}} and Σ 1 = L 1 L 1 T {\displaystyle \Sigma _{1}=L_{1}L_{1}^{T}} . Then with M and y solutions to the triangular linear systems L 1 M = L 0 {\displaystyle L_{1}M=L_{0}} , and L 1 y = μ 1 − μ 0 {\displaystyle L_{1}y=\mu _{1}-\mu _{0}} , D KL ( N 0 ∥ N 1 ) = 1 2 ( ∑ i , j = 1 k ( M i j ) 2 − k + | y | 2 + 2 ∑ i = 1 k ln ⁡ ( L 1 ) i i ( L 0 ) i i ) . {\displaystyle D_{\text{KL}}\left({\mathcal {N}}_{0}\parallel {\mathcal {N}}_{1}\right)={\frac {1}{2}}\left(\sum _{i,j=1}^{k}{\left(M_{ij}\right)}^{2}-k+|y|^{2}+2\sum _{i=1}^{k}\ln {\frac {(L_{1})_{ii}}{(L_{0})_{ii}}}\right).} A special case, and a common quantity in variational inference , is the relative entropy between a diagonal multivariate normal, and a standard normal distribution (with zero mean and unit variance): D KL ( N ( ( μ 1 , … , μ k ) T , diag ⁡ ( σ 1 2 , … , σ k 2 ) ) ∥ N ( 0 , I ) ) = 1 2 ∑ i = 1 k [ σ i 2 + μ i 2 − 1 − ln ⁡ ( σ i 2 ) ] . {\displaystyle D_{\text{KL}}\left({\mathcal {N}}\left(\left(\mu _{1},\ldots ,\mu _{k}\right)^{\mathsf {T}},\operatorname {diag} \left(\sigma _{1}^{2},\ldots ,\sigma _{k}^{2}\right)\right)\parallel {\mathcal {N}}\left(\mathbf {0} ,\mathbf {I} \right)\right)={\frac {1}{2}}\sum _{i=1}^{k}\left[\sigma _{i}^{2}+\mu _{i}^{2}-1-\ln \left(\sigma _{i}^{2}\right)\right].} For two univariate normal distributions p and q the above simplifies to [ 31 ] D KL ( p ∥ q ) = log ⁡ σ 1 σ 0 + σ 0 2 + ( μ 0 − μ 1 ) 2 2 σ 1 2 − 1 2 {\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log {\frac {\sigma _{1}}{\sigma _{0}}}+{\frac {\sigma _{0}^{2}+{\left(\mu _{0}-\mu _{1}\right)}^{2}}{2\sigma _{1}^{2}}}-{\frac {1}{2}}} In the case of co-centered normal distributions with k = σ 1 / σ 0 {\displaystyle k=\sigma _{1}/\sigma _{0}} , this simplifies [ 32 ] to: D KL ( p ∥ q ) = log 2 ⁡ k + ( k − 2 − 1 ) / 2 / ln ⁡ ( 2 ) b i t s {\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log _{2}k+(k^{-2}-1)/2/\ln(2)\mathrm {bits} } Consider two uniform distributions, with the support of p = [ A , B ] {\displaystyle p=[A,B]} enclosed within q = [ C , D ] {\displaystyle q=[C,D]} ( C ≤ A < B ≤ D {\displaystyle C\leq A<B\leq D} ). Then the information gain is: D KL ( p ∥ q ) = log ⁡ D − C B − A {\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log {\frac {D-C}{B-A}}} Intuitively, [ 32 ] the information gain to a k times narrower uniform distribution contains log 2 ⁡ k {\displaystyle \log _{2}k} bits. This connects with the use of bits in computing, where log 2 ⁡ k {\displaystyle \log _{2}k} bits would be needed to identify one element of a k long stream. 
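The closed-form Gaussian expressions above are straightforward to implement. The sketch below codes the general formula directly (via the inverse and determinant rather than the Cholesky route, for brevity) and checks it against the diagonal-versus-standard-normal special case used in variational inference; the variable names and test values are illustrative.

# KL divergence between multivariate normals, from the closed form above (nats).
import numpy as np

def kl_mvn(mu0, S0, mu1, S1):
    k = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) - k
                  + diff @ S1_inv @ diff
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def kl_diag_vs_standard(mu, sigma2):
    # D_KL( N(mu, diag(sigma2)) || N(0, I) ), the variational-inference special case
    return 0.5 * np.sum(sigma2 + mu**2 - 1.0 - np.log(sigma2))

mu = np.array([0.3, -0.7])
sigma2 = np.array([0.5, 2.0])
full = kl_mvn(mu, np.diag(sigma2), np.zeros(2), np.eye(2))
print(full, kl_diag_vs_standard(mu, sigma2))   # the two agree (~0.54 nats)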
The exponential family of distribution is given by p X ( x | θ ) = h ( x ) exp ⁡ ( θ T T ( x ) − A ( θ ) ) {\displaystyle p_{X}(x|\theta )=h(x)\exp \left(\theta ^{\mathsf {T}}T(x)-A(\theta )\right)} where h ( x ) {\displaystyle h(x)} is reference measure, T ( x ) {\displaystyle T(x)} is sufficient statistics, θ {\displaystyle \theta } is canonical natural parameters, and A ( θ ) {\displaystyle A(\theta )} is the log-partition function. The KL divergence between two distributions p ( x | θ 1 ) {\displaystyle p(x|\theta _{1})} and p ( x | θ 2 ) {\displaystyle p(x|\theta _{2})} is given by [ 33 ] D KL ( θ 1 ∥ θ 2 ) = ( θ 1 − θ 2 ) T μ 1 − A ( θ 1 ) + A ( θ 2 ) {\displaystyle D_{\text{KL}}(\theta _{1}\parallel \theta _{2})={\left(\theta _{1}-\theta _{2}\right)}^{\mathsf {T}}\mu _{1}-A(\theta _{1})+A(\theta _{2})} where μ 1 = E θ 1 [ T ( X ) ] = ∇ A ( θ 1 ) {\displaystyle \mu _{1}=E_{\theta _{1}}[T(X)]=\nabla A(\theta _{1})} is the mean parameter of p ( x | θ 1 ) {\displaystyle p(x|\theta _{1})} . For example, for the Poisson distribution with mean λ {\displaystyle \lambda } , the sufficient statistics T ( x ) = x {\displaystyle T(x)=x} , the natural parameter θ = log ⁡ λ {\displaystyle \theta =\log \lambda } , and log partition function A ( θ ) = e θ {\displaystyle A(\theta )=e^{\theta }} . As such, the divergence between two Poisson distributions with means λ 1 {\displaystyle \lambda _{1}} and λ 2 {\displaystyle \lambda _{2}} is D KL ( λ 1 ∥ λ 2 ) = λ 1 log ⁡ λ 1 λ 2 − λ 1 + λ 2 . {\displaystyle D_{\text{KL}}(\lambda _{1}\parallel \lambda _{2})=\lambda _{1}\log {\frac {\lambda _{1}}{\lambda _{2}}}-\lambda _{1}+\lambda _{2}.} As another example, for a normal distribution with unit variance N ( μ , 1 ) {\displaystyle N(\mu ,1)} , the sufficient statistics T ( x ) = x {\displaystyle T(x)=x} , the natural parameter θ = μ {\displaystyle \theta =\mu } , and log partition function A ( θ ) = μ 2 / 2 {\displaystyle A(\theta )=\mu ^{2}/2} . Thus, the divergence between two normal distributions N ( μ 1 , 1 ) {\displaystyle N(\mu _{1},1)} and N ( μ 2 , 1 ) {\displaystyle N(\mu _{2},1)} is D KL ( μ 1 ∥ μ 2 ) = ( μ 1 − μ 2 ) μ 1 − μ 1 2 2 + μ 2 2 2 = ( μ 2 − μ 1 ) 2 2 . {\displaystyle D_{\text{KL}}(\mu _{1}\parallel \mu _{2})=\left(\mu _{1}-\mu _{2}\right)\mu _{1}-{\frac {\mu _{1}^{2}}{2}}+{\frac {\mu _{2}^{2}}{2}}={\frac {{\left(\mu _{2}-\mu _{1}\right)}^{2}}{2}}.} As final example, the divergence between a normal distribution with unit variance N ( μ , 1 ) {\displaystyle N(\mu ,1)} and a Poisson distribution with mean λ {\displaystyle \lambda } is D KL ( μ ∥ λ ) = ( μ − log ⁡ λ ) μ − μ 2 2 + λ . {\displaystyle D_{\text{KL}}(\mu \parallel \lambda )=(\mu -\log \lambda )\mu -{\frac {\mu ^{2}}{2}}+\lambda .} While relative entropy is a statistical distance , it is not a metric on the space of probability distributions, but instead it is a divergence . [ 4 ] While metrics are symmetric and generalize linear distance, satisfying the triangle inequality , divergences are asymmetric in general and generalize squared distance, in some cases satisfying a generalized Pythagorean theorem . In general D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} does not equal D KL ( Q ∥ P ) {\displaystyle D_{\text{KL}}(Q\parallel P)} , and while this can be symmetrized (see § Symmetrised divergence ), the asymmetry is an important part of the geometry. [ 4 ] It generates a topology on the space of probability distributions . 
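A quick numerical check of the exponential-family result above: the closed-form Poisson divergence λ₁ log(λ₁/λ₂) − λ₁ + λ₂ agrees with a direct (truncated) sum over the Poisson probability mass function. The truncation point and the example rates are arbitrary choices.

# Poisson KL: closed form vs. direct summation over the pmf (nats).
import math

def kl_poisson_closed(lam1, lam2):
    return lam1 * math.log(lam1 / lam2) - lam1 + lam2

def log_pmf(lam, k):
    # log of the Poisson pmf, written with lgamma to avoid overflow
    return -lam + k * math.log(lam) - math.lgamma(k + 1)

def kl_poisson_direct(lam1, lam2, kmax=100):
    return sum(math.exp(log_pmf(lam1, k)) * (log_pmf(lam1, k) - log_pmf(lam2, k))
               for k in range(kmax))

print(kl_poisson_closed(3.0, 5.0))    # ~0.4675
print(kl_poisson_direct(3.0, 5.0))    # agrees to high precision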
More concretely, if { P 1 , P 2 , … } {\displaystyle \{P_{1},P_{2},\ldots \}} is a sequence of distributions such that lim n → ∞ D KL ( P n ∥ Q ) = 0 , {\displaystyle \lim _{n\to \infty }D_{\text{KL}}(P_{n}\parallel Q)=0,} then it is said that P n → D Q . {\displaystyle P_{n}\xrightarrow {D} \,Q.} Pinsker's inequality entails that P n → D P ⇒ P n → T V P , {\displaystyle P_{n}\xrightarrow {D} P\Rightarrow P_{n}\xrightarrow {TV} P,} where the latter stands for the usual convergence in total variation . Relative entropy is directly related to the Fisher information metric . This can be made explicit as follows. Assume that the probability distributions P and Q are both parameterized by some (possibly multi-dimensional) parameter θ {\displaystyle \theta } . Consider then two close by values of P = P ( θ ) {\displaystyle P=P(\theta )} and Q = P ( θ 0 ) {\displaystyle Q=P(\theta _{0})} so that the parameter θ {\displaystyle \theta } differs by only a small amount from the parameter value θ 0 {\displaystyle \theta _{0}} . Specifically, up to first order one has (using the Einstein summation convention ) P ( θ ) = P ( θ 0 ) + Δ θ j P j ( θ 0 ) + ⋯ {\displaystyle P(\theta )=P(\theta _{0})+\Delta \theta _{j}\,P_{j}(\theta _{0})+\cdots } with Δ θ j = ( θ − θ 0 ) j {\displaystyle \Delta \theta _{j}=(\theta -\theta _{0})_{j}} a small change of θ {\displaystyle \theta } in the j direction, and P j ( θ 0 ) = ∂ P ∂ θ j ( θ 0 ) {\displaystyle P_{j}\left(\theta _{0}\right)={\frac {\partial P}{\partial \theta _{j}}}(\theta _{0})} the corresponding rate of change in the probability distribution. Since relative entropy has an absolute minimum 0 for P = Q {\displaystyle P=Q} , i.e. θ = θ 0 {\displaystyle \theta =\theta _{0}} , it changes only to second order in the small parameters Δ θ j {\displaystyle \Delta \theta _{j}} . More formally, as for any minimum, the first derivatives of the divergence vanish ∂ ∂ θ j | θ = θ 0 D KL ( P ( θ ) ∥ P ( θ 0 ) ) = 0 , {\displaystyle \left.{\frac {\partial }{\partial \theta _{j}}}\right|_{\theta =\theta _{0}}D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))=0,} and by the Taylor expansion one has up to second order D KL ( P ( θ ) ∥ P ( θ 0 ) ) = 1 2 Δ θ j Δ θ k g j k ( θ 0 ) + ⋯ {\displaystyle D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))={\frac {1}{2}}\,\Delta \theta _{j}\,\Delta \theta _{k}\,g_{jk}(\theta _{0})+\cdots } where the Hessian matrix of the divergence g j k ( θ 0 ) = ∂ 2 ∂ θ j ∂ θ k | θ = θ 0 D KL ( P ( θ ) ∥ P ( θ 0 ) ) {\displaystyle g_{jk}(\theta _{0})=\left.{\frac {\partial ^{2}}{\partial \theta _{j}\,\partial \theta _{k}}}\right|_{\theta =\theta _{0}}D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))} must be positive semidefinite . Letting θ 0 {\displaystyle \theta _{0}} vary (and dropping the subindex 0) the Hessian g j k ( θ ) {\displaystyle g_{jk}(\theta )} defines a (possibly degenerate) Riemannian metric on the θ parameter space, called the Fisher information metric. 
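The statement that the Hessian of the divergence at θ = θ₀ is the Fisher information can be checked numerically for a one-parameter family. The sketch below uses a Bernoulli(θ) family, whose Fisher information 1/(θ₀(1 − θ₀)) is a standard result; the step size and test value are illustrative.

# Hessian of D_KL(P(theta) || P(theta0)) at theta = theta0 vs. Fisher information,
# for a Bernoulli family. Finite-difference sketch.
import numpy as np

def kl_bernoulli(t, t0):
    return t * np.log(t / t0) + (1 - t) * np.log((1 - t) / (1 - t0))

theta0 = 0.3
eps = 1e-4
# second central difference in the first argument, evaluated at theta0
hessian = (kl_bernoulli(theta0 + eps, theta0)
           - 2 * kl_bernoulli(theta0, theta0)
           + kl_bernoulli(theta0 - eps, theta0)) / eps**2

print(hessian, 1 / (theta0 * (1 - theta0)))   # both ~4.7619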
When p ( x , ρ ) {\displaystyle p_{(x,\rho )}} satisfies the following regularity conditions: ∂ log ⁡ ( p ) ∂ ρ , ∂ 2 log ⁡ ( p ) ∂ ρ 2 , ∂ 3 log ⁡ ( p ) ∂ ρ 3 {\displaystyle {\frac {\partial \log(p)}{\partial \rho }},{\frac {\partial ^{2}\log(p)}{\partial \rho ^{2}}},{\frac {\partial ^{3}\log(p)}{\partial \rho ^{3}}}} exist, | ∂ p ∂ ρ | < F ( x ) : ∫ x = 0 ∞ F ( x ) d x < ∞ , | ∂ 2 p ∂ ρ 2 | < G ( x ) : ∫ x = 0 ∞ G ( x ) d x < ∞ | ∂ 3 log ⁡ ( p ) ∂ ρ 3 | < H ( x ) : ∫ x = 0 ∞ p ( x , 0 ) H ( x ) d x < ξ < ∞ {\displaystyle {\begin{aligned}\left|{\frac {\partial p}{\partial \rho }}\right|&<F(x):\int _{x=0}^{\infty }F(x)\,dx<\infty ,\\\left|{\frac {\partial ^{2}p}{\partial \rho ^{2}}}\right|&<G(x):\int _{x=0}^{\infty }G(x)\,dx<\infty \\\left|{\frac {\partial ^{3}\log(p)}{\partial \rho ^{3}}}\right|&<H(x):\int _{x=0}^{\infty }p(x,0)H(x)\,dx<\xi <\infty \end{aligned}}} where ξ is independent of ρ ∫ x = 0 ∞ ∂ p ( x , ρ ) ∂ ρ | ρ = 0 d x = ∫ x = 0 ∞ ∂ 2 p ( x , ρ ) ∂ ρ 2 | ρ = 0 d x = 0 {\displaystyle \left.\int _{x=0}^{\infty }{\frac {\partial p(x,\rho )}{\partial \rho }}\right|_{\rho =0}\,dx=\left.\int _{x=0}^{\infty }{\frac {\partial ^{2}p(x,\rho )}{\partial \rho ^{2}}}\right|_{\rho =0}\,dx=0} then: D ( p ( x , 0 ) ∥ p ( x , ρ ) ) = c ρ 2 2 + O ( ρ 3 ) as ρ → 0. {\displaystyle {\mathcal {D}}(p(x,0)\parallel p(x,\rho ))={\frac {c\rho ^{2}}{2}}+{\mathcal {O}}\left(\rho ^{3}\right){\text{ as }}\rho \to 0.} Another information-theoretic metric is variation of information , which is roughly a symmetrization of conditional entropy . It is a metric on the set of partitions of a discrete probability space . MAUVE is a measure of the statistical gap between two text distributions, such as the difference between text generated by a model and human-written text. This measure is computed using Kullback–Leibler divergences between the two distributions in a quantized embedding space of a foundation model. Many of the other quantities of information theory can be interpreted as applications of relative entropy to specific cases. The self-information , also known as the information content of a signal, random variable, or event is defined as the negative logarithm of the probability of the given outcome occurring. When applied to a discrete random variable , the self-information can be represented as [ citation needed ] I ⁡ ( m ) = D KL ( δ im ∥ { p i } ) , {\displaystyle \operatorname {\operatorname {I} } (m)=D_{\text{KL}}\left(\delta _{\text{im}}\parallel \{p_{i}\}\right),} is the relative entropy of the probability distribution P ( i ) {\displaystyle P(i)} from a Kronecker delta representing certainty that i = m {\displaystyle i=m} — i.e. the number of extra bits that must be transmitted to identify i if only the probability distribution P ( i ) {\displaystyle P(i)} is available to the receiver, not the fact that i = m {\displaystyle i=m} . The mutual information , I ⁡ ( X ; Y ) = D KL ( P ( X , Y ) ∥ P ( X ) P ( Y ) ) = E X ⁡ { D KL ( P ( Y ∣ X ) ∥ P ( Y ) ) } = E Y ⁡ { D KL ( P ( X ∣ Y ) ∥ P ( X ) ) } {\displaystyle {\begin{aligned}\operatorname {I} (X;Y)&=D_{\text{KL}}(P(X,Y)\parallel P(X)P(Y))\\[5pt]&=\operatorname {E} _{X}\{D_{\text{KL}}(P(Y\mid X)\parallel P(Y))\}\\[5pt]&=\operatorname {E} _{Y}\{D_{\text{KL}}(P(X\mid Y)\parallel P(X))\}\end{aligned}}} is the relative entropy of the joint probability distribution P ( X , Y ) {\displaystyle P(X,Y)} from the product P ( X ) P ( Y ) {\displaystyle P(X)P(Y)} of the two marginal probability distributions — i.e. 
the expected number of extra bits that must be transmitted to identify X and Y if they are coded using only their marginal distributions instead of the joint distribution. Equivalently, if the joint probability P ( X , Y ) {\displaystyle P(X,Y)} is known, it is the expected number of extra bits that must on average be sent to identify Y if the value of X is not already known to the receiver. The Shannon entropy , H ( X ) = E ⁡ [ I X ⁡ ( x ) ] = log ⁡ N − D KL ( p X ( x ) ∥ P U ( X ) ) {\displaystyle {\begin{aligned}\mathrm {H} (X)&=\operatorname {E} \left[\operatorname {I} _{X}(x)\right]\\&=\log N-D_{\text{KL}}{\left(p_{X}(x)\parallel P_{U}(X)\right)}\end{aligned}}} is the number of bits which would have to be transmitted to identify X from N equally likely possibilities, less the relative entropy of the uniform distribution on the random variates of X , P U ( X ) {\displaystyle P_{U}(X)} , from the true distribution P ( X ) {\displaystyle P(X)} — i.e. less the expected number of bits saved, which would have had to be sent if the value of X were coded according to the uniform distribution P U ( X ) {\displaystyle P_{U}(X)} rather than the true distribution P ( X ) {\displaystyle P(X)} . This definition of Shannon entropy forms the basis of E.T. Jaynes 's alternative generalization to continuous distributions, the limiting density of discrete points (as opposed to the usual differential entropy ), which defines the continuous entropy as lim N → ∞ H N ( X ) = log ⁡ N − ∫ p ( x ) log ⁡ p ( x ) m ( x ) d x , {\displaystyle \lim _{N\to \infty }H_{N}(X)=\log N-\int p(x)\log {\frac {p(x)}{m(x)}}\,dx,} which is equivalent to: log ⁡ ( N ) − D KL ( p ( x ) | | m ( x ) ) {\displaystyle \log(N)-D_{\text{KL}}(p(x)||m(x))} The conditional entropy [ 34 ] , H ( X ∣ Y ) = log ⁡ N − D KL ( P ( X , Y ) ∥ P U ( X ) P ( Y ) ) = log ⁡ N − D KL ( P ( X , Y ) ∥ P ( X ) P ( Y ) ) − D KL ( P ( X ) ∥ P U ( X ) ) = H ( X ) − I ⁡ ( X ; Y ) = log ⁡ N − E Y ⁡ [ D KL ( P ( X ∣ Y ) ∥ P U ( X ) ) ] {\displaystyle {\begin{aligned}\mathrm {H} (X\mid Y)&=\log N-D_{\text{KL}}(P(X,Y)\parallel P_{U}(X)P(Y))\\[5pt]&=\log N-D_{\text{KL}}(P(X,Y)\parallel P(X)P(Y))-D_{\text{KL}}(P(X)\parallel P_{U}(X))\\[5pt]&=\mathrm {H} (X)-\operatorname {I} (X;Y)\\[5pt]&=\log N-\operatorname {E} _{Y}\left[D_{\text{KL}}\left(P\left(X\mid Y\right)\parallel P_{U}(X)\right)\right]\end{aligned}}} is the number of bits which would have to be transmitted to identify X from N equally likely possibilities, less the relative entropy of the product distribution P U ( X ) P ( Y ) {\displaystyle P_{U}(X)P(Y)} from the true joint distribution P ( X , Y ) {\displaystyle P(X,Y)} — i.e. less the expected number of bits saved which would have had to be sent if the value of X were coded according to the uniform distribution P U ( X ) {\displaystyle P_{U}(X)} rather than the conditional distribution P ( X | Y ) {\displaystyle P(X|Y)} of X given Y . When we have a set of possible events, coming from the distribution p , we can encode them (with a lossless data compression ) using entropy encoding . This compresses the data by replacing each fixed-length input symbol with a corresponding unique, variable-length, prefix-free code (e.g.: the events (A, B, C) with probabilities p = (1/2, 1/4, 1/4) can be encoded as the bits (0, 10, 11)). If we know the distribution p in advance, we can devise an encoding that would be optimal (e.g.: using Huffman coding ). 
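A small worked check of this coding interpretation, using the three-symbol example just given: the prefix-free code (0, 10, 11) is optimal for p = (1/2, 1/4, 1/4) and achieves an expected length equal to H(p), while a code optimal for a different distribution q costs H(p) + D_KL(p∥q) bits on average. The alternative distribution q and its code are my own illustrative choices.

# Expected code length under p for prefix-free codes, in bits.
import math

def expected_length(p, code_lengths):
    return sum(pi * li for pi, li in zip(p, code_lengths))

def entropy_bits(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_bits(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [1/2, 1/4, 1/4]                 # true distribution of (A, B, C)
code_for_p = [1, 2, 2]              # lengths of the codes (0, 10, 11)

q = [1/4, 1/4, 1/2]                 # a different assumed distribution
code_for_q = [2, 2, 1]              # lengths of a code optimal for q

print(expected_length(p, code_for_p), entropy_bits(p))                   # both 1.5 bits
print(expected_length(p, code_for_q), entropy_bits(p) + kl_bits(p, q))   # both 1.75 bits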
Meaning the messages we encode will have the shortest length on average (assuming the encoded events are sampled from p ), which will be equal to Shannon's Entropy of p (denoted as H ( p ) {\displaystyle \mathrm {H} (p)} ). However, if we use a different probability distribution ( q ) when creating the entropy encoding scheme, then a larger number of bits will be used (on average) to identify an event from a set of possibilities. This new (larger) number is measured by the cross entropy between p and q . The cross entropy between two probability distributions ( p and q ) measures the average number of bits needed to identify an event from a set of possibilities, if a coding scheme is used based on a given probability distribution q , rather than the "true" distribution p . The cross entropy for two distributions p and q over the same probability space is thus defined as follows. H ( p , q ) = E p ⁡ [ − log ⁡ q ] = H ( p ) + D KL ( p ∥ q ) . {\displaystyle \mathrm {H} (p,q)=\operatorname {E} _{p}[-\log q]=\mathrm {H} (p)+D_{\text{KL}}(p\parallel q).} For explicit derivation of this, see the Motivation section above. Under this scenario, relative entropies (kl-divergence) can be interpreted as the extra number of bits, on average, that are needed (beyond H ( p ) {\displaystyle \mathrm {H} (p)} ) for encoding the events because of using q for constructing the encoding scheme instead of p . In Bayesian statistics , relative entropy can be used as a measure of the information gain in moving from a prior distribution to a posterior distribution : p ( x ) → p ( x ∣ I ) {\displaystyle p(x)\to p(x\mid I)} . If some new fact Y = y {\displaystyle Y=y} is discovered, it can be used to update the posterior distribution for X from p ( x ∣ I ) {\displaystyle p(x\mid I)} to a new posterior distribution p ( x ∣ y , I ) {\displaystyle p(x\mid y,I)} using Bayes' theorem : p ( x ∣ y , I ) = p ( y ∣ x , I ) p ( x ∣ I ) p ( y ∣ I ) {\displaystyle p(x\mid y,I)={\frac {p(y\mid x,I)p(x\mid I)}{p(y\mid I)}}} This distribution has a new entropy : H ( p ( x ∣ y , I ) ) = − ∑ x p ( x ∣ y , I ) log ⁡ p ( x ∣ y , I ) , {\displaystyle \mathrm {H} {\big (}p(x\mid y,I){\big )}=-\sum _{x}p(x\mid y,I)\log p(x\mid y,I),} which may be less than or greater than the original entropy H ( p ( x ∣ I ) ) {\displaystyle \mathrm {H} (p(x\mid I))} . However, from the standpoint of the new probability distribution one can estimate that to have used the original code based on p ( x ∣ I ) {\displaystyle p(x\mid I)} instead of a new code based on p ( x ∣ y , I ) {\displaystyle p(x\mid y,I)} would have added an expected number of bits: D KL ( p ( x ∣ y , I ) ∥ p ( x ∣ I ) ) = ∑ x p ( x ∣ y , I ) log ⁡ p ( x ∣ y , I ) p ( x ∣ I ) {\displaystyle D_{\text{KL}}{\big (}p(x\mid y,I)\parallel p(x\mid I){\big )}=\sum _{x}p(x\mid y,I)\log {\frac {p(x\mid y,I)}{p(x\mid I)}}} to the message length. This therefore represents the amount of useful information, or information gain, about X , that has been learned by discovering Y = y {\displaystyle Y=y} . If a further piece of data, Y 2 = y 2 {\displaystyle Y_{2}=y_{2}} , subsequently comes in, the probability distribution for x can be updated further, to give a new best guess p ( x ∣ y 1 , y 2 , I ) {\displaystyle p(x\mid y_{1},y_{2},I)} . 
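As a toy illustration of the information gain just described, the sketch below updates a uniform prior over three possible coin biases after observing a single head and reports D_KL(posterior ∥ prior) in bits. The set of candidate biases is an arbitrary choice made for the example.

# Bayesian information gain: KL from prior to posterior after one observation.
import numpy as np

biases = np.array([0.2, 0.5, 0.8])       # candidate values of the unknown bias x
prior = np.array([1/3, 1/3, 1/3])        # p(x | I)

likelihood = biases                       # P(heads | x)
posterior = likelihood * prior            # Bayes' theorem, up to normalisation
posterior /= posterior.sum()

info_gain = np.sum(posterior * np.log2(posterior / prior))
print(posterior, info_gain)               # information gained about x, in bits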
If one reinvestigates the information gain for using p ( x ∣ y 1 , I ) {\displaystyle p(x\mid y_{1},I)} rather than p ( x ∣ I ) {\displaystyle p(x\mid I)} , it turns out that it may be either greater or less than previously estimated: ∑ x p ( x ∣ y 1 , y 2 , I ) log ⁡ p ( x ∣ y 1 , y 2 , I ) p ( x ∣ I ) {\displaystyle \sum _{x}p(x\mid y_{1},y_{2},I)\log {\frac {p(x\mid y_{1},y_{2},I)}{p(x\mid I)}}} may be ≤ or > than ∑ x p ( x ∣ y 1 , I ) log ⁡ p ( x ∣ y 1 , I ) p ( x ∣ I ) {\textstyle \sum _{x}p(x\mid y_{1},I)\log {\frac {p(x\mid y_{1},I)}{p(x\mid I)}}} and so the combined information gain does not obey the triangle inequality: D KL ( p ( x ∣ y 1 , y 2 , I ) ∥ p ( x ∣ I ) ) {\displaystyle D_{\text{KL}}{\big (}p(x\mid y_{1},y_{2},I)\parallel p(x\mid I){\big )}} may be <, = or > than D KL ( p ( x ∣ y 1 , y 2 , I ) ∥ p ( x ∣ y 1 , I ) ) + D KL ( p ( x ∣ y 1 , I ) ∥ p ( x ∣ I ) ) {\displaystyle D_{\text{KL}}{\big (}p(x\mid y_{1},y_{2},I)\parallel p(x\mid y_{1},I){\big )}+D_{\text{KL}}{\big (}p(x\mid y_{1},I)\parallel p(x\mid I){\big )}} All one can say is that on average , averaging using p ( y 2 ∣ y 1 , x , I ) {\displaystyle p(y_{2}\mid y_{1},x,I)} , the two sides will average out. A common goal in Bayesian experimental design is to maximise the expected relative entropy between the prior and the posterior. [ 35 ] When posteriors are approximated to be Gaussian distributions, a design maximising the expected relative entropy is called Bayes d-optimal . Relative entropy D KL ( p ( x ∣ H 1 ) ∥ p ( x ∣ H 0 ) ) {\textstyle D_{\text{KL}}{\bigl (}p(x\mid H_{1})\parallel p(x\mid H_{0}){\bigr )}} can also be interpreted as the expected discrimination information for H 1 {\displaystyle H_{1}} over H 0 {\displaystyle H_{0}} : the mean information per sample for discriminating in favor of a hypothesis H 1 {\displaystyle H_{1}} against a hypothesis H 0 {\displaystyle H_{0}} , when hypothesis H 1 {\displaystyle H_{1}} is true. [ 36 ] Another name for this quantity, given to it by I. J. Good , is the expected weight of evidence for H 1 {\displaystyle H_{1}} over H 0 {\displaystyle H_{0}} to be expected from each sample. The expected weight of evidence for H 1 {\displaystyle H_{1}} over H 0 {\displaystyle H_{0}} is not the same as the information gain expected per sample about the probability distribution p ( H ) {\displaystyle p(H)} of the hypotheses, D KL ( p ( x ∣ H 1 ) ∥ p ( x ∣ H 0 ) ) ≠ I G = D KL ( p ( H ∣ x ) ∥ p ( H ∣ I ) ) . {\displaystyle D_{\text{KL}}(p(x\mid H_{1})\parallel p(x\mid H_{0}))\neq IG=D_{\text{KL}}(p(H\mid x)\parallel p(H\mid I)).} Either of the two quantities can be used as a utility function in Bayesian experimental design, to choose an optimal next question to investigate: but they will in general lead to rather different experimental strategies. On the entropy scale of information gain there is very little difference between near certainty and absolute certainty—coding according to a near certainty requires hardly any more bits than coding according to an absolute certainty. On the other hand, on the logit scale implied by weight of evidence, the difference between the two is enormous – infinite perhaps; this might reflect the difference between being almost sure (on a probabilistic level) that, say, the Riemann hypothesis is correct, compared to being certain that it is correct because one has a mathematical proof. 
These two different scales of loss function for uncertainty are both useful, according to how well each reflects the particular circumstances of the problem in question. The idea of relative entropy as discrimination information led Kullback to propose the Principle of Minimum Discrimination Information ( MDI ): given new facts, a new distribution f should be chosen which is as hard to discriminate from the original distribution f 0 {\displaystyle f_{0}} as possible; so that the new data produces as small an information gain D KL ( f ∥ f 0 ) {\displaystyle D_{\text{KL}}(f\parallel f_{0})} as possible. For example, if one had a prior distribution p ( x , a ) {\displaystyle p(x,a)} over x and a , and subsequently learnt the true distribution of a was u ( a ) {\displaystyle u(a)} , then the relative entropy between the new joint distribution for x and a , q ( x ∣ a ) u ( a ) {\displaystyle q(x\mid a)u(a)} , and the earlier prior distribution would be: D KL ( q ( x ∣ a ) u ( a ) ∥ p ( x , a ) ) = E u ( a ) ⁡ { D KL ( q ( x ∣ a ) ∥ p ( x ∣ a ) ) } + D KL ( u ( a ) ∥ p ( a ) ) , {\displaystyle D_{\text{KL}}(q(x\mid a)u(a)\parallel p(x,a))=\operatorname {E} _{u(a)}\left\{D_{\text{KL}}(q(x\mid a)\parallel p(x\mid a))\right\}+D_{\text{KL}}(u(a)\parallel p(a)),} i.e. the sum of the relative entropy of p ( a ) {\displaystyle p(a)} the prior distribution for a from the updated distribution u ( a ) {\displaystyle u(a)} , plus the expected value (using the probability distribution u ( a ) {\displaystyle u(a)} ) of the relative entropy of the prior conditional distribution p ( x ∣ a ) {\displaystyle p(x\mid a)} from the new conditional distribution q ( x ∣ a ) {\displaystyle q(x\mid a)} . (Note that often the later expected value is called the conditional relative entropy (or conditional Kullback–Leibler divergence ) and denoted by D KL ( q ( x ∣ a ) ∥ p ( x ∣ a ) ) {\displaystyle D_{\text{KL}}(q(x\mid a)\parallel p(x\mid a))} [ 3 ] [ 34 ] ) This is minimized if q ( x ∣ a ) = p ( x ∣ a ) {\displaystyle q(x\mid a)=p(x\mid a)} over the whole support of u ( a ) {\displaystyle u(a)} ; and we note that this result incorporates Bayes' theorem, if the new distribution u ( a ) {\displaystyle u(a)} is in fact a δ function representing certainty that a has one particular value. MDI can be seen as an extension of Laplace 's Principle of Insufficient Reason , and the Principle of Maximum Entropy of E.T. Jaynes . In particular, it is the natural extension of the principle of maximum entropy from discrete to continuous distributions, for which Shannon entropy ceases to be so useful (see differential entropy ), but the relative entropy continues to be just as relevant. In the engineering literature, MDI is sometimes called the Principle of Minimum Cross-Entropy (MCE) or Minxent for short. Minimising relative entropy from m to p with respect to m is equivalent to minimizing the cross-entropy of p and m , since H ( p , m ) = H ( p ) + D KL ( p ∥ m ) , {\displaystyle \mathrm {H} (p,m)=\mathrm {H} (p)+D_{\text{KL}}(p\parallel m),} which is appropriate if one is trying to choose an adequate approximation to p . However, this is just as often not the task one is trying to achieve. Instead, just as often it is m that is some fixed prior reference measure, and p that one is attempting to optimise by minimising D KL ( p ∥ m ) {\displaystyle D_{\text{KL}}(p\parallel m)} subject to some constraint. 
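A minimal sketch of minimising D_KL(p ∥ m) subject to a constraint, in the spirit of MDI/minimum cross-entropy: among distributions on {0, ..., 5} with a prescribed mean, the minimiser relative to a uniform reference m has the exponential-tilt form p(x) ∝ m(x) e^{λx}, and λ can be found by one-dimensional root finding. The support, reference measure, and target mean are illustrative choices, and the exponential-tilt form is the standard solution of this constrained problem rather than anything specific to the text above.

# Minimum-discrimination-information sketch: closest p to a uniform reference m
# (in D_KL(p || m)) subject to a mean constraint.
import numpy as np
from scipy.optimize import brentq

x = np.arange(6)
m = np.full(6, 1/6)               # reference measure (fair die)
target_mean = 3.5

def tilted(lam):
    w = m * np.exp(lam * x)       # exponential tilt of the reference
    return w / w.sum()

lam = brentq(lambda l: tilted(l) @ x - target_mean, -5, 5)
p = tilted(lam)
kl = np.sum(p * np.log(p / m))
print(p, p @ x, kl)               # p has mean 3.5 and minimal D_KL(p || m)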
This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to be D KL ( p ∥ m ) {\displaystyle D_{\text{KL}}(p\parallel m)} , rather than H ( p , m ) {\displaystyle \mathrm {H} (p,m)} [ citation needed ] . Surprisals [ 37 ] add where probabilities multiply. The surprisal for an event of probability p is defined as s = − k ln ⁡ p {\displaystyle s=-k\ln p} . If k is { 1 , 1 / ln ⁡ 2 , 1.38 × 10 − 23 } {\displaystyle \left\{1,1/\ln 2,1.38\times 10^{-23}\right\}} then surprisal is in { {\displaystyle \{} nats, bits, or J / K } {\displaystyle J/K\}} so that, for instance, there are N bits of surprisal for landing all "heads" on a toss of N coins. Best-guess states (e.g. for atoms in a gas) are inferred by maximizing the average surprisal S ( entropy ) for a given set of control parameters (like pressure P or volume V ). This constrained entropy maximization , both classically [ 38 ] and quantum mechanically, [ 39 ] minimizes Gibbs availability in entropy units [ 40 ] A ≡ − k ln ⁡ Z {\displaystyle A\equiv -k\ln Z} where Z is a constrained multiplicity or partition function . When temperature T is fixed, free energy ( T × A {\displaystyle T\times A} ) is also minimized. Thus if T , V {\displaystyle T,V} and number of molecules N are constant, the Helmholtz free energy F ≡ U − T S {\displaystyle F\equiv U-TS} (where U is energy and S is entropy) is minimized as a system "equilibrates." If T and P are held constant (say during processes in your body), the Gibbs free energy G = U + P V − T S {\displaystyle G=U+PV-TS} is minimized instead. The change in free energy under these conditions is a measure of available work that might be done in the process. Thus available work for an ideal gas at constant temperature T o {\displaystyle T_{o}} and pressure P o {\displaystyle P_{o}} is W = Δ G = N k T o Θ ( V / V o ) {\displaystyle W=\Delta G=NkT_{o}\Theta (V/V_{o})} where V o = N k T o / P o {\displaystyle V_{o}=NkT_{o}/P_{o}} and Θ ( x ) = x − 1 − ln ⁡ x ≥ 0 {\displaystyle \Theta (x)=x-1-\ln x\geq 0} (see also Gibbs inequality ). More generally [ 41 ] the work available relative to some ambient is obtained by multiplying ambient temperature T o {\displaystyle T_{o}} by relative entropy or net surprisal Δ I ≥ 0 , {\displaystyle \Delta I\geq 0,} defined as the average value of k ln ⁡ ( p / p o ) {\displaystyle k\ln(p/p_{o})} where p o {\displaystyle p_{o}} is the probability of a given state under ambient conditions. For instance, the work available in equilibrating a monatomic ideal gas to ambient values of V o {\displaystyle V_{o}} and T o {\displaystyle T_{o}} is thus W = T o Δ I {\displaystyle W=T_{o}\Delta I} , where relative entropy Δ I = N k [ Θ ( V V o ) + 3 2 Θ ( T T o ) ] . {\displaystyle \Delta I=Nk\left[\Theta {\left({\frac {V}{V_{o}}}\right)}+{\frac {3}{2}}\Theta {\left({\frac {T}{T_{o}}}\right)}\right].} The resulting contours of constant relative entropy, shown at right for a mole of Argon at standard temperature and pressure, for example put limits on the conversion of hot to cold as in flame-powered air-conditioning or in the unpowered device to convert boiling-water to ice-water discussed here. [ 42 ] Thus relative entropy measures thermodynamic availability in bits. For density matrices P and Q on a Hilbert space , the quantum relative entropy from Q to P is defined to be D KL ( P ∥ Q ) = Tr ⁡ ( P ( log ⁡ P − log ⁡ Q ) ) . 
{\displaystyle D_{\text{KL}}(P\parallel Q)=\operatorname {Tr} (P(\log P-\log Q)).} In quantum information science the minimum of D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} over all separable states Q can also be used as a measure of entanglement in the state P . Just as relative entropy of "actual from ambient" measures thermodynamic availability, relative entropy of "reality from a model" is also useful even if the only clues we have about reality are some experimental measurements. In the former case relative entropy describes distance to equilibrium or (when multiplied by ambient temperature) the amount of available work , while in the latter case it tells you about surprises that reality has up its sleeve or, in other words, how much the model has yet to learn . Although this tool for evaluating models against systems that are accessible experimentally may be applied in any field, its application to selecting a statistical model via Akaike information criterion are particularly well described in papers [ 43 ] and a book [ 44 ] by Burnham and Anderson. In a nutshell the relative entropy of reality from a model may be estimated, to within a constant additive term, by a function of the deviations observed between data and the model's predictions (like the mean squared deviation ) . Estimates of such divergence for models that share the same additive term can in turn be used to select among models. When trying to fit parametrized models to data there are various estimators which attempt to minimize relative entropy, such as maximum likelihood and maximum spacing estimators. [ citation needed ] Kullback & Leibler (1951) also considered the symmetrized function: [ 6 ] D KL ( P ∥ Q ) + D KL ( Q ∥ P ) {\displaystyle D_{\text{KL}}(P\parallel Q)+D_{\text{KL}}(Q\parallel P)} which they referred to as the "divergence", though today the "KL divergence" refers to the asymmetric function (see § Etymology for the evolution of the term). This function is symmetric and nonnegative, and had already been defined and used by Harold Jeffreys in 1948; [ 7 ] it is accordingly called the Jeffreys divergence . This quantity has sometimes been used for feature selection in classification problems, where P and Q are the conditional pdfs of a feature under two different classes. In the Banking and Finance industries, this quantity is referred to as Population Stability Index ( PSI ), and is used to assess distributional shifts in model features through time. An alternative is given via the λ {\displaystyle \lambda } -divergence, D λ ( P ∥ Q ) = λ D KL ( P ∥ λ P + ( 1 − λ ) Q ) + ( 1 − λ ) D KL ( Q ∥ λ P + ( 1 − λ ) Q ) , {\displaystyle D_{\lambda }(P\parallel Q)=\lambda D_{\text{KL}}(P\parallel \lambda P+(1-\lambda )Q)+(1-\lambda )D_{\text{KL}}(Q\parallel \lambda P+(1-\lambda )Q),} which can be interpreted as the expected information gain about X from discovering which probability distribution X is drawn from, P or Q , if they currently have probabilities λ {\displaystyle \lambda } and 1 − λ {\displaystyle 1-\lambda } respectively. [ clarification needed ] [ citation needed ] The value λ = 0.5 {\displaystyle \lambda =0.5} gives the Jensen–Shannon divergence , defined by D JS = 1 2 D KL ( P ∥ M ) + 1 2 D KL ( Q ∥ M ) {\displaystyle D_{\text{JS}}={\tfrac {1}{2}}D_{\text{KL}}(P\parallel M)+{\tfrac {1}{2}}D_{\text{KL}}(Q\parallel M)} where M is the average of the two distributions, M = 1 2 ( P + Q ) . 
{\displaystyle M={\tfrac {1}{2}}\left(P+Q\right).} We can also interpret D JS {\displaystyle D_{\text{JS}}} as the capacity of a noisy information channel with two inputs giving the output distributions P and Q . The Jensen–Shannon divergence, like all f -divergences, is locally proportional to the Fisher information metric . It is similar to the Hellinger metric (in the sense that it induces the same affine connection on a statistical manifold ). Furthermore, the Jensen–Shannon divergence can be generalized using abstract statistical M-mixtures relying on an abstract mean M. [ 45 ] [ 46 ] There are many other important measures of probability distance . Some of these are particularly connected with relative entropy. For example: Other notable measures of distance include the Hellinger distance , histogram intersection , Chi-squared statistic , quadratic form distance , match distance , Kolmogorov–Smirnov distance , and earth mover's distance . [ 49 ] Just as absolute entropy serves as theoretical background for data compression , relative entropy serves as theoretical background for data differencing – the absolute entropy of a set of data in this sense being the data required to reconstruct it (minimum compressed size), while the relative entropy of a target set of data, given a source set of data, is the data required to reconstruct the target given the source (minimum size of a patch ).
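The divergences described above are simple to evaluate for discrete distributions. The following Python sketch (illustrative helper names, not taken from any particular library) computes the relative entropy, the symmetrized Jeffreys divergence, and the Jensen–Shannon divergence in nats; dividing by ln 2 converts the results to bits.

```python
import numpy as np

def kl_divergence(p, q):
    """Relative entropy D_KL(p || q) in nats for discrete distributions.

    Terms with p[i] == 0 contribute nothing; q must be nonzero wherever p is.
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jeffreys_divergence(p, q):
    """Symmetrized divergence D_KL(p || q) + D_KL(q || p), the Jeffreys divergence."""
    return kl_divergence(p, q) + kl_divergence(q, p)

def jensen_shannon_divergence(p, q):
    """Jensen-Shannon divergence: the lambda = 0.5 case of the lambda-divergence."""
    m = 0.5 * (np.asarray(p, dtype=float) + np.asarray(q, dtype=float))
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

p = [0.5, 0.4, 0.1]
q = [0.3, 0.3, 0.4]
print(kl_divergence(p, q))              # asymmetric: generally != kl_divergence(q, p)
print(jeffreys_divergence(p, q))        # symmetric and nonnegative
print(jensen_shannon_divergence(p, q))  # bounded above by ln 2 in nats
```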
https://en.wikipedia.org/wiki/Principle_of_Minimum_Discrimination_Information
In physics, the principle of covariance emphasizes the formulation of physical laws using only those physical quantities the measurements of which the observers in different frames of reference could unambiguously correlate. Mathematically, the physical quantities must transform covariantly , that is, under a certain representation of the group of coordinate transformations between admissible frames of reference of the physical theory. [ 1 ] This group is referred to as the covariance group . The principle of covariance does not require invariance of the physical laws under the group of admissible transformations, although in most cases the equations are actually invariant. However, in the theory of weak interactions , the equations are not invariant under reflections (but are, of course, still covariant). In Newtonian mechanics the admissible frames of reference are inertial frames with relative velocities much smaller than the speed of light . Time is then absolute, and the transformations between admissible frames of references are Galilean transformations , which (together with rotations, translations, and reflections) form the Galilean group . The covariant physical quantities are Euclidean scalars, vectors , and tensors . An example of a covariant equation is Newton's second law , m d v → d t = F → , {\displaystyle m{\frac {d{\vec {v}}}{dt}}={\vec {F}},} where the covariant quantities are the mass m {\displaystyle m} of a moving body (scalar), the velocity v → {\displaystyle {\vec {v}}} of the body (vector), the force F → {\displaystyle {\vec {F}}} acting on the body, and the invariant time t {\displaystyle t} . In special relativity the admissible frames of reference are all inertial frames. The transformations between frames are the Lorentz transformations , which (together with the rotations, translations, and reflections) form the Poincaré group . The covariant quantities are scalars , four-vectors etc., of the Minkowski space (and also more complicated objects like bispinors and others). An example of a covariant equation is the Lorentz force equation of motion of a charged particle in an electromagnetic field (a generalization of Newton's second law) [ citation needed ] m d u a d s = q F a b u b , {\displaystyle m{\frac {du^{a}}{ds}}=qF^{ab}u_{b},} where m {\displaystyle m} and q {\displaystyle q} are the mass and charge of the particle (invariant scalars); d s {\displaystyle ds} is the invariant interval (scalar); u a {\displaystyle u^{a}} is the 4-velocity (4-vector); and F a b {\displaystyle F^{ab}} is the electromagnetic field strength tensor (4-tensor). In general relativity , the admissible frames of reference are all reference frames . The transformations between frames are all arbitrary ( invertible and differentiable ) coordinate transformations. The covariant quantities are scalar fields , vector fields , tensor fields etc., defined on spacetime considered as a manifold . Main example of covariant equation is the Einstein field equations .
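As a small illustration of covariance under the Galilean group, the following sketch (assuming the SymPy library is available; the symbol names are chosen for this example only) checks that a boost by a constant velocity leaves the acceleration of a trajectory unchanged, so Newton's second law keeps the same form in the boosted frame.

```python
import sympy as sp

t, v = sp.symbols('t v')           # time and constant boost velocity
x = sp.Function('x')(t)            # trajectory x(t) in frame K

# Galilean boost to frame K' moving with velocity v relative to K
x_prime = x - v * t

# Accelerations in the two frames
a = sp.diff(x, t, 2)
a_prime = sp.diff(x_prime, t, 2)

# Prints 0: the acceleration, and hence the form of m*a = F, is unchanged
print(sp.simplify(a_prime - a))
```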
https://en.wikipedia.org/wiki/Principle_of_covariance
The principle of distributivity states that the algebraic distributive law is valid, where both logical conjunction and logical disjunction are distributive over each other, so that for any propositions A , B and C the equivalences A ∧ ( B ∨ C ) ≡ ( A ∧ B ) ∨ ( A ∧ C ) and A ∨ ( B ∧ C ) ≡ ( A ∨ B ) ∧ ( A ∨ C ) hold. The principle of distributivity is valid in classical logic , but both valid and invalid in quantum logic . The article " Is Logic Empirical? " discusses the case that quantum logic is the correct, empirical logic, on the grounds that the principle of distributivity is inconsistent with a reasonable interpretation of quantum phenomena . [ 1 ]
https://en.wikipedia.org/wiki/Principle_of_distributivity
In classical logic , intuitionistic logic , and similar logical systems , the principle of explosion [ a ] [ b ] is the law according to which any statement can be proven from a contradiction . [ 1 ] [ 2 ] [ 3 ] That is, from a contradiction, any proposition (including its negation ) can be inferred; this is known as deductive explosion . [ 4 ] [ 5 ] The proof of this principle was first given by 12th-century French philosopher William of Soissons . [ 6 ] Due to the principle of explosion, the existence of a contradiction ( inconsistency ) in a formal axiomatic system is disastrous; since any statement can be proven, it trivializes the concepts of truth and falsity. [ 7 ] Around the turn of the 20th century, the discovery of contradictions such as Russell's paradox at the foundations of mathematics thus threatened the entire structure of mathematics. Mathematicians such as Gottlob Frege , Ernst Zermelo , Abraham Fraenkel , and Thoralf Skolem put much effort into revising set theory to eliminate these contradictions, resulting in the modern Zermelo–Fraenkel set theory . As a demonstration of the principle, consider two contradictory statements—"All lemons are yellow" and "Not all lemons are yellow"—and suppose that both are true. If that is the case, anything can be proven, e.g., the assertion that " unicorns exist", by using the following argument: In a different solution to the problems posed by the principle of explosion, some mathematicians have devised alternative theories of logic called paraconsistent logics , which allow some contradictory statements to be proven without affecting the truth value of (all) other statements. [ 7 ] In symbolic logic , the principle of explosion can be expressed schematically in the following way: [ 8 ] [ 9 ] Below is the Lewis argument , [ 10 ] a formal proof of the principle of explosion using symbolic logic . This proof was published by C. I. Lewis and is named after him, though versions of it were known to medieval logicians. [ 11 ] [ 12 ] [ 10 ] This is just the symbolic version of the informal argument given in the introduction, with P {\displaystyle P} standing for "all lemons are yellow" and Q {\displaystyle Q} standing for "Unicorns exist". We start out by assuming that (1) all lemons are yellow and that (2) not all lemons are yellow. From the proposition that all lemons are yellow, we infer that (3) either all lemons are yellow or unicorns exist. But then from this and the fact that not all lemons are yellow, we infer that (4) unicorns exist by disjunctive syllogism. An alternate argument for the principle stems from model theory . A sentence P {\displaystyle P} is a semantic consequence of a set of sentences Γ {\displaystyle \Gamma } only if every model of Γ {\displaystyle \Gamma } is a model of P {\displaystyle P} . However, there is no model of the contradictory set ( P ∧ ¬ P ) {\displaystyle (P\wedge \lnot P)} . A fortiori , there is no model of ( P ∧ ¬ P ) {\displaystyle (P\wedge \lnot P)} that is not a model of Q {\displaystyle Q} . Thus, vacuously, every model of ( P ∧ ¬ P ) {\displaystyle (P\wedge \lnot P)} is a model of Q {\displaystyle Q} . Thus Q {\displaystyle Q} is a semantic consequence of ( P ∧ ¬ P ) {\displaystyle (P\wedge \lnot P)} . Paraconsistent logics have been developed that allow for subcontrary -forming operators. Model-theoretic paraconsistent logicians often deny the assumption that there can be no model of { ϕ , ¬ ϕ } {\displaystyle \{\phi ,\lnot \phi \}} and devise semantical systems in which there are such models. 
Alternatively, they reject the idea that propositions can be classified as true or false. Proof-theoretic paraconsistent logics usually deny the validity of one of the steps necessary for deriving an explosion, typically including disjunctive syllogism , disjunction introduction , and reductio ad absurdum . The metamathematical value of the principle of explosion is that for any logical system where this principle holds, any derived theory which proves ⊥ (or an equivalent form, ϕ ∧ ¬ ϕ {\displaystyle \phi \land \lnot \phi } ) is worthless because all its statements would become theorems , making it impossible to distinguish truth from falsehood. That is to say, the principle of explosion is an argument for the law of non-contradiction in classical logic, because without it all truth statements become meaningless. The reduction in proof strength of logics without the principle of explosion is discussed in minimal logic .
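The model-theoretic argument above can be checked mechanically for propositional logic by enumerating valuations: since no valuation satisfies both P and ¬P, every valuation that satisfies the premises (there are none) also satisfies Q. The following Python sketch, with illustrative function names, does exactly that.

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Semantic consequence by brute force: every valuation satisfying all
    premises must also satisfy the conclusion. This holds vacuously when no
    valuation satisfies the premises."""
    for values in product([False, True], repeat=len(variables)):
        valuation = dict(zip(variables, values))
        if all(p(valuation) for p in premises) and not conclusion(valuation):
            return False
    return True

P = lambda val: val['P']          # "all lemons are yellow"
not_P = lambda val: not val['P']  # "not all lemons are yellow"
Q = lambda val: val['Q']          # "unicorns exist"

# No valuation satisfies {P, not P}, so Q follows vacuously (ex contradictione quodlibet)
print(entails([P, not_P], Q, ['P', 'Q']))   # True
# For comparison, P alone does not entail Q
print(entails([P], Q, ['P', 'Q']))          # False
```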
https://en.wikipedia.org/wiki/Principle_of_explosion
In user interface design and software design , [ 1 ] the principle of least astonishment ( POLA ), also known as principle of least surprise , [ a ] proposes that a component of a system should behave in a way that most users will expect it to behave, and therefore not astonish or surprise users. The following is a corollary of the principle: "If a necessary feature has a high astonishment factor, it may be necessary to redesign the feature." [ 4 ] The principle has been in use in relation to computer interaction since at least the 1970s. Although first formalized in the field of computer technology, the principle can be applied broadly in other fields. For example, in writing , a cross-reference to another part of the work or a hyperlink should be phrased in a way that accurately tells the reader what to expect. An early reference to the "Law of Least Astonishment" appeared in the PL/I Bulletin in 1967 (PL/I is a programming language released by IBM in 1966). [ 5 ] By the late 1960s, PL/I had become infamous for violating the law, [ 6 ] for example because, due to PL/I's precision conversion rules, [ 7 ] the expressions 25 + 1/3 and 1/3 + 25 would either produce a fatal error, or, if errors were suppressed, 5.33333333333 instead of the correct 25.33333333333. [ 8 ] [ 9 ] [ 10 ] [ 11 ] The law appeared written out in full in 1972: For those parts of the system which cannot be adjusted to the peculiarities of the user, the designers of a systems programming language should obey the “Law of Least Astonishment.” In short, this law states that every construct in the system should behave exactly as its syntax suggests. Widely accepted conventions should be followed whenever possible, and exceptions to previously established rules of the language should be minimal. [ 12 ] A textbook formulation is: "People are part of the system. The design should match the user's experience, expectations, and mental models ." [ 13 ] The principle aims to leverage the existing knowledge of users to minimize the learning curve , for instance by designing interfaces that borrow heavily from "functionally similar or analogous programs with which your users are likely to be familiar". [ 2 ] User expectations in this respect may be closely related to a particular computing platform or tradition . For example, Unix command line programs are expected to follow certain conventions with respect to switches , [ 2 ] and widgets of Microsoft Windows programs are expected to follow certain conventions with respect to keyboard shortcuts . [ 14 ] In more abstract settings like an API , the expectation that function or method names intuitively match their behavior is another example. [ 15 ] This practice also involves the application of sensible defaults . [ 4 ] When two elements of an interface conflict, or are ambiguous, the behavior should be that which will least surprise the user ; in particular a programmer should try to think of the behavior that will least surprise someone who uses the program, rather than that behavior that is natural from knowing the inner workings of the program. [ 4 ] The choice of "least surprising" behavior can depend on the expected audience (for example, end users , programmers , or system administrators ). [ 2 ] Websites offering keyboard shortcuts often allow pressing ? to see the available shortcuts. Examples include Gmail , [ 16 ] YouTube , [ 17 ] and Jira . 
[ 18 ] In Windows operating systems and some desktop environments for Linux , the F1 function key typically opens the help program for an application . A similar keyboard shortcut in macOS is ⌘ Command + ⇧ Shift + / . Users expect a help window or context menu when they press the usual help shortcut key(s). Software that instead uses this shortcut for another feature is likely to cause astonishment if no help appears. [ 19 ] A programming language 's standard library usually provides a function similar to the pseudocode ParseInteger(string, radix) , which creates a machine-readable integer from a string of human-readable digits . The radix conventionally defaults to 10, meaning the string is interpreted as decimal (base 10). This function usually supports other bases, like binary (base 2) and octal (base 8), but only when they are specified explicitly. In a departure from this convention, JavaScript originally defaulted to base 8 for strings beginning with "0", causing developer confusion and software bugs . [ 20 ] This was discouraged in ECMAScript 3 and dropped in ECMAScript 5. [ 21 ] Some development communities like FreeBSD [ 22 ] use POLA as one of the guidelines for what makes an unsurprising user experience.
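Python's built-in int() behaves like the ParseInteger(string, radix) pseudocode above and can be used to illustrate the convention; the snippet below is only an example of the "least astonishment" default discussed in the text, not a statement about any other language's parser.

```python
# The base defaults to decimal; other bases must be requested explicitly,
# which matches the "least astonishment" convention described above.
print(int("010"))        # 10  -- a leading zero does not silently mean octal
print(int("010", 8))     # 8   -- octal only when explicitly requested
print(int("ff", 16))     # 255 -- hexadecimal only when explicitly requested

# Passing base=0 asks int() to infer the base from a prefix instead,
# so any "surprising" behaviour is opt-in rather than the default:
print(int("0o10", 0))    # 8
```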
https://en.wikipedia.org/wiki/Principle_of_least_astonishment
In organic chemistry , the principle of least motion is the hypothesis that when multiple species with different nuclear structures could theoretically form as products of a given chemical reaction , the product more likely to form tends to be the one requiring the least change in nuclear structure, that is, the smallest change in nuclear positions. [ 1 ]
https://en.wikipedia.org/wiki/Principle_of_least_motion
The principle of maximum caliber ( MaxCal ) or maximum path entropy principle , suggested by E. T. Jaynes , [ 1 ] can be considered as a generalization of the principle of maximum entropy . It postulates that the most unbiased probability distribution of paths is the one that maximizes their Shannon entropy . This entropy of paths is sometimes called the "caliber" of the system, and is given by the path integral The principle of maximum caliber was proposed by Edwin T. Jaynes in 1980, [ 1 ] in an article titled The Minimum Entropy Production Principle in the context of deriving a principle for non-equilibrium statistical mechanics . The principle of maximum caliber can be considered as a generalization of the principle of maximum entropy defined over the paths space, the caliber S {\displaystyle S} is of the form where for n -constraints it is shown that the probability functional is In the same way, for n dynamical constraints defined in the interval t ∈ [ 0 , T ] {\displaystyle t\in [0,T]} of the form it is shown that the probability functional is Following Jaynes' hypothesis, there exist publications in which the principle of maximum caliber appears to emerge as a result of the construction of a framework which describes a statistical representation of systems with many degrees of freedom. [ 2 ] [ 3 ] [ 4 ]
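The displayed equations referred to in this article are not reproduced above. As a hedged sketch of the standard construction (not necessarily the exact expressions the article intends), the caliber is the Shannon entropy of the path distribution, maximized subject to normalization and to constraints on path averages, which yields an exponential, Gibbs-like distribution over paths:

```latex
% Path entropy (caliber) over paths \Gamma, maximized subject to normalization
% and to n constraints on the path averages of observables A_i:
\mathcal{C} \;=\; -\sum_{\Gamma} p_{\Gamma}\ln p_{\Gamma}
\quad\text{subject to}\quad
\sum_{\Gamma} p_{\Gamma}=1,\qquad
\sum_{\Gamma} p_{\Gamma}\,A_{i,\Gamma}=\langle A_i\rangle ,\quad i=1,\dots ,n.

% Introducing Lagrange multipliers \lambda_i and setting the variation to zero
% gives the maximum-caliber path distribution, with Z a path partition function:
p_{\Gamma}=\frac{1}{Z}\exp\!\Big(-\sum_{i=1}^{n}\lambda_i A_{i,\Gamma}\Big),
\qquad
Z=\sum_{\Gamma}\exp\!\Big(-\sum_{i=1}^{n}\lambda_i A_{i,\Gamma}\Big).

% For dynamical constraints imposed over t \in [0,T], the sums over i are
% replaced by time integrals of the form \int_0^T \lambda_i(t)\,A_i(t,\Gamma)\,dt.
```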
https://en.wikipedia.org/wiki/Principle_of_maximum_caliber
In the history of science , the principle of maximum work was a postulate concerning the relationship between chemical reactions , heat evolution, and the potential work produced therefrom. The principle was developed in approximate form in 1875 by French chemist Marcellin Berthelot , in the field of thermochemistry , and then in 1876 by American mathematical physicist Willard Gibbs , in the field of thermodynamics , in a more accurate form. Berthelot's version was essentially: "every pure chemical reaction is accompanied by evolution of heat" (and that this yields the maximum amount of work). The effects of irreversibility , however, showed this version to be incorrect. This was rectified, in thermodynamics, by incorporating the concept of entropy . Berthelot independently enunciated a generalization (commonly known as Berthelot's Third Principle, or Principle of Maximum Work), which may be briefly stated as: every pure chemical reaction is accompanied by evolution of heat. Whilst this principle is undoubtedly applicable to the great majority of chemical actions under ordinary conditions, it is subject to numerous exceptions, and cannot therefore be taken (as its authors originally intended) as a secure basis for theoretical reasoning on the connection between thermal effect and chemical affinity. The existence of reactions which are reversible on slight alteration of conditions at once invalidates the principle, for if the action proceeding in one direction evolves heat, it must absorb heat when proceeding in the reverse direction. As the principle was abandoned even by its authors, it is now only of historical importance, although for many years it exerted considerable influence on thermochemical research. [ 1 ] Thus, to summarize, the principle was proposed in 1875 by the French chemist Marcellin Berthelot and stated that chemical reactions will tend to yield the maximum amount of chemical energy in the form of work as the reaction progresses. In 1876, however, through the works of Willard Gibbs and others to follow, the work principle was found to be a particular case of a more general statement: For all thermodynamic processes between the same initial and final state, the delivery of work is a maximum for a reversible process. The principle of maximum work was a precursor to the development of the thermodynamic concept of free energy . In thermodynamics , the Gibbs free energy or Helmholtz free energy is essentially the energy of a chemical reaction "free" or available to do external work. Historically, "free energy" is a more advanced and accurate replacement for the thermochemistry term “ affinity ” used by chemists of earlier times to describe the “force” that caused chemical reactions . The term dates back to at least the time of Albertus Magnus in 1250. According to Nobelist and chemical engineering professor Ilya Prigogine : “as motion was explained by the Newtonian concept of force, chemists wanted a similar concept of ‘driving force’ for chemical change. Why do chemical reactions occur, and why do they stop at certain points? Chemists called the ‘force’ that caused chemical reactions affinity, but it lacked a clear definition.” [ 2 ] During the entire 18th century, the dominant view in regard to heat and light was that put forward by Isaac Newton , called the “Newtonian hypothesis”, which stated that light and heat are forms of matter attracted or repelled by other forms of matter, with forces analogous to gravitation or to chemical affinity.
In the 19th century, the French chemist Marcellin Berthelot and the Danish chemist Julius Thomsen had attempted to quantify chemical affinity using heats of reaction . In 1875, after quantifying the heats of reaction for a large number of compounds, Berthelot proposed the “principle of maximum work” in which all chemical changes occurring without intervention of outside energy tend toward the production of bodies or of a system of bodies which liberate heat . With the development of the first two laws of thermodynamics in the 1850s and 60s, heats of reaction and the work associated with these processes were given a more accurate mathematical basis. In 1876, Willard Gibbs unified all of this in his 300-page "On the Equilibrium of Heterogeneous Substances". Suppose, for example, we have a general thermodynamic system , called the "primary" system and that we mechanically connect it to a "reversible work source". A reversible work source is a system which, when it does work, or has work done to it, does not change its entropy. It is therefore not a heat engine and does not suffer dissipation due to friction or heat exchanges. A simple example would be a frictionless spring, or a weight on a pulley in a gravitational field. Suppose further, that we thermally connect the primary system to a third system, a "reversible heat source". A reversible heat source may be thought of as a heat source in which all transformations are reversible. For such a source, the heat energy δQ added will be equal to the temperature of the source (T) times the increase in its entropy. (If it were an irreversible heat source, the entropy increase would be larger than δQ/T) Define: We may now make the following statements Eliminating d S w {\displaystyle dS_{w}} , δ Q {\displaystyle \delta Q} , and d S h {\displaystyle dS_{h}} gives the following equation: When the primary system is reversible, the equality will hold and the amount of work delivered will be a maximum. Note that this will hold for any reversible system which has the same values of dU and dS .
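The definitions and the displayed equation referred to above are omitted in this text. A hedged reconstruction of the standard argument, using the quantities named in the passage, runs as follows:

```latex
% Notation (reconstruction): dU, dS are the changes in internal energy and entropy
% of the primary system; \delta W is the work delivered to the reversible work
% source (whose entropy change is dS_w = 0); \delta Q is the heat supplied by the
% reversible heat source at temperature T, whose entropy change is dS_h.

% First law for the primary system and the defining property of the heat source:
dU = \delta Q - \delta W, \qquad \delta Q = -T\,dS_h .

% Second law for the combined, isolated arrangement:
dS + dS_h + dS_w \;\ge\; 0 .

% Eliminating dS_w, \delta Q and dS_h gives the bound described in the text:
\delta W \;\le\; -\,dU + T\,dS ,

% with equality, and hence maximum delivered work, when the primary system
% changes reversibly.
```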
https://en.wikipedia.org/wiki/Principle_of_maximum_work
The principle of minimum energy is essentially a restatement of the second law of thermodynamics . It states that for a closed system , with constant external parameters and entropy , the internal energy will decrease and approach a minimum value at equilibrium. External parameters generally means the volume, but may include other parameters which are specified externally, such as a constant magnetic field. In contrast, for isolated systems (and fixed external parameters), the second law states that the entropy will increase to a maximum value at equilibrium. An isolated system has a fixed total energy and mass. A closed system, on the other hand, is a system which is connected to another, and cannot exchange matter (i.e. particles), but can transfer other forms of energy (e.g. heat), to or from the other system. If, rather than an isolated system, we have a closed system, in which the entropy rather than the energy remains constant, then it follows from the first and second laws of thermodynamics that the energy of that system will drop to a minimum value at equilibrium, transferring its energy to the other system. To restate: The total energy of the system is U ( S , X 1 , X 2 , … ) {\displaystyle U(S,X_{1},X_{2},\dots )} where S is entropy, and the X i {\displaystyle X_{i}} are the other extensive parameters of the system (e.g. volume, particle number , etc.). The entropy of the system may likewise be written as a function of the other extensive parameters as S ( U , X 1 , X 2 , … ) {\displaystyle S(U,X_{1},X_{2},\dots )} . Suppose that X is one of the X i {\displaystyle X_{i}} which varies as a system approaches equilibrium, and that it is the only such parameter which is varying. The principle of maximum entropy may then be stated as: The first condition states that entropy is at an extremum, and the second condition states that entropy is at a maximum. Note that for the partial derivatives, all extensive parameters are assumed constant except for the variables contained in the partial derivative, but only U , S , or X are shown. It follows from the properties of an exact differential (see equation 8 in the exact differential article) and from the energy/entropy equation of state that, for a closed system: It is seen that the energy is at an extremum at equilibrium. By similar but somewhat more lengthy argument it can be shown that which is greater than zero, showing that the energy is, in fact, at a minimum. Consider, for one, the familiar example of a marble on the edge of a bowl. If we consider the marble and bowl to be an isolated system, then when the marble drops, the potential energy will be converted to the kinetic energy of motion of the marble. Frictional forces will convert this kinetic energy to heat, and at equilibrium, the marble will be at rest at the bottom of the bowl, and the marble and the bowl will be at a slightly higher temperature. The total energy of the marble-bowl system will be unchanged. What was previously the potential energy of the marble, will now reside in the increased heat energy of the marble-bowl system. This will be an application of the maximum entropy principle as set forth in the principle of minimum potential energy, since due to the heating effects, the entropy has increased to the maximum value possible given the fixed energy of the system. If, on the other hand, the marble is lowered very slowly to the bottom of the bowl, so slowly that no heating effects occur (i.e. 
reversibly), then the entropy of the marble and bowl will remain constant, and the potential energy of the marble will be transferred as energy to the surroundings. The surroundings will maximize its entropy given its newly acquired energy, which is equivalent to the energy having been transferred as heat. Since the potential energy of the system is now at a minimum with no increase in the energy due to heat of either the marble or the bowl, the total energy of the system is at a minimum. This is an application of the minimum energy principle. Alternatively, suppose we have a cylinder containing an ideal gas, with cross sectional area A and a variable height x . Suppose that a weight of mass m has been placed on top of the cylinder. It presses down on the top of the cylinder with a force of mg where g is the acceleration due to gravity. Suppose that x is smaller than its equilibrium value. The upward force of the gas is greater than the downward force of the weight, and if allowed to freely move, the gas in the cylinder would push the weight upward rapidly, and there would be frictional forces that would convert the energy to heat. If we specify that an external agent presses down on the weight so as to very slowly (reversibly) allow the weight to move upward to its equilibrium position, then there will be no heat generated and the entropy of the system will remain constant while energy is transferred as work to the external agent. The total energy of the system at any value of x is given by the internal energy of the gas plus the potential energy of the weight: where T is temperature, S is entropy, P is pressure, μ is the chemical potential, N is the number of particles in the gas, and the volume has been written as V=Ax . Since the system is closed, the particle number N is constant and a small change in the energy of the system would be given by: Since the entropy is constant, we may say that dS =0 at equilibrium and by the principle of minimum energy, we may say that dU =0 at equilibrium, yielding the equilibrium condition: which simply states that the upward gas pressure force ( PA ) on the upper face of the cylinder is equal to the downward force of the mass due to gravitation ( mg ). The principle of minimum energy can be generalized to apply to constraints other than fixed entropy. For other constraints, other state functions with dimensions of energy will be minimized. These state functions are known as thermodynamic potentials . Thermodynamic potentials are at first glance just simple algebraic combinations of the energy terms in the expression for the internal energy. For a simple, multicomponent system, the internal energy may be written: where the intensive parameters (T, P, μ j ) are functions of the internal energy's natural variables ( S , V , { N j } ) {\displaystyle (S,V,\{N_{j}\})} via the equations of state. As an example of another thermodynamic potential, the Helmholtz free energy is written: where temperature has replaced entropy as a natural variable. In order to understand the value of the thermodynamic potentials, it is necessary to view them in a different light. They may in fact be seen as (negative) Legendre transforms of the internal energy, in which certain of the extensive parameters are replaced by the derivative of internal energy with respect to that variable (i.e. the conjugate to that variable). 
For example, the Helmholtz free energy may be written: and the minimum will occur when the variable T becomes equal to the temperature since The Helmholtz free energy is a useful quantity when studying thermodynamic transformations in which the temperature is held constant. Although the reduction in the number of variables is a useful simplification, the main advantage comes from the fact that the Helmholtz free energy is minimized at equilibrium with respect to any unconstrained internal variables for a closed system at constant temperature and volume. This follows directly from the principle of minimum energy which states that at constant entropy, the internal energy is minimized. This can be stated as: where U 0 {\displaystyle U_{0}} and S 0 {\displaystyle S_{0}} are the value of the internal energy and the (fixed) entropy at equilibrium. The volume and particle number variables have been replaced by x which stands for any internal unconstrained variables. As a concrete example of unconstrained internal variables, we might have a chemical reaction in which there are two types of particle, an A atom and an A 2 molecule. If N 1 {\displaystyle N_{1}} and N 2 {\displaystyle N_{2}} are the respective particle numbers for these particles, then the internal constraint is that the total number of A atoms N A {\displaystyle N_{A}} is conserved: we may then replace the N 1 {\displaystyle N_{1}} and N 2 {\displaystyle N_{2}} variables with a single variable x = N 1 / N 2 {\displaystyle x=N_{1}/N_{2}} and minimize with respect to this unconstrained variable. There may be any number of unconstrained variables depending on the number of atoms in the mixture. For systems with multiple sub-volumes, there may be additional volume constraints as well. The minimization is with respect to the unconstrained variables. In the case of chemical reactions this is usually the number of particles or mole fractions, subject to the conservation of elements. At equilibrium, these will take on their equilibrium values, and the internal energy U 0 {\displaystyle U_{0}} will be a function only of the chosen value of entropy S 0 {\displaystyle S_{0}} . By the definition of the Legendre transform, the Helmholtz free energy will be: The Helmholtz free energy at equilibrium will be: where T 0 {\displaystyle T_{0}} is the (unknown) temperature at equilibrium. Substituting the expression for U 0 {\displaystyle U_{0}} : By exchanging the order of the extrema: showing that the Helmholtz free energy is minimized at equilibrium. The Enthalpy and Gibbs free energy , are similarly derived.
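The displayed equations for the piston example discussed earlier in this article are omitted in this text. A hedged reconstruction of that worked example, using the quantities named in the passage (a gas of internal energy U at pressure P, a cylinder of cross-section A and height x, and a weight of mass m), is:

```latex
% Total energy of gas plus weight, with the gas volume written as V = A x:
U_{\text{total}}(S,x) \;=\; U_{\text{gas}}(S,\,V=Ax,\,N) \;+\; m g x .

% With dU_gas = T\,dS - P\,dV + \mu\,dN and S, N held fixed (dS = dN = 0):
dU_{\text{total}} \;=\; -P\,A\,dx \;+\; m g\,dx .

% The principle of minimum energy (dU_total = 0 at constant entropy) then gives
% the mechanical equilibrium condition stated in the text:
P A \;=\; m g .
```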
https://en.wikipedia.org/wiki/Principle_of_minimum_energy
In physics , the principle of relativity is the requirement that the equations describing the laws of physics have the same form in all admissible frames of reference . For example, in the framework of special relativity , the Maxwell equations have the same form in all inertial frames of reference . In the framework of general relativity, the Maxwell equations or the Einstein field equations have the same form in arbitrary frames of reference. Several principles of relativity have been successfully applied throughout science , whether implicitly (as in Newtonian mechanics ) or explicitly (as in Albert Einstein 's special relativity and general relativity ). Certain principles of relativity have been widely assumed in most scientific disciplines. One of the most widespread is the belief that any law of nature should be the same at all times; and scientific investigations generally assume that laws of nature are the same regardless of the person measuring them. These sorts of principles have been incorporated into scientific inquiry at the most fundamental of levels. Any principle of relativity prescribes a symmetry in natural law: that is, the laws must look the same to one observer as they do to another. According to a theoretical result called Noether's theorem , any such symmetry will also imply a conservation law alongside. [ 1 ] [ 2 ] For example, if two observers at different times see the same laws, then a quantity called energy will be conserved . In this light, relativity principles make testable predictions about how nature behaves. According to the first postulate of the special theory of relativity: [ 3 ] Special principle of relativity : If a system of coordinates K is chosen so that, in relation to it, physical laws hold good in their simplest form, the same laws hold good in relation to any other system of coordinates K' moving in uniform translation relatively to K. This postulate defines an inertial frame of reference . The special principle of relativity states that physical laws should be the same in every inertial frame of reference , but that they may vary across non-inertial ones. This principle is used in both Newtonian mechanics and the theory of special relativity . Its influence in the latter is so strong that Max Planck named the theory after the principle. [ 4 ] The principle requires physical laws to be the same for any body moving at constant velocity as they are for a body at rest. A consequence is that an observer in an inertial reference frame cannot determine an absolute speed or direction of travel in space, and may only speak of speed or direction relative to some other object. The principle does not extend to non-inertial reference frames because those frames do not, in general experience, seem to abide by the same laws of physics. In classical physics , fictitious forces are used to describe acceleration in non-inertial reference frames. The special principle of relativity was first explicitly enunciated by Galileo Galilei in 1632 in his Dialogue Concerning the Two Chief World Systems , using the metaphor of Galileo's ship . Newtonian mechanics added to the special principle several other concepts, including laws of motion, gravitation, and an assertion of an absolute time . When formulated in the context of these laws, the special principle of relativity states that the laws of mechanics are invariant under a Galilean transformation . 
Joseph Larmor and Hendrik Lorentz discovered that Maxwell's equations , used in the theory of electromagnetism , were invariant only by a certain change of time and length units. This left some confusion among physicists, many of whom thought that a luminiferous aether was incompatible with the relativity principle, in the way it was defined by Henri Poincaré : The principle of relativity, according to which the laws of physical phenomena should be the same, whether for an observer fixed, or for an observer carried along in a uniform movement of translation; so that we have not and could not have any means of discerning whether or not we are carried along in such a motion. In their 1905 papers on electrodynamics , Henri Poincaré and Albert Einstein explained that with the Lorentz transformations the relativity principle holds perfectly. Einstein elevated the (special) principle of relativity to a postulate of the theory and derived the Lorentz transformations from this principle combined with the principle of the independence of the speed of light (in vacuum) from the motion of the source. These two principles were reconciled with each other by a re-examination of the fundamental meanings of space and time intervals. The strength of special relativity lies in its use of simple, basic principles, including the invariance of the laws of physics under a shift of inertial reference frames and the invariance of the speed of light in vacuum. (See also: Lorentz covariance .) It is possible to derive the form of the Lorentz transformations from the principle of relativity alone. Using only the isotropy of space and the symmetry implied by the principle of special relativity, one can show that the space-time transformations between inertial frames are either Galilean or Lorentzian. Whether the transformation is actually Galilean or Lorentzian must be determined with physical experiments. It is not possible to conclude that the speed of light c is invariant by mathematical logic alone. In the Lorentzian case, one can then obtain relativistic interval conservation and the constancy of the speed of light. [ 6 ] The general principle of relativity states: [ 7 ] All systems of reference are equivalent with respect to the formulation of the fundamental laws of physics. That is, physical laws are the same in all reference frames—inertial or non-inertial. An accelerated charged particle might emit synchrotron radiation , though a particle at rest does not. If we consider now the same accelerated charged particle in its non-inertial rest frame, it emits radiation at rest. Physics in non-inertial reference frames was historically treated by a coordinate transformation , first, to an inertial reference frame, performing the necessary calculations therein, and using another to return to the non-inertial reference frame. In most such situations, the same laws of physics can be used if certain predictable fictitious forces are added into consideration; an example is a uniformly rotating reference frame , which can be treated as an inertial reference frame if one adds a fictitious centrifugal force and Coriolis force into consideration. The problems involved are not always so trivial. Special relativity predicts that an observer in an inertial reference frame does not see objects he would describe as moving faster than the speed of light. 
However, in the non-inertial reference frame of Earth , treating a spot on the Earth as a fixed point, the stars are observed to move in the sky, circling once about the Earth per day. Since the stars are light years away, this observation means that, in the non-inertial reference frame of the Earth, anybody who looks at the stars is seeing objects which appear, to them, to be moving faster than the speed of light. Since non-inertial reference frames do not abide by the special principle of relativity, such situations are not self-contradictory . General relativity was developed by Einstein in the years 1907–1915. General relativity postulates that the global Lorentz covariance of special relativity becomes a local Lorentz covariance in the presence of matter. The presence of matter "curves" spacetime , and this curvature affects the path of free particles (and even the path of light). General relativity uses the mathematics of differential geometry and tensors in order to describe gravitation as an effect of the geometry of spacetime . Einstein based this new theory on the general principle of relativity and named the theory after the underlying principle. See the special relativity references and the general relativity references .
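The passage above on deriving the Lorentz transformations and interval conservation can be illustrated numerically. The following Python sketch (not part of the article) applies a Lorentz boost in one spatial dimension and checks that the spacetime interval s² = (ct)² − x² is unchanged, up to floating-point rounding.

```python
import numpy as np

def lorentz_boost(event, beta):
    """Boost a (ct, x) event along x with velocity v = beta * c."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    ct, x = event
    return (gamma * (ct - beta * x), gamma * (x - beta * ct))

def interval_squared(event):
    """Invariant interval s^2 = (ct)^2 - x^2 (one spatial dimension)."""
    ct, x = event
    return ct**2 - x**2

event = (3.0, 1.2)               # an arbitrary event, in units of length (ct, x)
for beta in (0.0, 0.5, 0.9):
    boosted = lorentz_boost(event, beta)
    print(beta, interval_squared(event), interval_squared(boosted))
# The two interval values agree for every beta (up to rounding), whereas a
# Galilean substitution (ct, x - beta*ct) would not preserve them.
```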
https://en.wikipedia.org/wiki/Principle_of_relativity
In biological nomenclature , the principle of typification is one of the guiding principles. [ 1 ] The International Code of Zoological Nomenclature provides that any named taxon in the family group, genus group, or species group have a name-bearing type which allows the name of the taxon to be objectively applied. The type does not define the taxon: that is done by a taxonomist; and an indefinite number of competing definitions can exist side by side. Rather, a type is a point of reference. A name has a type, and a taxonomist (having defined the taxon) can determine which existing types fall within the scope of the taxon. They can then use the rules in the Code to determine the valid name for the taxon.
https://en.wikipedia.org/wiki/Principle_of_typification
Principles of Mathematical Analysis , colloquially known as " PMA " or " Baby Rudin ," [ 1 ] is an undergraduate real analysis textbook written by Walter Rudin . Initially published by McGraw Hill in 1953, it is one of the most famous mathematics textbooks ever written. As a C. L. E. Moore instructor , Rudin taught the real analysis course at MIT in the 1951–1952 academic year. [ 2 ] [ 3 ] After he commented to W. T. Martin , who served as a consulting editor for McGraw Hill , that there were no textbooks covering the course material in a satisfactory manner, Martin suggested Rudin write one himself. After completing an outline and a sample chapter, he received a contract from McGraw Hill. He completed the manuscript in the spring of 1952, and it was published the year after. Rudin noted that in writing his textbook, his purpose was "to present a beautiful area of mathematics in a well-organized readable way, concisely, efficiently, with complete and correct proofs. It was an aesthetic pleasure to work on it." [ 2 ] The text was revised twice: first in 1964 (second edition) and then in 1976 (third edition). It has been translated into several languages, including Russian, Chinese, Spanish, French, German, Italian, Greek, Persian, Portuguese, and Polish. Rudin's text was the first modern English text on classical real analysis, and its organization of topics has been frequently imitated. [ 1 ] In Chapter 1, he constructs the real and complex numbers and outlines their properties. (In the third edition, the Dedekind cut construction is sent to an appendix for pedagogical reasons.) Chapter 2 discusses the topological properties of the real numbers as a metric space . The rest of the text covers topics such as continuous functions , differentiation , the Riemann–Stieltjes integral , sequences and series of functions (in particular uniform convergence ), and outlines examples such as power series , the exponential and logarithmic functions , the fundamental theorem of algebra , and Fourier series . After this single-variable treatment, Rudin goes in detail about real analysis in more than one dimension, with discussion of the implicit and inverse function theorems , differential forms , the generalized Stokes theorem , and the Lebesgue integral . [ 4 ]
https://en.wikipedia.org/wiki/Principles_of_Mathematical_Analysis
The Prins reaction is an organic reaction consisting of an electrophilic addition of an aldehyde or ketone to an alkene or alkyne followed by capture of a nucleophile or elimination of an H + ion. [ 1 ] [ 2 ] [ 3 ] The outcome of the reaction depends on reaction conditions. With water and a protic acid such as sulfuric acid as the reaction medium, and with formaldehyde as the carbonyl component, the reaction product is a 1,3-diol ( 3 ). When water is absent, the cationic intermediate loses a proton to give an allylic alcohol ( 4 ). With an excess of formaldehyde and a low reaction temperature the reaction product is a dioxane ( 5 ). When water is replaced by acetic acid the corresponding esters are formed. The original reactants employed by Dutch chemist Hendrik Jacobus Prins in his 1919 publication were styrene ( scheme 2 ), pinene , camphene , eugenol , isosafrole and anethole . These procedures have since been optimized. [ 4 ] Hendrik Jacobus Prins discovered two new organic reactions during his doctoral research in the years 1911–1912. The first is the addition of polyhalogen compounds to olefins, and the second is the acid-catalyzed addition of aldehydes to olefinic compounds. The early studies on the Prins reaction were exploratory in nature and did not attract much attention until 1937. The development of petroleum cracking in 1937 increased the production of unsaturated hydrocarbons. As a consequence, the commercial availability of lower olefins, coupled with aldehydes produced from the oxidation of low-boiling paraffins, increased interest in studying the olefin–aldehyde condensation. Later on, the Prins reaction emerged as a powerful C–O and C–C bond-forming technique in organic synthesis. [ 5 ] In 1937 the reaction was investigated as part of a quest for di-olefins to be used in synthetic rubber . The reaction mechanism for this reaction is depicted in scheme 5. The carbonyl reactant (2) is protonated by a protic acid, and for the resulting oxonium ion 3 two resonance structures can be drawn. This electrophile engages in an electrophilic addition with the alkene to give the carbocationic intermediate 4. Exactly how much positive charge is present on the secondary carbon atom in this intermediate should be determined for each reaction set. Evidence exists for neighbouring group participation of the hydroxyl oxygen or its neighboring carbon atom. When the overall reaction has a high degree of concertedness , the charge build-up will be modest. The three reaction modes open to this oxocarbenium intermediate correspond to the products described above: capture of water to give the 1,3-diol, loss of a proton to give the unsaturated alcohol, and capture of a second equivalent of the aldehyde to give the dioxane. Many variations of the Prins reaction exist because it lends itself easily to cyclization reactions and because it is possible to capture the oxo-carbenium ion with a large array of nucleophiles. The halo-Prins reaction is one such modification with replacement of protic acids and water by Lewis acids such as stannic chloride and boron tribromide . The halogen is now the nucleophile recombining with the carbocation. The cyclization of certain allyl pulegones in scheme 7 with titanium tetrachloride in dichloromethane at −78 °C gives access to the decalin skeleton with the hydroxyl group and chlorine group predominantly in cis configuration (91% cis). [ 7 ] This observed cis diastereoselectivity is due to the intermediate formation of a trichlorotitanium alkoxide, making possible an easy delivery of chlorine to the carbocation from the same face. The trans isomer is preferred (98%) when the switch is made to a tin tetrachloride reaction at room temperature .
The Prins-pinacol reaction is a cascade reaction of a Prins reaction and a pinacol rearrangement . The carbonyl group in the reactant in scheme 8 [ 8 ] is masked as a dimethyl acetal and the hydroxyl group is masked as a triisopropylsilyl ether (TIPS). With lewis acid stannic chloride the oxonium ion is activated and the pinacol rearrangement of the resulting Prins intermediate results in ring contraction and referral of the positive charge to the TIPS ether which eventually forms an aldehyde group in the final product as a mixture of cis and trans isomers with modest diastereoselectivity. The key oxo-carbenium intermediate can be formed by other routes than simple protonation of a carbonyl. In a key step of the synthesis of exiguolide, it was formed by protonation of a vinylogous ester: [ 9 ]
https://en.wikipedia.org/wiki/Prins_reaction
Print-through is a generally undesirable effect that arises in the use of magnetic tape for storing analog information, in particular music , caused by contact transfer of signal patterns from one layer of tape to another as it sits wound concentrically on a reel. Print-through is a category of noise caused by contact transfer of signal patterns from one layer of tape to another after it is wound onto a reel. Print-through can take two forms: The former is unstable over time and can be easily erased by rewinding a tape and letting it sit so that the patterns formed by the contact of upper and lower layers begin to erase each other and form new patterns with the repositioning of upper/lower layers after rewinding. This type of contact printing begins immediately after a recording and increases over time at a rate dependent on the temperature of the storage conditions. Depending on tape formulation and type, a maximum level will be reached after a certain length of time, if it is not further disturbed physically or magnetically. The audibility of print noise caused by contact printing depends on a number of factors: Tape speed is a factor because of the shift in wavelengths. For example, the strongest print signal on a C-60 cassette running at 1.875 inches per second (4.76 cm/s) is about 426 Hz (605 Hz for a C-90), while an open-reel tape recorded at 7.5 inches per second (19 cm/s) would have its strongest signal at 630 Hz if the tape were a professional tape with a 1.5 mils (38 μm) base film or 852 Hz if the tape were a consumer version with a base film of 1.0 mil (25 μm) thickness. The cause of print-through is due to an imbalance of magnetic and thermal energy in the magnetic particle. Once the magnetic energy is only 25 times greater than the thermal energy, the particle becomes unstable enough to be influenced by flux energy from the layer above or below the tape. The amount of magnetic energy depends on the coercivity of the particles, their shapes (long, thin particles make stronger "magnets"), the ratio of ideally shaped particles to defective particles, and their crystalline structures. Metal particles, although very small, have very high values of coercivity and are the most resistant to print-through effects because their magnetic energy is seldom challenged by thermal energy. Particles fractured by excessive milling prior to coating will increase levels of print depending on their ratio compared to their well-formed neighboring particles. Anhysteretic print signals are almost as strong as intentionally recorded signals and are much more difficult to erase. This type of print noise is relatively rare because users are typically careful about accidentally exposing recordings to strong magnetic fields, and the magnetic influence of such fields decreases with distance. Digital tapes can also be affected by contact print effects in a phenomenon known as "bit-shift" when upper or lower layers of tape cause a middle layer to alter the pulses recorded to represent binary information. Since analog video is recorded by frequency-modulation of the video signal, the FM capture effect shields the signal against this noise; however, the linear audio and (depending on format) chrominance signals of a video cassette may have some print effects. 
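The frequencies quoted above follow from the recorded wavelength and the tape speed (frequency = speed / wavelength). The short Python sketch below only back-computes the wavelengths implied by the quoted figures; it makes no claim about the exact physics fixing the peak, beyond the text's point that thinner tape stock and different speeds shift the print-through peak to different audio frequencies.

```python
# Wavelength on tape corresponding to the print-through peaks quoted above:
# wavelength = tape speed / frequency.  (Illustrative arithmetic only.)
cases = {
    "C-60 cassette, 4.76 cm/s":              (4.76, 426),
    "C-90 cassette, 4.76 cm/s":              (4.76, 605),
    "open reel 19 cm/s, 38 um base film":    (19.0, 630),
    "open reel 19 cm/s, 25 um base film":    (19.0, 852),
}
for name, (speed_cm_s, freq_hz) in cases.items():
    wavelength_um = speed_cm_s / freq_hz * 1e4   # cm -> micrometres
    print(f"{name}: about {wavelength_um:.0f} um")
# The implied wavelengths scale with the layer-to-layer spacing in the tape
# pack, which is why thinner tape and slower speeds place the print peak at
# different points in the audio band.
```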
While print-through is a form of unwanted noise, contact printing was used deliberately for high-speed recording (duplication, high speed en masse copying) of video tape, instead of having to record thousands of tapes on thousands of VCRs at normal playback speed, or recording the source material repeatedly in real time to large reels (without end caps) of tape (called pancakes) over 48 hours long to be inserted into cassettes. [ 2 ] DuPont [ 3 ] in conjunction with Otari [ 4 ] invented a form of thermal magnetic duplication ("TMD") by which a high-coercivity metal mother master tape was brought into direct contact with a chromium dioxide copy (slave) tape. The coercivity of the mother tape is higher than that of the copy tape, so when the copy tape is heated and brought into contact with the mother tape, the copy tape gets a mirror image of the signal on the mother tape without the mother tape losing its signal. The recording on the mother tape was a mirror image of a valid video signal. Immediately before the copy tape came into contact with the mother tape, a focused laser beam heated it to its Curie point at which its value of coercivity dropped to very low values so that it picked up a near perfect copy of the mother tape as it cooled. [ 5 ] [ 6 ] The mother tape was made using a special reel to reel video tape recorder called a mirror master recorder [ 7 ] and was held inside the machine in an endless loop. This system could achieve speeds of up to 300 times playback speed in NTSC VHS SP mode, 900 times in VHS EP mode and 428 times in PAL/SECAM tapes. [ 8 ] Sony developed a system known as "Sprinter" that used a similar mother master tape forced into close contact with any blank copy tape using compressed air and run across a rotating transfer head in which a weak AC high frequency sine wave is used to transfer the information anhysteretically to the copy tape with minimal erasure of the mother tape on each pass. The sprinter does not use a laser to heat the copy tape which saves on power consumption. The transfer head may have a vacuum cleaner to reduce dropout caused by dust. This system was used to quickly duplicate VHS tapes at speeds of up to 240 times faster than playback speed for NTSC and 342 times for PAL/SECAM video signals without having to use expensive chrome dioxide tape; the tape was fed into the sprinter at a speed of 8 meters per second. The mother tape was enclosed in a space (not in a reel, but rather in an endless loop) in the Sprinter; this was made possible by a horizontal vibrating tape feed system where the edge of the endless loop tape sits in a table that diagonally vibrates using vibration generated by piezoelectric elements and amplified using mechanical oscillation, causing the tape in the table to move forward. The copy tape was unwound, recorded using the mother tape, then wound onto large reels (called pancakes) containing enough tape for several VHS cassettes. The mother tape had a coercivity three times that of normal VHS tape and was made by recording onto it using a special reel to reel video tape recorder called a mirror mother VTR using video from a D-2 (video) , Type B videotape or Type C videotape master source tape. The video tape recorder had a sapphire blade to clean the surface of the mother tape, reducing dropout caused by dust. Sprinter mother tapes did suffer enough loss that they had to be replaced after a number of passes. [ 9 ] The master had to be replaced every 1000 copies. 
This form of high-speed recording was very cost effective when recording in the EP (extra long play) mode because it was three times faster than recording in SP (standard play) mode while real-time recording took the same amount of time whether in EP mode that used less tape or SP mode that used a greater amount of tape. High-speed video recording of EP video produced far more consistent results than real-time recording at the slowest VHS speed. After duplication, the copy tape was loaded into video tape loaders that wound the tape into empty VHS cassette shells that contained only leader tape. [ 10 ]
https://en.wikipedia.org/wiki/Print-through
PrintNightmare is a critical security vulnerability affecting the Microsoft Windows operating system. [ 2 ] [ 5 ] The vulnerability occurred within the print spooler service. [ 6 ] [ 7 ] There were two variants, one permitting remote code execution (CVE-2021-34527), and the other leading to privilege escalation (CVE-2021-1675). [ 7 ] [ 8 ] A third vulnerability (CVE-2021-34481) was announced July 15, 2021, and upgraded to remote code execution by Microsoft in August. [ 9 ] [ 10 ] On July 6, 2021, Microsoft started releasing out-of-band (unscheduled) patches attempting to address the vulnerability. [ 11 ] Due to its severity, Microsoft released patches for Windows 7 , for which support had ended in January 2020. [ 11 ] [ 12 ] The patches resulted in some printers ceasing to function. [ 13 ] [ 14 ] Researchers have noted that the vulnerability has not been fully addressed by the patches. [ 15 ] After the patch is applied, only administrator accounts on a Windows print server are able to install printer drivers. [ 16 ] Part of the vulnerability related to the ability of non-administrators to install printer drivers on the system, for example for shared printers on systems without password-protected sharing. [ 16 ] The organization which discovered the vulnerability, Sangfor, published a proof of concept in a public GitHub repository. [ 3 ] [ 17 ] Apparently published in error, or as a result of a miscommunication between the researchers and Microsoft, the proof of concept was deleted shortly after. [ 3 ] [ 18 ] However, several copies have since appeared online. [ 3 ]
https://en.wikipedia.org/wiki/PrintNightmare
Print Screen (often abbreviated Print Scrn , Prnt Scrn , Prnt Scr , Prt Scrn , Prt Scn , Prt Scr , Prt Sc , Pr Sc , or PS ) is a key present on most PC keyboards . It is typically situated in the same section as the break key and scroll lock key. The print screen may share the same key as system request . Under command-line based operating systems such as MS-DOS , this key causes the contents of the current text mode screen memory buffer to be copied to the standard printer port , usually LPT1. In essence, whatever is currently on the screen when the key is pressed will be printed. Pressing the Ctrl key in combination with Prt Sc turns on and off the "printer echo" feature. When echo is in effect, any conventional text output to the screen will be copied ("echoed") to the printer. There is also a Unicode character for print screen, U+2399 ⎙ PRINT SCREEN SYMBOL . Newer-generation operating systems using a graphical interface tend to save a bitmap image of the current screen, or screenshot , to their clipboard or comparable storage area. Some shells allow modification of the exact behavior using modifier keys such as the control key . In Microsoft Windows , pressing Prt Sc will capture the entire screen, [ 1 ] while pressing the Alt key in combination with Prt Sc will capture the currently selected window. [ 1 ] The captured image can then be pasted into an editing program such as a graphics program or even a word processor . Pressing Prt Sc with both the left Alt key and left ⇧ Shift pressed turns on a high contrast mode (this keyboard shortcut can be turned off by the user). [ 2 ] Since Windows 8, pressing the ⊞ Win key in combination with Prt Sc (and optionally in addition to the Alt key) will save the captured image to disk (the default pictures location). [ 3 ] This behavior is therefore backward compatible with users who learned Print Screen actions under operating systems such as MS-DOS . In Windows 10, the Prt Sc key can be configured to open the 'New' function of the Snip & Sketch tool. This allows the user to take a full screen, specific window, or defined area screenshot and copy it to clipboard. This behaviour can be enabled by going to Snip & Sketch, accessing Settings via the menu and enabling the 'Use the PrtScn button to open screen snipping'. In KDE and GNOME , very similar shortcuts are available, which open a screenshot tool (Spectacle [ 4 ] or GNOME Screenshot respectively), giving options to save the screenshot, plus more options like manually picking a specific window, screen area, using a timeout, etc. Sending the image to many services (KDE), or even screen recording (GNOME), is built-in too. [ 5 ] Macintosh does not use a print screen key; instead, key combinations are used that start with ⌘ Cmd + ⇧ Shift . These key combinations are used to provide more functionality including the ability to select screen objects. ⌘ Cmd + ⇧ Shift + 3 captures the whole screen, while ⌘ Cmd + ⇧ Shift + 4 allows for part of the screen to be selected. The standard print screen functions described above save the image to the desktop . However, using any of the key sequences described above, but additionally pressing the Ctrl will modify the behavior to copy the image to the system clipboard instead. On the IBM Model F keyboard, the key is labeled PrtSc and is located under ↵ Enter . On the IBM Model M , it is located next to F12 and is labeled Print Screen.
https://en.wikipedia.org/wiki/Print_Screen
In computing , a print job is a file or set of files that has been submitted to be printed with a printer . Jobs are typically identified by a unique number, and are assigned to a particular destination, usually a printer . Jobs can also have options associated with them such as media size, number of copies and priority. Within the print system, a job is a single queueable object that represents a document that needs to be rendered and transferred to a printer. Print jobs are created on specific print queues and cannot be transferred between print queues. Key attributes of a print job include: Job ID: uniquely identifies the print job within its print queue. Spool file: the on-disk representation of the job's data. Shadow file: the on-disk representation of the job's configuration. Status: the current state of the job. Data type: the format of the data in the spool file, such as EMF or RAW. Other configuration: a name, a set of named properties, etc. In larger environments, print jobs may go through a centralized print server , before reaching the printing destination. Some (multifunction) printers have local storage (like a hard disk drive ) to process and queue the jobs before printing. When getting rid of old printers with local storage, one should keep in mind that confidential print jobs (documents) are potentially still stored unencrypted on the hard disk drive and can be undeleted . [ 1 ] This computing article is a stub . You can help Wikipedia by expanding it .
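The attributes listed above map naturally onto a small data structure. The sketch below models a print job and a per-printer queue; field names such as job_id, spool_file and shadow_file mirror the description above, but the classes are an illustrative simplification, not the API of any real print spooler.

```python
# Minimal sketch of a print job and a per-printer queue, mirroring the
# attributes described above (job ID, spool file, shadow file, status, data
# type, and options such as copies and priority). Illustrative only.
from collections import deque
from dataclasses import dataclass

@dataclass
class PrintJob:
    job_id: int              # unique within its print queue
    document_name: str
    spool_file: str          # on-disk representation of the job's data
    shadow_file: str         # on-disk representation of the job's configuration
    data_type: str = "RAW"   # e.g. "EMF" or "RAW"
    copies: int = 1
    media_size: str = "A4"
    priority: int = 1        # higher value = printed sooner
    status: str = "queued"   # e.g. queued, printing, done, error

class PrintQueue:
    """A single destination's queue; jobs cannot move between queues."""

    def __init__(self, printer_name):
        self.printer_name = printer_name
        self._jobs = deque()
        self._next_id = 1

    def submit(self, **attrs):
        """Create a job on this queue and return it."""
        job = PrintJob(job_id=self._next_id, **attrs)
        self._next_id += 1
        self._jobs.append(job)
        return job

    def next_job(self):
        """Pop the highest-priority job (FIFO among equal priorities)."""
        if not self._jobs:
            return None
        job = max(self._jobs, key=lambda j: j.priority)
        self._jobs.remove(job)
        return job

# Example usage:
queue = PrintQueue("office-laser")
queue.submit(document_name="report.pdf", spool_file="00001.SPL",
             shadow_file="00001.SHD", copies=2, priority=5)
queue.submit(document_name="memo.txt", spool_file="00002.SPL",
             shadow_file="00002.SHD")
print(queue.next_job().document_name)  # -> report.pdf (higher priority)
```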
https://en.wikipedia.org/wiki/Print_job
A printed circuit board ( PCB ), also called printed wiring board ( PWB ), is a laminated sandwich structure of conductive and insulating layers, each with a pattern of traces, planes and other features (similar to wires on a flat surface) etched from one or more sheet layers of copper laminated onto or between sheet layers of a non-conductive substrate. [ 1 ] PCBs are used to connect or "wire" components to one another in an electronic circuit . Electrical components may be fixed to conductive pads on the outer layers, generally by soldering , which both electrically connects and mechanically fastens the components to the board. Another manufacturing process adds vias , metal-lined drilled holes that enable electrical interconnections between conductive layers, to boards with more than a single side. Printed circuit boards are used in nearly all electronic products today. Alternatives to PCBs include wire wrap and point-to-point construction , both once popular but now rarely used. PCBs require additional design effort to lay out the circuit, but manufacturing and assembly can be automated. Electronic design automation software is available to do much of the work of layout. Mass-producing circuits with PCBs is cheaper and faster than with other wiring methods, as components are mounted and wired in one operation. Large numbers of PCBs can be fabricated at the same time, and the layout has to be done only once. PCBs can also be made manually in small quantities, with reduced benefits. [ 2 ] PCBs can be single-sided (one copper layer), double-sided (two copper layers on both sides of one substrate layer), or multi-layer (stacked layers of substrate with copper plating sandwiched between each and on the outside layers). Multi-layer PCBs provide much higher component density, because circuit traces on the inner layers would otherwise take up surface space between components. The rise in popularity of multilayer PCBs with more than two, and especially with more than four, copper planes was concurrent with the adoption of surface-mount technology . However, multilayer PCBs make repair, analysis, and field modification of circuits much more difficult and usually impractical. The world market for bare PCBs exceeded US$ 60.2 billion in 2014, [ 3 ] and was estimated at $80.33 billion in 2024, forecast to be $96.57 billion for 2029, growing at 4.87% per annum. [ 4 ] Before the development of printed circuit boards, electrical and electronic circuits were wired point-to-point on a chassis. Typically, the chassis was a sheet metal frame or pan, sometimes with a wooden bottom. Components were attached to the chassis, usually by insulators when the connecting point on the chassis was metal, and then their leads were connected directly or with jumper wires by soldering , or sometimes using crimp connectors, wire connector lugs on screw terminals, or other methods. Circuits were large, bulky, heavy, and relatively fragile (even discounting the breakable glass envelopes of the vacuum tubes that were often included in the circuits), and production was labor-intensive, so the products were expensive. Development of the methods used in modern printed circuit boards started early in the 20th century. In 1903, a German inventor, Albert Hanson, described flat foil conductors laminated to an insulating board, in multiple layers. Thomas Edison experimented with chemical methods of plating conductors onto linen paper in 1904. 
Arthur Berry in 1913 patented a print-and- etch method in the UK, and in the United States Max Schoop obtained a patent [ 5 ] to flame-spray metal onto a board through a patterned mask. Charles Ducas in 1925 patented a method of electroplating circuit patterns. [ 6 ] Predating the printed circuit invention, and similar in spirit, was John Sargrove 's 1936–1947 Electronic Circuit Making Equipment (ECME) that sprayed metal onto a Bakelite plastic board. The ECME could produce three radio boards per minute. The Austrian engineer Paul Eisler invented the printed circuit as part of a radio set while working in the UK around 1936. In 1941 a multi-layer printed circuit was used in German magnetic influence naval mines . Around 1943 the United States began to use the technology on a large scale to make proximity fuzes for use in World War II. [ 6 ] Such fuzes required an electronic circuit that could withstand being fired from a gun, and could be produced in quantity. The Centralab Division of Globe Union submitted a proposal which met the requirements: a ceramic plate would be screenprinted with metallic paint for conductors and carbon material for resistors , with ceramic disc capacitors and subminiature vacuum tubes soldered in place. [ 7 ] The technique proved viable, and the resulting patent on the process, which was classified by the U.S. Army, was assigned to Globe Union. It was not until 1984 that the Institute of Electrical and Electronics Engineers (IEEE) awarded Harry W. Rubinstein its Cledo Brunetti Award for early key contributions to the development of printed components and conductors on a common insulating substrate. Rubinstein was honored in 1984 by his alma mater, the University of Wisconsin-Madison , for his innovations in the technology of printed electronic circuits and the fabrication of capacitors. [ 8 ] [ 9 ] This invention also represents a step in the development of integrated circuit technology, as not only wiring but also passive components were fabricated on the ceramic substrate. In 1948, the US released the invention for commercial use. Printed circuits did not become commonplace in consumer electronics until the mid-1950s, after the Auto-Sembly process was developed by the United States Army. At around the same time in the UK work along similar lines was carried out by Geoffrey Dummer , then at the RRDE . Motorola was an early leader in bringing the process into consumer electronics, announcing in August 1952 the adoption of "plated circuits" in home radios after six years of research and a $1M investment. [ 10 ] Motorola soon began using its trademarked term for the process, PLAcir, in its consumer radio advertisements. [ 11 ] Hallicrafters released its first "foto-etch" printed circuit product, a clock-radio, on November 1, 1952. [ 12 ] Even as circuit boards became available, the point-to-point chassis construction method remained in common use in industry (such as TV and hi-fi sets) into at least the late 1960s. Printed circuit boards were introduced to reduce the size, weight, and cost of parts of the circuitry. In 1960, a small consumer radio receiver might be built with all its circuitry on one circuit board, but a TV set would probably contain one or more circuit boards. Originally, every electronic component had wire leads , and a PCB had holes drilled for each wire of each component. The component leads were then inserted through the holes and soldered to the copper PCB traces. This method of assembly is called through-hole construction . 
In 1949, Moe Abramson and Stanislaus F. Danko of the United States Army Signal Corps developed the Auto-Sembly process in which component leads were inserted into a copper foil interconnection pattern and dip soldered . The patent they obtained in 1956 was assigned to the U.S. Army. [ 13 ] With the development of board lamination and etching techniques, this concept evolved into the standard printed circuit board fabrication process in use today. Soldering could be done automatically by passing the board over a ripple, or wave, of molten solder in a wave-soldering machine. However, the wires and holes are inefficient since drilling holes is expensive and consumes drill bits and the protruding wires are cut off and discarded. Since the 1980s, surface mount parts have increasingly replaced through-hole components, enabling smaller boards and lower production costs, but making repairs more challenging. In the 1990s the use of multilayer surface boards became more frequent. As a result, size was further minimized and both flexible and rigid PCBs were incorporated in different devices. In 1995 PCB manufacturers began using microvia technology to produce High-Density Interconnect (HDI) PCBs. [ 14 ] Recent advances in 3D printing have meant that there are several new techniques in PCB creation. 3D printed electronics (PEs) can be utilized to print items layer by layer and subsequently the item can be printed with a liquid ink that contains electronic functionalities. HDI (High Density Interconnect) technology allows for a denser design on the PCB and thus potentially smaller PCBs with more traces and components in a given area. As a result, the paths between components can be shorter. HDIs use blind/buried vias, or a combination that includes microvias. With multi-layer HDI PCBs the interconnection of several vias stacked on top of each other (stacked vías, instead of one deep buried via) can be made stronger, thus enhancing reliability in all conditions. The most common applications for HDI technology are computer and mobile phone components as well as medical equipment and military communication equipment. A 4-layer HDI microvia PCB is equivalent in quality to an 8-layer through-hole PCB, so HDI technology can reduce costs. HDI PCBs are often made using build-up film such as ajinomoto build-up film, which is also used in the production of flip chip packages. [ 15 ] [ 16 ] Some PCBs have optical waveguides, similar to optical fibers built on the PCB. [ 17 ] A basic PCB consists of a flat sheet of insulating material and a layer of copper foil , laminated to the substrate. Chemical etching divides the copper into separate conducting lines called tracks or circuit traces , pads for connections, vias to pass connections between layers of copper, and features such as solid conductive areas for electromagnetic shielding or other purposes. The tracks function as wires fixed in place, and are insulated from each other by air and the board substrate material. The surface of a PCB may have a coating that protects the copper from corrosion and reduces the chances of solder shorts between traces or undesired electrical contact with stray bare wires. For its function in helping to prevent solder shorts, the coating is called solder resist or solder mask . The pattern to be etched into each copper layer of a PCB is called the "artwork". The etching is usually done using photoresist which is coated onto the PCB, then exposed to light projected in the pattern of the artwork. 
The resist material protects the copper from dissolution into the etching solution. The etched board is then cleaned. A PCB design can be mass-reproduced in a way similar to the way photographs can be mass-duplicated from film negatives using a photographic printer . FR-4 glass epoxy is the most common insulating substrate. Another substrate material is cotton paper impregnated with phenolic resin , often tan or brown. When a PCB has no components installed, it is less ambiguously called a printed wiring board ( PWB ) or etched wiring board . [ 18 ] However, the term "printed wiring board" has fallen into disuse. A PCB populated with electronic components is called a printed circuit assembly ( PCA ), printed circuit board assembly or PCB assembly ( PCBA ). In informal usage, the term "printed circuit board" most commonly means "printed circuit assembly" (with components). The IPC preferred term for an assembled board is circuit card assembly ( CCA ), [ 19 ] and for an assembled backplane it is backplane assembly . "Card" is another widely used informal term for a "printed circuit assembly". For example, expansion card . A PCB may be printed with a legend identifying the components, test points , or identifying text. Originally, silkscreen printing was used for this purpose, but today other, finer quality printing methods are usually used. Normally the legend does not affect the function of a PCBA. A printed circuit board can have multiple layers of copper which almost always are arranged in pairs. The number of layers and the interconnection designed between them (vias, PTHs) provide a general estimate of the board complexity. Using more layers allow for more routing options and better control of signal integrity, but are also time-consuming and costly to manufacture. Likewise, selection of the vias for the board also allow fine tuning of the board size, escaping of signals off complex ICs, routing, and long term reliability, but are tightly coupled with production complexity and cost. One of the simplest boards to produce is the two-layer board. It has copper on both sides that are referred to as external layers; multi layer boards sandwich additional internal layers of copper and insulation. After two-layer PCBs, the next step up is the four-layer. The four layer board adds significantly more routing options in the internal layers as compared to the two layer board, and often some portion of the internal layers is used as ground plane or power plane, to achieve better signal integrity, higher signaling frequencies, lower EMI, and better power supply decoupling. In multi-layer boards, the layers of material are laminated together in an alternating sandwich: copper, substrate, copper, substrate, copper, etc.; each plane of copper is etched, and any internal vias (that will not extend to both outer surfaces of the finished multilayer board) are plated-through, before the layers are laminated together. Only the outer layers need be coated; the inner copper layers are protected by the adjacent substrate layers. "Through hole" components are mounted by their wire leads passing through the board and soldered to traces on the other side. "Surface mount" components are attached by their leads to copper traces on the same side of the board. A board may use both methods for mounting components. PCBs with only through-hole mounted components are now uncommon. Surface mounting is used for transistors , diodes , IC chips , resistors , and capacitors. 
Through-hole mounting may be used for some large components such as electrolytic capacitors and connectors. The first PCBs used through-hole technology , mounting electronic components by lead inserted through holes on one side of the board and soldered onto copper traces on the other side. Boards may be single-sided, with an unplated component side, or more compact double-sided boards, with components soldered on both sides. Horizontal installation of through-hole parts with two axial leads (such as resistors, capacitors, and diodes) is done by bending the leads 90 degrees in the same direction, inserting the part in the board (often bending leads located on the back of the board in opposite directions to improve the part's mechanical strength), soldering the leads, and trimming off the ends. Leads may be soldered either manually or by a wave soldering machine. [ 20 ] Surface-mount technology emerged in the 1960s, gained momentum in the early 1980s, and became widely used by the mid-1990s. Components were mechanically redesigned to have small metal tabs or end caps that could be soldered directly onto the PCB surface, instead of wire leads to pass through holes. Components became much smaller and component placement on both sides of the board became more common than with through-hole mounting, allowing much smaller PCB assemblies with much higher circuit densities. Surface mounting lends itself well to a high degree of automation, reducing labor costs and greatly increasing production rates compared with through-hole circuit boards. Components can be supplied mounted on carrier tapes. Surface mount components can be about one-quarter to one-tenth of the size and weight of through-hole components, and passive components much cheaper. However, prices of semiconductor surface mount devices (SMDs) are determined more by the chip itself than the package, with little price advantage over larger packages, and some wire-ended components, such as 1N4148 small-signal switch diodes, are actually significantly cheaper than SMD equivalents. Each trace consists of a flat, narrow part of the copper foil that remains after etching. Its resistance , determined by its width, thickness, and length, must be sufficiently low for the current the conductor will carry. Power and ground traces may need to be wider than signal traces . In a multi-layer board one entire layer may be mostly solid copper to act as a ground plane for shielding and power return. For microwave circuits, transmission lines can be laid out in a planar form such as stripline or microstrip with carefully controlled dimensions to assure a consistent impedance . In radio-frequency and fast switching circuits the inductance and capacitance of the printed circuit board conductors become significant circuit elements, usually undesired; conversely, they can be used as a deliberate part of the circuit design, as in distributed-element filters , antennae , and fuses , obviating the need for additional discrete components. High density interconnects (HDI) PCBs have tracks or vias with a width or diameter of under 152 micrometers. [ 21 ] Laminates are manufactured by curing layers of cloth or paper with thermoset resin under pressure and heat to form an integral final piece of uniform thickness. They can be up to 4 by 8 feet (1.2 by 2.4 m) in width and length. Varying cloth weaves (threads per inch or cm), cloth thickness, and resin percentage are used to achieve the desired final thickness and dielectric characteristics. 
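Returning briefly to conductor sizing: the statement above that a trace's resistance is determined by its width, thickness, and length corresponds to the familiar relation R = ρL/(wt). The sketch below evaluates it for copper; the resistivity constant and the example dimensions are illustrative assumptions, not figures from the cited sources.

```python
# Minimal sketch: estimate the DC resistance of a straight copper trace from
# its length, width and thickness using R = rho * L / (w * t). The copper
# resistivity and the example dimensions are illustrative assumptions.
RHO_COPPER = 1.68e-8  # ohm-metre, near room temperature

def trace_resistance_ohms(length_mm, width_mm, thickness_um):
    """Return the resistance in ohms of a rectangular copper trace."""
    length = length_mm * 1e-3        # metres
    width = width_mm * 1e-3          # metres
    thickness = thickness_um * 1e-6  # metres
    return RHO_COPPER * length / (width * thickness)

# A 100 mm long, 0.25 mm wide trace in 35 um (1 oz/ft^2) copper:
r = trace_resistance_ohms(100, 0.25, 35)
print(f"{r * 1000:.0f} milliohms")  # roughly 192 milliohms
```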
Available standard laminate thickness are listed in ANSI/IPC-D-275. [ 22 ] The cloth or fiber material used, resin material, and the cloth to resin ratio determine the laminate's type designation (FR-4, CEM -1, G-10 , etc.) and therefore the characteristics of the laminate produced. Important characteristics are the level to which the laminate is fire retardant , the dielectric constant (e r ), the loss tangent (tan δ), the tensile strength , the shear strength , the glass transition temperature (T g ), and the Z-axis expansion coefficient (how much the thickness changes with temperature). There are quite a few different dielectrics that can be chosen to provide different insulating values depending on the requirements of the circuit. Some of these dielectrics are polytetrafluoroethylene (Teflon), FR-4, FR-1, CEM-1 or CEM-3. Well known pre-preg materials used in the PCB industry are FR-2 (phenolic cotton paper), FR-3 (cotton paper and epoxy), FR-4 (woven glass and epoxy), FR-5 (woven glass and epoxy), FR-6 (matte glass and polyester), G-10 (woven glass and epoxy), CEM-1 (cotton paper and epoxy), CEM-2 (cotton paper and epoxy), CEM-3 (non-woven glass and epoxy), CEM-4 (woven glass and epoxy), CEM-5 (woven glass and polyester). Thermal expansion is an important consideration especially with ball grid array (BGA) and naked die technologies, and glass fiber offers the best dimensional stability. FR-4 is by far the most common material used today. The board stock with unetched copper on it is called "copper-clad laminate". With decreasing size of board features and increasing frequencies, small non-homogeneities like uneven distribution of fiberglass or other filler, thickness variations, and bubbles in the resin matrix, and the associated local variations in the dielectric constant, are gaining importance. The circuit-board substrates are usually dielectric composite materials. The composites contain a matrix (usually an epoxy resin ) and a reinforcement (usually a woven, sometimes non-woven, glass fibers, sometimes even paper), and in some cases a filler is added to the resin (e.g. ceramics; titanate ceramics can be used to increase the dielectric constant). The reinforcement type defines two major classes of materials: woven and non-woven. Woven reinforcements are cheaper, but the high dielectric constant of glass may not be favorable for many higher-frequency applications. The spatially non-homogeneous structure also introduces local variations in electrical parameters, due to different resin/glass ratio at different areas of the weave pattern. Non-woven reinforcements, or materials with low or no reinforcement, are more expensive but more suitable for some RF/analog applications. The substrates are characterized by several key parameters, chiefly thermomechanical ( glass transition temperature , tensile strength , shear strength , thermal expansion ), electrical ( dielectric constant , loss tangent , dielectric breakdown voltage , leakage current , tracking resistance ...), and others (e.g. moisture absorption). At the glass transition temperature the resin in the composite softens and significantly increases thermal expansion; exceeding T g then exerts mechanical overload on the board components - e.g. the joints and the vias. Below T g the thermal expansion of the resin roughly matches copper and glass, above it gets significantly higher. 
As the reinforcement and copper confine the board along the plane, virtually all volume expansion projects to the thickness and stresses the plated-through holes. Repeated soldering or other exposition to higher temperatures can cause failure of the plating, especially with thicker boards; thick boards therefore require a matrix with a high T g . The materials used determine the substrate's dielectric constant . This constant is also dependent on frequency, usually decreasing with frequency. As this constant determines the signal propagation speed , frequency dependence introduces phase distortion in wideband applications; as flat a dielectric constant vs frequency characteristics as is achievable is important here. The impedance of transmission lines decreases with frequency, therefore faster edges of signals reflect more than slower ones. Dielectric breakdown voltage determines the maximum voltage gradient the material can be subjected to before suffering a breakdown (conduction, or arcing, through the dielectric). Tracking resistance determines how the material resists high voltage electrical discharges creeping over the board surface. Loss tangent determines how much of the electromagnetic energy from the signals in the conductors is absorbed in the board material. This factor is important for high frequencies. Low-loss materials are more expensive. Choosing unnecessarily low-loss material is a common engineering error in high-frequency digital design; it increases the cost of the boards without a corresponding benefit. Signal degradation by loss tangent and dielectric constant can be easily assessed by an eye pattern . Moisture absorption occurs when the material is exposed to high humidity or water. Both the resin and the reinforcement may absorb water; water also may be soaked by capillary forces through voids in the materials and along the reinforcement. Epoxies of the FR-4 materials are not too susceptible, with absorption of only 0.15%. Teflon has very low absorption of 0.01%. Polyimides and cyanate esters, on the other side, suffer from high water absorption. Absorbed water can lead to significant degradation of key parameters; it impairs tracking resistance, breakdown voltage, and dielectric parameters. Relative dielectric constant of water is about 73, compared to about 4 for common circuit board materials. Absorbed moisture can also vaporize on heating, as during soldering , and cause cracking and delamination , [ 23 ] the same effect responsible for "popcorning" damage on wet packaging of electronic parts. Careful baking of the substrates may be required to dry them prior to soldering. [ 24 ] Often encountered materials: Less-often encountered materials: Copper thickness of PCBs can be specified directly or as the weight of copper per area (in ounce per square foot) which is easier to measure. One ounce per square foot is 1.344 mils or 34 micrometers thickness (0.001344 inches). Heavy copper is a layer exceeding three ounces of copper per ft 2 , or approximately 4.2 mils (105 μm) (0.0042 inches) thick. Heavy copper layers are used for high current or to help dissipate heat. [ citation needed ] On the common FR-4 substrates, 1 oz copper per ft 2 (35 μm) is the most common thickness; 2 oz (70 μm) and 0.5 oz (17.5 μm) thickness is often an option. Less common are 12 and 105 μm, 9 μm is sometimes available on some substrates. Flexible substrates typically have thinner metalization. 
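The weight-to-thickness figures quoted above follow from dividing the copper's mass per unit area by its density (about 8.96 g/cm³). The short sketch below reproduces the conversion; small differences in the last digit depend on the exact density and unit constants assumed.

```python
# Minimal sketch: convert copper "weight" in oz/ft^2 to foil thickness, which
# is where the roughly 34-35 um figure for 1 oz copper quoted above comes from.
OZ_TO_GRAM = 28.3495
FT2_TO_M2 = 0.09290304
COPPER_DENSITY = 8.96e6  # g/m^3 (8.96 g/cm^3)

def copper_thickness_um(oz_per_ft2):
    """Return foil thickness in micrometres for a given copper weight."""
    areal_mass = oz_per_ft2 * OZ_TO_GRAM / FT2_TO_M2  # g/m^2
    return areal_mass / COPPER_DENSITY * 1e6          # metres -> micrometres

for weight in (0.5, 1, 2, 3):
    print(f"{weight} oz/ft^2 -> {copper_thickness_um(weight):.1f} um")
# 0.5 -> 17.0 um, 1 -> 34.1 um, 2 -> 68.1 um, 3 -> 102.2 um
```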
Metal-core boards for high power devices commonly use thicker copper; 35 μm is usual but also 140 and 400 μm can be encountered. In the US, copper foil thickness is specified in units of ounces per square foot (oz/ft 2 ), commonly referred to simply as ounce . Common thicknesses are 1/2 oz/ft 2 (150 g/m 2 ), 1 oz/ft 2 (300 g/m 2 ), 2 oz/ft 2 (600 g/m 2 ), and 3 oz/ft 2 (900 g/m 2 ). These work out to thicknesses of 17.05 μm (0.67 thou ), 34.1 μm (1.34 thou ), 68.2 μm (2.68 thou), and 102.3 μm (4.02 thou), respectively. 1/2 oz/ft 2 foil is not widely used as a finished copper weight, but is used for outer layers when plating for through holes will increase the finished copper weight Some PCB manufacturers refer to 1 oz/ft 2 copper foil as having a thickness of 35 μm (may also be referred to as 35 μ, 35 micron , or 35 mic). Printed circuit board manufacturing involves manufacturing bare printed circuit boards and then populating them with electronic components. In large-scale board manufacturing, multiple PCBs are grouped on a single panel for efficient processing. After assembly, they are separated ( depaneled ). A minimal PCB for a single component, used for prototyping , is called a breakout board . The purpose of a breakout board is to "break out" the leads of a component on separate terminals so that manual connections to them can be made easily. Breakout boards are especially used for surface-mount components or any components with fine lead pitch. Advanced PCBs may contain components embedded in the substrate, such as capacitors and integrated circuits, to reduce the amount of space taken up by components on the surface of the PCB while improving electrical characteristics. [ 32 ] Multiwire is a patented technique of interconnection which uses machine-routed insulated wires embedded in a non-conducting matrix (often plastic resin). [ 33 ] It was used during the 1980s and 1990s. As of 2010, [update] Multiwire is still available through Hitachi. Since it was quite easy to stack interconnections (wires) inside the embedding matrix, the approach allowed designers to forget completely about the routing of wires (usually a time-consuming operation of PCB design): Anywhere the designer needs a connection, the machine will draw a wire in a straight line from one location/pin to another. This led to very short design times (no complex algorithms to use even for high density designs) as well as reduced crosstalk (which is worse when wires run parallel to each other—which almost never happens in Multiwire), though the cost is too high to compete with cheaper PCB technologies when large quantities are needed. Corrections can be made to a Multiwire board layout more easily than to a PCB layout. [ 34 ] Cordwood construction can save significant space and was often used with wire-ended components in applications where space was at a premium (such as fuzes , missile guidance, and telemetry systems) and in high-speed computers , where short traces were important. In cordwood construction, axial-leaded components were mounted between two parallel planes. The name comes from the way axial-lead components (capacitors, resistors, coils, and diodes) are stacked in parallel rows and columns, like a stack of firewood. The components were either soldered together with jumper wire or they were connected to other components by thin nickel ribbon welded at right angles onto the component leads. [ 35 ] To avoid shorting together different interconnection layers, thin insulating cards were placed between them. 
Perforations or holes in the cards allowed component leads to project through to the next interconnection layer. One disadvantage of this system was that special nickel -leaded components had to be used to allow reliable interconnecting welds to be made. Differential thermal expansion of the component could put pressure on the leads of the components and the PCB traces and cause mechanical damage (as was seen in several modules on the Apollo program). Additionally, components located in the interior are difficult to replace. Some versions of cordwood construction used soldered single-sided PCBs as the interconnection method (as pictured), allowing the use of normal-leaded components at the cost of being difficult to remove the boards or replace any component that is not at the edge. Before the advent of integrated circuits , this method allowed the highest possible component packing density; because of this, it was used by a number of computer vendors including Control Data Corporation . Printed circuit boards have been used as an alternative to their typical use for electronic and biomedical engineering thanks to the versatility of their layers, especially the copper layer. PCB layers have been used to fabricate sensors, such as capacitive pressure sensors and accelerometers, actuators such as microvalves and microheaters, as well as platforms of sensors and actuators for Lab-on-a-chip (LoC), for example to perform polymerase chain reaction (PCR), and fuel cells, to name a few. [ 36 ] Manufacturers may not support component-level repair of printed circuit boards because of the relatively low cost to replace compared with the time and cost of troubleshooting to a component level. In board-level repair, the technician identifies the board (PCA) on which the fault resides and replaces it. This shift is economically efficient from a manufacturer's point of view but is also materially wasteful, as a circuit board with hundreds of functional components may be discarded and replaced due to the failure of one minor and inexpensive part, such as a resistor or capacitor, and this practice is a significant contributor to the problem of e-waste . [ 37 ] In many countries (including all European Single Market participants, [ 38 ] the United Kingdom , [ 39 ] Turkey , and China ), legislation restricts the use of lead , cadmium , and mercury in electrical equipment. PCBs sold in such countries must therefore use lead-free manufacturing processes and lead-free solder, and attached components must themselves be compliant. [ 40 ] [ 41 ] Safety Standard UL 796 covers component safety requirements for printed wiring boards for use as components in devices or appliances. Testing analyzes characteristics such as flammability, maximum operating temperature , electrical tracking, heat deflection, and direct support of live electrical parts.
https://en.wikipedia.org/wiki/Printed_circuit_board
Printer cable refers to the cable that carries data between a computer and a printer . There are many different types of printer cables, such as parallel and USB cables. Parallel-port printers have been slowly phased out and are now difficult to find, the parallel port being considered an obsolete legacy port on most new computers. Those who have printers or scanners with only a parallel port may still be able to connect the devices using a parallel-to-USB adapter cable, or by installing a PCI parallel printer port card. This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Printer_cable
The Prior Analytics ( Ancient Greek : Ἀναλυτικὰ Πρότερα ; Latin : Analytica Priora ) is a work by Aristotle on reasoning , known as syllogistic , composed around 350 BCE. [ 1 ] Being one of the six extant Aristotelian writings on logic and scientific method, it is part of what later Peripatetics called the Organon . The term analytics comes from the Greek words analytos (ἀναλυτός, 'solvable') and analyo (ἀναλύω, 'to solve', literally 'to loose'). However, in Aristotle's corpus, there are distinguishable differences in the meaning of ἀναλύω and its cognates. There is also the possibility that Aristotle may have borrowed his use of the word "analysis" from his teacher Plato . On the other hand, the meaning that best fits the Analytics is one derived from the study of Geometry and this meaning is very close to what Aristotle calls episteme (επιστήμη), knowing the reasoned facts. Therefore, Analysis is the process of finding the reasoned facts. [ 2 ] In the Analytics then, Prior Analytics is the first theoretical part dealing with the science of deduction and the Posterior Analytics is the second demonstratively practical part. Prior Analytics gives an account of deductions in general narrowed down to three basic syllogisms while Posterior Analytics deals with demonstration. [ 3 ] Aristotle's Prior Analytics represents the first time in history when Logic is scientifically investigated. On those grounds alone, Aristotle could be considered the Father of Logic for as he himself says in Sophistical Refutations , "When it comes to this subject, it is not the case that part had been worked out before in advance and part had not; instead, nothing existed at all." [ 4 ] In the third century AD, Alexander of Aphrodisias 's commentary on the Prior Analytics is the oldest extant and one of the best of the ancient tradition and is available in the English language. [ 5 ] In the sixth century, Boethius composed the first known Latin translation of the Prior Analytics , however, this translation has not survived, and the Prior Analytics may have been unavailable in Western Europe until the eleventh century, when it was quoted from by Bernard of Utrecht . [ 6 ] The so-called Anonymus Aurelianensis III from the second half of the twelfth century is the first extant Latin commentary, or rather fragment of a commentary. [ 7 ] Modern work on Aristotle's logic builds on the tradition started in 1951 with the establishment by Jan Łukasiewicz of a revolutionary paradigm. His approach was replaced in the early 1970s in a series of papers by John Corcoran and Timothy Smiley [ 8 ] —which inform modern translations of Prior Analytics by Robin Smith in 1989 and Gisela Striker in 2009. [ 9 ] A problem in meaning arises in the study of Prior Analytics for the word syllogism as used by Aristotle in general does not carry the same narrow connotation as it does at present; Aristotle defines this term in a way that would apply to a wide range of valid arguments . In the Prior Analytics , Aristotle defines syllogism as "a deduction in a discourse in which, certain things being supposed, something different from the things supposed results of necessity because these things are so." In modern times, this definition has led to a debate as to how the word "syllogism" should be interpreted. 
At present, the term syllogism is used exclusively for the kind of argument closely resembling the "syllogisms" of traditional logic texts: two premises followed by a conclusion, each of which is a categorical sentence, containing altogether three terms, the two extremes which appear in the conclusion and one middle term which appears in both premises but not in the conclusion. Some scholars prefer to use the word "deduction" instead, as the meaning given by Aristotle to the Greek word syllogismos (συλλογισμός). The scholars Jan Lukasiewicz , Józef Maria Bocheński and Günther Patzig have sided with the Protasis - Apodosis dichotomy, while John Corcoran prefers to consider a syllogism as simply a deduction. [ 10 ]
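As a concrete illustration of the structure just described (two categorical premises sharing a middle term, and a conclusion containing only the two extremes), the classic first-figure form later known as Barbara can be set out schematically; the schema below is an illustrative reconstruction in modern notation, not a quotation from Aristotle.

```latex
% First-figure syllogism ("Barbara"): M is the middle term appearing in both
% premises but not in the conclusion; S and P are the two extremes.
\[
\frac{\text{All } M \text{ are } P \qquad \text{All } S \text{ are } M}
     {\therefore\ \text{All } S \text{ are } P}
\]
```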
https://en.wikipedia.org/wiki/Prior_Analytics
Priority is a principle in biological taxonomy by which a valid scientific name is established based on the oldest available name. It is a decisive rule in botanical and zoological nomenclature to recognise the first binomial name (also called binominal name in zoology) given to an organism as the correct and acceptable name. [ 1 ] [ 2 ] The purpose is to select one scientific name as a stable one out of two or more alternate names that often exist for a single species. [ 3 ] [ 4 ] The International Code of Nomenclature for algae, fungi, and plants (ICN) defines it as: "A right to precedence established by the date of valid publication of a legitimate name or of an earlier homonym, or by the date of designation of a type." [ 5 ] Basically, it is a scientific procedure to eliminate duplicate or multiple names for a species, for which Lucien Marcus Underwood called it "the principle of outlaw in nomenclature". [ 6 ] The principle of priority has not always been in place. When Carl Linnaeus laid the foundations of modern nomenclature, he offered no recognition of prior names. The botanists who followed him were just as willing to overturn Linnaeus's names. The first sign of recognition of priority came in 1813, when A. P. de Candolle laid out some principles of good nomenclatural practice. He favoured retaining prior names, but left wide scope for overturning poor prior names. [ 9 ] During the 19th century, the principle gradually came to be accepted by almost all botanists, but debate continued to rage over the conditions under which the principle might be ignored. Botanists on one side of the debate argued that priority should be universal and without exception. This would have meant a one-off major disruption as countless names in current usage were overturned in favour of archaic prior names. In 1891, Otto Kuntze , one of the most vocal proponents of this position, did just that, publishing over 30000 new combinations in his Revisio Generum Plantarum . [ 9 ] He then followed with further such publications in 1893, 1898 and 1903. [ 9 ] His efforts, however, were so disruptive that they appear to have benefited his opponents. By the 1900s, the need for a mechanism for the conservation of names was widely accepted, and details of such a mechanism were under discussion. The current system of "modified priority" was essentially put in place at the Cambridge Congress of 1930. [ 9 ] By the 19th century, the Linnaean binomial system was generally adopted by zoologists. In doing so, many zoologists tried to dig up the oldest possible scientific names, as a result of which proper and consistent names prevailing at the time, including those by the eminent zoologists like Louis Agassiz , Georges Cuvier , Charles Darwin , Thomas Huxley , Richard Owen , etc. came to be challenged. Scientific organisations tried to establish practical rules for changing names, but not a uniform system. [ 10 ] The first zoological code with priority rule was initially formulated in 1842 by a committee appointed by the British Association . The committee included Charles Darwin, John Stevens Henslow , Leonard Jenyns , William Ogilby , John O. Westwood , John Phillips , Ralph Richardson and Hugh Edwin Strickland . The first meeting was at Darwin's house in London. [ 11 ] The committee's report, written by Strickland, was implemented as the Rules of Zoological Nomenclature, [ 12 ] and popularly known as the Stricklandian Code . 
[ 13 ] It was not endorsed by all zoologists, as it allowed naming, renaming, and reclassifying with relative ease, as Science reported: "The worst feature of this abuse is not so much the bestowal of unknown names of well-known creatures as the transfer of one to another." [ 10 ] In zoology, the principle of priority is defined by the International Code of Zoological Nomenclature (4th edition, 1999 ) in its article 23: The valid name of a taxon is the oldest available name applied to it, unless that name has been invalidated or another name is given precedence by any provision of the Code or by any ruling of the Commission [the International Commission on Zoological Nomenclature ]. For this reason priority applies to the validity of synonyms [Art. 23.3], to the relative precedence of homonyms [Arts. 53-60], the correctness or otherwise of spellings [Arts. 24, 32], and to the validity of nomenclatural acts (such as acts taken under the Principle of the First Reviser [Art. 24.2] and the fixation of name-bearing types [Arts. 68, 69, 74.1.3, 75.4]). [ 14 ] There are exceptions: another name may be given precedence by any provision of the Code or by any ruling of the Commission. According to the ICZN preamble: Priority of publication is a basic principle of zoological nomenclature; however, under conditions prescribed in the Code its application may be modified to conserve a long-accepted name in its accustomed meaning. When stability of nomenclature is threatened in an individual case, the strict application of the Code may under specified conditions be suspended by the International Commission on Zoological Nomenclature. [ 15 ] In botany, the principle is defined by the Shenzhen Code of 2017 (the International Code of Nomenclature for algae, fungi, and plants ) in its article 11: Each family or lower-ranked taxon with a particular circumscription, position, and rank can bear only one correct name. Special exceptions are made for nine families and one subfamily for which alternative names are permitted (see Art. 18.5 and 19.8). The use of separate names is allowed for fossil-taxa that represent different parts, life-history stages, or preservational states of what may have been a single organismal taxon or even a single individual (Art. 1.2). [ 16 ] Priority has two aspects: it determines which of two or more synonyms is the correct name of a taxon, and which of two or more homonyms has precedence. Note that nomenclature for botany and zoology is independent, and the rules of priority regarding homonyms operate within each discipline but not between them. Thus, an animal and a plant can bear the same name, which is then called a hemihomonym . There are formal provisions for making exceptions to the principle of priority under each of the Codes. If an archaic or obscure prior name is discovered for an established taxon, the current name can be declared a nomen conservandum (botany) or conserved name (zoology), and so conserved against the prior name. Conservation may be avoided entirely in zoology, as these names may fall in the formal category of nomen oblitum . Similarly, if the current name for a taxon is found to have an archaic or obscure prior homonym , the current name can be declared a nomen protectum (zoology) or the older name suppressed, becoming a nomen rejiciendum (botany). In botany and horticulture, the principle of priority applies to names at the rank of family and below.
[ 17 ] [ 18 ] When moves are made to another genus or from one species to another, the "final epithet" of the name is combined with the new genus name, with any adjustments necessary for Latin grammar, for example: In zoology, the principle of priority applies to names between the rank of superfamily and subspecies (not to varieties, which are below the rank of subspecies). [ 25 ] Also unlike in botany, the authorship of new combinations is not tracked, and only the original authority is ever cited. Example:
https://en.wikipedia.org/wiki/Priority_(biology)
In real-time computing , the priority ceiling protocol is a synchronization protocol for shared resources to avoid unbounded priority inversion and mutual deadlock due to wrong nesting of critical sections . In this protocol each resource is assigned a priority ceiling, which is a priority equal to the highest priority of any task which may lock the resource. The protocol works by temporarily raising the priorities of tasks in certain situations, and it therefore requires a scheduler that supports dynamic priority scheduling . [ 1 ] There are two variants of the protocol: the Original Ceiling Priority Protocol ( OCPP ) and the Immediate Ceiling Priority Protocol ( ICPP ). The worst-case behaviour of the two ceiling schemes is identical from a scheduling viewpoint. Both variants work by temporarily raising the priorities of tasks. [ 2 ] In OCPP, a task X's priority is raised when a higher-priority task Y tries to acquire a resource that X has locked. X's priority is then raised to the highest priority of the tasks it is blocking, ensuring that task X quickly finishes its critical section and unlocks the resource. A task is only allowed to lock a resource if its dynamic priority is higher than the priority ceilings of all resources locked by other tasks. Otherwise the task becomes blocked, waiting for the resource. [ 2 ] In ICPP, a task's priority is immediately raised when it locks a resource. The task's priority is set to the priority ceiling of the resource, so no other task that may lock the resource can be scheduled while the resource is held. This ensures the OCPP property that "A task can only lock a resource if its dynamic priority is higher than the priority ceilings of all resources locked by other tasks". [ 2 ] ICPP is called "Ceiling Locking" in Ada , "Priority Protect Protocol" in POSIX and "Priority Ceiling Emulation" in RTSJ . [ 3 ] It is also known as the "Highest Locker's Priority Protocol" (HLP). [ 4 ]
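As a concrete illustration of ICPP, the sketch below raises a task's priority to the resource's ceiling for the duration of a lock and restores it on release. It simulates only the priority bookkeeping with hypothetical Task and Resource classes; it is not an implementation inside a real real-time scheduler.

```python
# Minimal sketch of the Immediate Ceiling Priority Protocol (ICPP) bookkeeping:
# while a task holds a resource it runs at the resource's priority ceiling
# (the highest priority of any task that may lock it). Illustrative only.
from contextlib import contextmanager

class Task:
    def __init__(self, name, base_priority):
        self.name = name
        self.base_priority = base_priority
        self.priority = base_priority  # current (dynamic) priority

class Resource:
    def __init__(self, name, may_lock):
        self.name = name
        # Priority ceiling = highest base priority among tasks that may lock it.
        self.ceiling = max(task.base_priority for task in may_lock)
        self.holder = None

@contextmanager
def icpp_lock(task, resource):
    """Hold `resource`, raising `task` to the ceiling for the duration."""
    assert resource.holder is None, "resource already held in this sketch"
    saved_priority = task.priority
    resource.holder = task
    task.priority = max(task.priority, resource.ceiling)  # immediate raise
    try:
        yield
    finally:
        task.priority = saved_priority  # restore on release
        resource.holder = None

# A low-priority task holding the resource runs at the ceiling, so no other
# task that may lock the same resource can preempt its critical section.
low = Task("low", base_priority=1)
high = Task("high", base_priority=10)
shared = Resource("shared buffer", may_lock=[low, high])

with icpp_lock(low, shared):
    print(low.priority)  # -> 10 (the ceiling) inside the critical section
print(low.priority)      # -> 1 again after the resource is released
```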
https://en.wikipedia.org/wiki/Priority_ceiling_protocol
In ecology , a priority effect refers to the impact that a particular species can have on community development as a result of its prior arrival at a site. [ 1 ] [ 2 ] [ 3 ] There are two basic types of priority effects: inhibitory and facilitative. An inhibitory priority effect occurs when a species that arrives first at a site negatively affects a species that arrives later by reducing the availability of space or resources. In contrast, a facilitative priority effect occurs when a species that arrives first at a site alters abiotic or biotic conditions in ways that positively affect a species that arrives later. [ 3 ] [ 4 ] Inhibitory priority effects have been documented more frequently than facilitative priority effects. [ citation needed ] Studies indicate that both abiotic (e.g., resource availability) and biotic (e.g., predation ) factors can affect the strength of priority effects. [ citation needed ] . Priority effects are a central and pervasive element of ecological community development that have significant implications for natural systems and ecological restoration efforts. [ 3 ] [ 5 ] [ citation needed ] Early in the 20th century, Frederic Clements and other plant ecologists suggested that ecological communities develop in a linear, directional manner towards a final, stable endpoint: the climax community . [ 3 ] Clements indicated that a site's climax community would reflect local climate . He conceptualized the climax community as a " superorganism " that followed a defined developmental sequence. [ 2 ] Early ecological succession theory maintained that the directional shifts from one stage of succession to the next were induced by the plants themselves. [ 1 ] In this sense, succession theory implicitly recognized priority effects; the prior arrival of certain species had important impacts on future community composition. At the same time, the climax concept implied that species shifts were predetermined. This implies that a given species would always appear at the same point during the development of the climax community and have a predictable impact on community development. This static view of priority effects remained essentially unchanged by the concept of patch dynamics , introduced by Alex Watt in 1947. [ 4 ] Watt conceived of plant communities as dynamic "mechanisms" that followed predetermined succession cycles . He viewed succession as a process driven by facilitation, in which each species made local conditions more suitable for another species. In 1926, Henry Gleason presented an alternative hypothesis in which plants were conceptualized as individuals rather than components of a superorganism. [ 5 ] This hypothesis suggested that the distribution of various species across the landscape reflected species-specific dispersal limitations and environmental requirements rather than predetermined associations among species. Gleason contested the idea of a predetermined climax community, recognizing that different colonizing species could produce alternative trajectories of community development. For example, initially identical ponds colonized by different species could develop through succession into very different communities. The Initial Floristic Composition model was put forward by Frank Egler to describe community development in abandoned agricultural fields . [ 6 ] According to this model, the set of species present in a field immediately after abandonment had strong influences on community development and final community composition. 
[ 7 ] In the 1970s, it was suggested that natural communities could be characterized by multiple or alternative stable states . [ 8 ] [ 9 ] [ 10 ] Multiple stable state models suggested that the same environment could support several combinations of species. [ 5 ] [ 6 ] Theorists argued that historical context could play a central role in determining which stable state would be present at any given time. Robert May explained, "If there is a unique stable state, historical accidents are unimportant; if there are many alternative locally stable states, historical accidents can be of overriding significance." [ 10 ] Assembly theory explains community development processes in the context of multiple stable states: it asks why a particular type of community developed when other stable community types are possible. In contrast to succession theory, assembly theory was developed largely by animal ecologists and explicitly incorporated historical context. [ 7 ] In 1975, Jared Diamond [ 11 ] developed quantitative "assembly rules" to predict avian community composition on an archipelago . This approach emphasizes historical contingency and multiple stable states. Although the idea of deterministic community assembly initially drew criticism, [ 12 ] the approach continued to gain support. [ 10 ] [ 13 ] In 1991, Drake used an assembly model to demonstrate that different community types result from different sequences of species invasions. [ 14 ] In this model, early invaders have major impacts on the invasion success of species that arrive later. Other modelling studies suggested that priority effects may be especially important when invasion frequency is low enough to allow species to become established before replacement, [ 15 ] or when other factors that could drive assembly (e.g., competition, abiotic stress) are relatively unimportant. [ 16 ] In a 1999 review, Belyea and Lancaster described three basic determinants of community assembly: dispersal constraints, environmental constraints, and internal dynamics. [ 17 ] They identified priority effects as a manifestation of the interaction between dispersal constraints and internal dynamics. Although early research focused on animals and aquatic systems, more recent [ when? ] studies have begun to examine terrestrial and plant-based priority effects. Most of the earliest empirical evidence for priority effects came from studies on aquatic animals. Sutherland (1974) found that final community composition varied depending on the initial order of larval recruitment in a community of small marine organisms ( sponges , tunicates , hydroids , and other species). [ 18 ] Shulman (1983) found strong priority effects among coral reef fish . [ 19 ] The study found that prior establishment by a territorial damselfish reduced establishment rates of other fish. The authors also identified cross-trophic priority effects; prior establishment by a predator fish reduced establishment rates of prey fishes . In the late 1980s, several studies examined priority effects in marine microcosms. Robinson and Dickerson (1987) found that priority effects were important in some cases, but suggested, "Being the first to invade a habitat does not guarantee success; there must be sufficient time for the early colonist to increase its population size for it to pre-empt further colonization." [ 20 ] Robinson and Edgemon (1988) later developed 54 communities of phytoplankton species by varying invasion order, rate, and timing. 
They found that although invasion order (priority effects) could explain a small fraction of the resulting variation in community composition, most of the variation was explained by changes in invasion rate and invasion timing. [ 21 ] These studies indicate that priority effects may not be the only or the most important historical factor affecting the trajectory of community development. In a striking example of cross-trophic priority effects, Hart (1992) found that priority effects explain the maintenance of two alternate stable states in stream ecosystems. While a macroalga is dominant in some patches, sessile grazers maintain a "lawn" of small microalgae in others. If the sessile grazers colonize a patch first, they exclude the macroalga, and vice versa. [ 22 ] In two of the most commonly cited empirical studies on priority effects, Alford and Wilbur documented inhibitory and facilitative priority effects among toad larvae in experimental ponds. [ 23 ] [ 24 ] They found that hatchlings of a toad species ( Bufo americanus ) exhibited higher growth and survivorship when introduced to a pond before those of a frog species ( Rana sphenocephala ). The frog larvae, however, did best when introduced after the toad larvae. Thus, prior establishment by the toad species facilitated the frog species, while prior establishment by the frog species inhibited the toad species. Studies on tree frogs have also documented both types of priority effects. [ 25 ] [ 26 ] Morin (1987) also observed that priority effects became less important in the presence of a predatory salamander . He hypothesized that predation mediated priority effects by reducing competition between frog species. [ 25 ] Studies on larval insects and frogs in water-filled tree holes and stumps found that abiotic factors such as space, resource availability, and toxin levels can also be important in mediating priority effects. [ 27 ] [ 28 ] Terrestrial studies on priority effects are rare, with most studies focusing on arthropods or grassland plant species. In a lab experiment, Shorrocks and Bingley (1994) showed that prior arrival increased survivorship for two species of fruit flies ; each fly species had inhibitory impacts on the other. [ 29 ] A 1996 field study on desert spiders by Ehmann and MacMahon showed that the presence of species from one spider guild reduced establishment of spiders from a different guild. [ 30 ] Palmer (2003) demonstrated that priority effects allowed a competitively subordinate ant species to avoid exclusion by a competitively dominant species. [ 31 ] If the competitively subordinate ants were able to colonize first, they altered their host tree’s morphology in ways that made it less suitable for other ant species. This study was especially important because it was able to identify a mechanism driving observed priority effects. A study on two species of introduced grasses in Hawaiian woodlands found that the species with inferior competitive abilities may be able to persist through priority effects. [ 32 ] At least three studies have come to similar conclusions about the coexistence of native and exotic grasses in California grassland ecosystems. [ 33 ] [ 34 ] [ 35 ] If given time to establish, native species can successfully inhibit the establishment of exotics . The authors of the various studies attributed the prevalence of exotic grasses in California to the low seed production and relatively poor dispersal ability of native species. 
Although many studies have documented priority effects, the persistence of these effects over time often remains unclear. Young (2001) indicated that both convergence (in which "communities proceed towards a pre-disturbance state regardless of historical conditions") and divergence (in which historical factors continue to affect the long-term trajectory of community development) are present in nature. [ 7 ] Among studies of priority effects, both trends seem to have been observed. [ 36 ] [ 22 ] Fukami (2005) argued that a community could be both convergent and divergent at different levels of community organization. The authors studied experimentally assembled plant communities and found that while the identities of individual species remained unique across different community replicates, species traits generally became more similar. [ 37 ] Some studies indicate that priority effects can occur across guilds [ 30 ] or trophic levels. [ 22 ] Such priority effects could have dramatic impacts on community composition and food web structure. Even intra-guild priority effects could have important consequences at multiple trophic levels if the affected species are associated with unique predator or prey species. Consider, for example, a plant species that is eaten by a host-specific herbivore . Priority effects that influence the ability of the plant species to establish would indirectly affect the establishment success of the associated herbivore. Theoretical models have described cyclical assembly dynamics in which species associated with different suites of predators can repeatedly replace one another. [ 38 ] [ 39 ] In situations where two species are introduced at the same time, spatial aggregation of a species' propagules could cause priority effects by initially reducing interspecific competition . [ 40 ] Aggregation during recruitment and establishment could allow inferior competitors to coexist with or even displace competitive dominants over the long-term. Several modelling efforts have begun to examine the implications of spatial priority effects for species coexistence. [ 29 ] [ 41 ] [ 42 ] [ 43 ] A few studies have begun to explore the mechanisms driving observed priority effects. [ 31 ] Moreover, although past studies focused on a small subset of species, recent papers indicate that priority effects may be important for a wide range of organisms, including fungi, [ 44 ] [ 45 ] birds, [ 46 ] lizards, [ 47 ] and salamanders. [ 48 ] Priority effects have important implications for ecological restoration . In many systems, information about priority effects can help practitioners identify cost-effective strategies for improving the survival and persistence of certain species, especially species of inferior competitive ability. [ 36 ] [ 49 ] [ 50 ] For example, in a study on the restoration of native Californian grasses and forbs , Lulow (2004) found that forbs could not establish in plots where bunchgrasses had been previously planted. When bunchgrasses were added to plots where forbs had already been growing for a year, forbs were able to coexist with grasses for at least 3–4 years. [ 36 ]
https://en.wikipedia.org/wiki/Priority_effect
The priority heuristic is a simple, lexicographic decision strategy for choosing between options such as gambles. In psychology, the priority heuristic correctly predicts classic violations of expected utility theory such as the Allais paradox , the four-fold pattern, the certainty effect , the possibility effect, or intransitivities. [ 1 ] The heuristic maps onto Rubinstein ’s three-step model , according to which people first check dominance and stop if it is present; otherwise they check for dissimilarity. [ 2 ] To illustrate Rubinstein’s model, consider the following choice problem:

I: 50% chance to win 2,000
II: 52% chance to win 1,000

Dominance is absent, and while the chances are similar, the monetary outcomes are not. Rubinstein’s model predicts that people check for dissimilarity and consequently choose Gamble I. Unfortunately, dissimilarity checks are often not decisive, and Rubinstein suggested that people proceed to a third step that he left unspecified. The priority heuristic elaborates on Rubinstein’s framework by specifying this third step. For illustrative purposes consider a choice between two simple gambles of the type “a chance c of winning monetary amount x ; a chance (100 − c ) of winning amount y .” A choice between two such gambles contains four reasons for choosing: the maximum gain, the minimum gain, and their respective chances; because the chances are complementary, three reasons remain: the minimum gain, the chance of the minimum gain, and the maximum gain. For choices between gambles in which all outcomes are positive or 0, the priority heuristic consists of the following three steps (for all other choices see Brandstätter et al. 2006 [ 1 ] ): Priority rule: Go through reasons in the order of minimum gain, chance of minimum gain, and maximum gain. Stopping rule: Stop examination if the minimum gains differ by 1/10 (or more) of the maximum gain; otherwise, stop examination if the chances differ by 10% (or more). Decision rule: Choose the gamble with the more attractive gain (chance). The term “attractive” refers to the gamble with the higher (minimum or maximum) gain and to the lower chance of the minimum gain. Consider the following two choice problems, which were developed to support prospect theory , not the priority heuristic. [ 3 ]

Problem 1
A: 80% chance to win 4,000
B: 100% chance to win 3,000

Most people chose B (80%). The priority heuristic starts by comparing the minimum gains of Gambles A (0) and B (3,000). The difference is 3,000, which is larger than 400 (10% of the maximum gain), so examination is stopped, and the heuristic predicts that people prefer the sure gain B, which is in fact the majority choice.

Problem 2
C: 45% chance to win 6,000
D: 90% chance to win 3,000

Most people (86%) chose Gamble D. The priority heuristic starts by comparing the minimum gains (0 and 0). Because they do not differ, the probabilities (.45 and .90, or their logical complements .55 and .10) are compared. This difference is larger than 10%, so examination stops and people are correctly predicted to choose D because of its higher probability of winning. The priority heuristic correctly predicted the majority choice in all (one-stage) gambles in Kahneman and Tversky (1979). Across four different data sets with a total of 260 problems, the heuristic predicted the majority choice better than (a) cumulative prospect theory , (b) two other modifications of expected utility theory , and (c) ten well-known heuristics (such as minimax or equal-weight) did.
[ 1 ] However, the priority heuristic fails to predict many simple decisions (ones that are typically not tested in experiments) [ 4 ] and has no free parameters (which means that it cannot explain the heterogeneity of decisions between subjects); these points triggered criticism [ 5 ] [ 6 ] and countercriticism. [ 7 ] [ 8 ] [ 9 ] In production settings, priority heuristics also help optimize the execution of jobs; see scheduling .
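To make the three steps concrete, here is a minimal sketch in Python. It is an illustrative reading of the rules stated above for non-negative two-outcome gambles, not the authors' published implementation (the refinements of the original paper, such as rounding the aspiration level, are omitted):

```python
from dataclasses import dataclass

@dataclass
class Gamble:
    """A simple two-outcome gamble: win `x` with chance `p`, otherwise win `y`."""
    x: float          # one outcome
    p: float          # chance of x, between 0 and 1
    y: float = 0.0    # the other outcome, with chance 1 - p

    @property
    def outcomes(self):
        # Drop outcomes that cannot occur, so a sure gamble has a single outcome.
        pairs = [(self.x, self.p), (self.y, 1.0 - self.p)]
        return [(v, c) for v, c in pairs if c > 0]

    @property
    def min_gain(self):
        return min(v for v, _ in self.outcomes)

    @property
    def max_gain(self):
        return max(v for v, _ in self.outcomes)

    @property
    def p_min(self):
        """Chance of receiving the minimum gain."""
        m = self.min_gain
        return sum(c for v, c in self.outcomes if v == m)


def priority_heuristic(a: Gamble, b: Gamble) -> Gamble:
    """Choose between two non-negative gambles following the three steps above."""
    # Aspiration level: 1/10 of the largest gain in the choice problem.
    aspiration = 0.10 * max(a.max_gain, b.max_gain)

    # Step 1: compare minimum gains (higher is better).
    if abs(a.min_gain - b.min_gain) >= aspiration:
        return a if a.min_gain > b.min_gain else b

    # Step 2: compare chances of the minimum gains (lower is better).
    if abs(a.p_min - b.p_min) >= 0.10:
        return a if a.p_min < b.p_min else b

    # Step 3: compare maximum gains (higher is better).
    return a if a.max_gain > b.max_gain else b


# Problem 1: A = 80% chance of 4,000; B = 3,000 for sure  -> predicts B
A = Gamble(x=4000, p=0.80)
B = Gamble(x=3000, p=1.00)
print(priority_heuristic(A, B) is B)   # True: minimum gains differ by 3,000 >= 400

# Problem 2: C = 45% chance of 6,000; D = 90% chance of 3,000 -> predicts D
C = Gamble(x=6000, p=0.45)
D = Gamble(x=3000, p=0.90)
print(priority_heuristic(C, D) is D)   # True: chances of the minimum differ by 0.45 >= 0.10
```

Run on the two problems above, the sketch reproduces the majority choices (B and D) exactly as described in the text.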
https://en.wikipedia.org/wiki/Priority_heuristic
In real-time computing , priority inheritance is a method for eliminating unbounded priority inversion . Using this programming method, a process scheduling algorithm increases the priority of a process (A) to the maximum priority of any other process waiting for any resource on which A has a resource lock (if it is higher than the original priority of A). The basic idea of the priority inheritance protocol is that when a job blocks one or more high-priority jobs, it ignores its original priority assignment and executes its critical section at an elevated priority level. After executing its critical section and releasing its locks, the process returns to its original priority level. Consider three jobs: H (high priority), M (medium priority), and L (low priority). Suppose that both H and L require some shared resource. If L acquires this shared resource (entering a critical section), and H subsequently requires it, H will block until L releases it (leaving its critical section). Without priority inheritance, process M could preempt process L during the critical section and delay its completion, in effect causing the medium-priority process M to indirectly preempt the high-priority process H. This is a priority inversion bug. With priority inheritance, L will execute its critical section at H's high priority whenever H is blocked on the shared resource. As a result, M will be unable to preempt L and will be blocked. That is, the medium-priority job M must wait for the critical section of the lower-priority job L to be executed, because L has inherited H's priority. When L exits its critical section, it regains its original (low) priority and awakens H (which was blocked by L). H, having high priority, preempts L and runs to completion. This enables M and L to resume in succession and run to completion without priority inversion.
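A minimal sketch of this idea in Python follows. It is a toy simulation for illustration only, not a real scheduler or a production locking API; the Task and PriorityInheritanceLock names are hypothetical, and transitive inheritance across chains of locks is not modelled:

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority   # original (static) priority
        self.priority = priority        # effective priority, may be boosted


class PriorityInheritanceLock:
    """Toy lock: while a task holds the lock, its effective priority is raised
    to the highest priority of any task blocked on the lock, and it is
    restored to the base priority when the lock is released."""

    def __init__(self):
        self.owner = None
        self.waiters = []

    def acquire(self, task):
        if self.owner is None:
            self.owner = task
            return True                 # lock obtained immediately
        # Task blocks; the current owner inherits its priority if higher.
        self.waiters.append(task)
        self.owner.priority = max(self.owner.priority, task.priority)
        return False

    def release(self):
        finished = self.owner
        finished.priority = finished.base_priority   # drop back to original priority
        # Hand the lock to the highest-priority waiter, if any.
        self.owner = max(self.waiters, key=lambda t: t.priority, default=None)
        if self.owner is not None:
            self.waiters.remove(self.owner)
        return finished


# The classic three-job scenario: L holds the resource, then H blocks on it.
H, M, L = Task("H", 3), Task("M", 2), Task("L", 1)
lock = PriorityInheritanceLock()
lock.acquire(L)                 # L enters its critical section
lock.acquire(H)                 # H blocks; L now runs at priority 3, so M cannot preempt it
print(L.priority)               # 3
lock.release()                  # L leaves the critical section and regains priority 1
print(L.priority, lock.owner.name)   # 1 H
```

The point of the sketch is the boost-and-restore behaviour: while H is blocked, L's effective priority equals H's, so the medium-priority M cannot run in between.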
https://en.wikipedia.org/wiki/Priority_inheritance
Prismane or 'Ladenburg benzene' is a polycyclic hydrocarbon with the formula C 6 H 6 . It is an isomer of benzene , specifically a valence isomer . Prismane is far less stable than benzene. The carbon (and hydrogen) atoms of the prismane molecule are arranged in the shape of a six-atom triangular prism —this compound is the parent and simplest member of the prismanes class of molecules. Albert Ladenburg proposed this structure for the compound now known as benzene . [ 2 ] The compound was not synthesized until 1973. [ 3 ] In the mid-19th century, investigators proposed several possible structures for benzene which were consistent with its empirical formula, C 6 H 6 , which had been determined by combustion analysis . The first, which was proposed by Friedrich August Kekulé von Stradonitz in 1865, later proved to be closest to the true structure of benzene. This structure inspired several others to draw structures that were consistent with benzene's empirical formula; for example, Albert Ladenburg proposed prismane, James Dewar proposed Dewar benzene , and Koerner and Claus proposed Claus' benzene . Some of these structures would be synthesized in the following years. Prismane, like the other proposed structures for benzene, is still often cited in the literature, because it is part of the historical struggle toward understanding the mesomeric structures and resonance of benzene. Some computational chemists still research the differences between the possible isomers of C 6 H 6 . [ 4 ] Prismane is a colourless liquid at room temperature. The deviation of the carbon–carbon bond angle from 109° to 60° in a triangle leads to a high ring strain , reminiscent of that of cyclopropane but greater. The compound is explosive, which is unusual for a hydrocarbon. Due to this ring strain, the bonds have a low bond energy and break at a low activation energy , which makes synthesis of the molecule difficult; Woodward and Hoffmann noted that prismane's thermal rearrangement to benzene is symmetry-forbidden , comparing it to "an angry tiger unable to break out of a paper cage." On account of its strain energy and the aromatic stabilization of benzene, the molecule is estimated to be 90 kcal/mol less stable than benzene, but the activation energy of this highly exothermic transformation is a surprisingly high 33 kcal/mol, making it persistent at room temperature. [ 5 ] The substituted derivative hexamethylprismane (in which all six hydrogens are substituted by methyl groups) has a higher stability, and was synthesized by rearrangement reactions in 1966. [ 6 ] The synthesis starts from benzvalene ( 1 ) and 4-phenyltriazolidone ( 2 ), which is a strong dienophile . The reaction is a stepwise Diels–Alder-like reaction , forming a carbocation as an intermediate. The adduct ( 3 ) is then hydrolyzed under basic conditions and afterwards transformed into a copper(II) chloride derivative with acidic copper(II) chloride. Neutralized with a strong base, the azo compound ( 5 ) could be crystallized with 65% yield. The last step is a photolysis of the azo compound. This photolysis leads to a biradical which forms prismane ( 6 ) and nitrogen with a yield of less than 10%. The compound was isolated by preparative gas chromatography .
https://en.wikipedia.org/wiki/Prismane
The prismanes are a class of hydrocarbon compounds consisting of prism -like polyhedra of various numbers of sides on the polygonal base. Chemically, each is a series of fused cyclobutane rings (a ladderane , with all- cis /all- syn geometry) that wraps around to join its ends and form a band, with cycloalkane edges. Their chemical formula is (C 2 H 2 ) n , where n is the number of cyclobutane sides (the size of the cycloalkane base), and that number also forms the basis for a system of nomenclature within this class. The first few chemicals in this class are triprismane ([3]prismane), tetraprismane ([4]prismane, also known as cubane ), and pentaprismane ([5]prismane). Triprismane, tetraprismane, and pentaprismane have been synthesized and studied experimentally, and many higher members of the series have been studied using computer models . The first several members do indeed have the geometry of a regular prism, with flat n -gon bases. As n becomes increasingly large, however, modelling experiments find that highly symmetric geometry is no longer stable, and the molecule distorts into less-symmetric forms. One series of modelling experiments found that starting with [11]prismane, the regular-prism form is not a stable geometry. For example, the structure of [12]prismane would have the cyclobutane chain twisted, with the dodecagonal bases non-planar and non-parallel. [ 2 ] [ 3 ] [ 4 ] For large base sizes, some of the cyclobutanes can be fused anti to each other, giving a non-convex polygon base. These are geometric isomers of the prismanes. Two isomers of [12]prismane that have been studied computationally are named helvetane and israelane, based on the star-like shapes of the rings that form their bases. [ 5 ] This was explored computationally after originally being proposed as an April Fools' joke. Their names refer to the shapes found on the flags of Switzerland and Israel , respectively. The polyprismanes consist of multiple prismanes stacked base-to-base. [ 6 ] The carbons at each intermediate level—the n -gon bases where the prismanes fuse to each other—have no hydrogen atoms attached to them. The asteranes contain a methylene group bridge on each edge between the two n -gon bases. Each side is thus a cyclohexane rather than a cyclobutane.
https://en.wikipedia.org/wiki/Prismanes
A prismatic compass is a navigation and surveying instrument which is used extensively to determine the bearings of traverse lines and the included angles between them, as well as waypoints (the endpoints of a course) and directions. [ 1 ] Compass surveying is a type of surveying in which the directions of survey lines are determined with a magnetic compass, and the lengths of the survey lines are measured with a tape, a chain, or a laser range finder . [ 2 ] The compass is generally used to run a traverse line. The compass measures the bearings of lines with respect to the magnetic needle, i.e., magnetic north. The included angles can then be calculated using suitable formulas for clockwise and anti-clockwise traverses respectively. For each survey line in the traverse, surveyors take two bearings, the fore bearing and the back bearing, which should differ by exactly 180 ° if local attraction is negligible. The instrument is called a prismatic compass because it incorporates a prism, which is used for taking observations more accurately. [ 3 ] The least count is the minimum value that an instrument can read, which is 30 minutes in the case of a prismatic compass. This means the compass can read only observations that are multiples of 30 minutes, for example 5 ° 30 ' , 16 ° 00 ' , or 35 ° 30 ' . [ 4 ] The compass gives bearings in the whole circle bearing system, which measures the angle that a survey line makes with magnetic north in the clockwise direction. The whole circle bearing system, also known as the azimuthal system, varies from 0 degrees to 360 degrees in the clockwise direction. [ 5 ] The included angles can be calculated by the formulas F − P ± 180° for an anti-clockwise traverse and P − F ± 180° for a clockwise traverse, where 'F' is the fore bearing of the forward line in the direction of the survey work and 'P' is the fore bearing of the previous line. [ 4 ]
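The bearing arithmetic above is easy to express in code. The following short sketch in Python is illustrative only (the function names and example bearings are hypothetical; bearings are assumed to be decimal degrees on the whole circle system, where adding or subtracting 180° is equivalent modulo 360°):

```python
def normalize(angle):
    """Reduce a whole-circle bearing to the range [0, 360)."""
    return angle % 360.0

def back_bearing(fore):
    """The back bearing differs from the fore bearing by exactly 180 degrees
    when local attraction is negligible."""
    return normalize(fore + 180.0)

def included_angle(prev_fore, fwd_fore, clockwise=True):
    """Included angle at a station from the fore bearing of the previous line (P)
    and of the forward line (F): P - F +/- 180 for a clockwise traverse,
    F - P +/- 180 for an anti-clockwise traverse."""
    diff = (prev_fore - fwd_fore) if clockwise else (fwd_fore - prev_fore)
    return normalize(diff + 180.0)

# Example with made-up bearings (multiples of the 30-minute least count):
print(back_bearing(35.5))                                   # 215.5, i.e. 215 deg 30 min
print(included_angle(prev_fore=120.0, fwd_fore=210.5))      # 89.5, i.e. 89 deg 30 min
```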
https://en.wikipedia.org/wiki/Prismatic_compass
In solid geometry , a prismatic surface is a polyhedral surface generated by all the lines that are parallel to a given line and that intersect a polygonal chain in a plane that is not parallel to the given line. [ 1 ] The polygonal chain is the directrix of the surface; the parallel lines are its generators (or elements ). If the directrix is a convex polygon , then the surface is a closed prismatic surface . The part of a closed prismatic surface between two parallel copies of the directrix is a prism . [ 2 ]
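In symbols, a common way to write this definition (a standard parametrization offered as an illustration, not taken from the cited sources): if $D(t)$ traces the directrix and $\mathbf{v}$ is a direction vector of the given line, the prismatic surface is the set of points

$$ X(s, t) = D(t) + s\,\mathbf{v}, \qquad s \in \mathbb{R}, $$

and each generator is the line obtained by fixing $t$ and letting $s$ vary.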
https://en.wikipedia.org/wiki/Prismatic_surface
Privacy by design is an approach to systems engineering initially developed by Ann Cavoukian and formalized in a 1995 joint report on privacy-enhancing technologies by a team from the Information and Privacy Commissioner of Ontario (Canada), the Dutch Data Protection Authority , and the Netherlands Organisation for Applied Scientific Research. [ 1 ] [ 2 ] The privacy by design framework was published in 2009 [ 3 ] and adopted by the International Assembly of Privacy Commissioners and Data Protection Authorities in 2010. [ 4 ] Privacy by design calls for privacy to be taken into account throughout the whole engineering process. The concept is an example of value sensitive design , i.e., taking human values into account in a well-defined manner throughout the process. [ 5 ] [ 6 ] Cavoukian's approach to privacy has been criticized as being vague, [ 7 ] challenging to enforce, [ 8 ] difficult to apply to certain disciplines, [ 9 ] [ 10 ] challenging to scale up to networked infrastructures, [ 10 ] as well as prioritizing corporate interests over consumers' interests [ 7 ] and placing insufficient emphasis on minimizing data collection. [ 9 ] Recent developments in computer science and data engineering, such as support for encoding privacy in data [ 11 ] and the availability and quality of privacy-enhancing technologies (PETs), partly offset those critiques and help to make the principles feasible in real-world settings. The European GDPR regulation incorporates privacy by design. [ 12 ] The privacy by design framework was developed by Ann Cavoukian , Information and Privacy Commissioner of Ontario , following her joint work with the Dutch Data Protection Authority and the Netherlands Organisation for Applied Scientific Research in 1995. [ 1 ] [ 12 ] In 2009, the Information and Privacy Commissioner of Ontario co-hosted an event, Privacy by Design: The Definitive Workshop , with the Israeli Law, Information and Technology Authority at the 31st International Conference of Data Protection and Privacy Commissioners (2009). [ 13 ] [ 14 ] In 2010 the framework achieved international acceptance when the International Assembly of Privacy Commissioners and Data Protection Authorities unanimously passed a resolution on privacy by design [ 15 ] recognising it as an international standard at their annual conference. [ 14 ] [ 16 ] [ 17 ] [ 4 ] Among other commitments, the commissioners resolved to promote privacy by design as widely as possible and foster the incorporation of the principle into policy and legislation. [ 4 ] Privacy by design is based on seven "foundational principles": [ 3 ] [ 18 ] [ 19 ] [ 20 ] proactive not reactive (preventative not remedial); privacy as the default setting; privacy embedded into design; full functionality (positive-sum, not zero-sum); end-to-end security (full lifecycle protection); visibility and transparency; and respect for user privacy (keeping it user-centric). The principles have been cited in over five hundred articles [ 21 ] referring to the Privacy by Design in Law, Policy and Practice white paper by Ann Cavoukian . [ 22 ] The privacy by design approach is characterized by proactive rather than reactive measures. It anticipates and prevents privacy-invasive events before they happen. Privacy by design does not wait for privacy risks to materialize, nor does it offer remedies for resolving privacy infractions once they have occurred — it aims to prevent them from occurring. In short, privacy by design comes before-the-fact, not after. [ 18 ] [ 19 ] [ 20 ] Privacy by design seeks to deliver the maximum degree of privacy by ensuring that personal data are automatically protected in any given IT system or business practice. If an individual does nothing, their privacy still remains intact.
No action is required on the part of the individual to protect their privacy — it is built into the system, by default. [ 18 ] [ 19 ] [ 20 ] Privacy by design is embedded into the design and architecture of IT systems as well as business practices. It is not bolted on as an add-on, after the fact. The result is that privacy becomes an essential component of the core functionality being delivered. Privacy is integral to the system without diminishing functionality. [ 18 ] [ 19 ] [ 20 ] Privacy by design seeks to accommodate all legitimate interests and objectives in a positive-sum “win-win” manner, not through a dated, zero-sum approach, where unnecessary trade-offs are made. Privacy by design avoids the pretense of false dichotomies, such as privacy versus security, demonstrating that it is possible to have both. [ 18 ] [ 19 ] [ 20 ] Privacy by design, having been embedded into the system prior to the first element of information being collected, extends securely throughout the entire lifecycle of the data involved — strong security measures are essential to privacy, from start to finish. This ensures that all data are securely retained, and then securely destroyed at the end of the process, in a timely fashion. Thus, privacy by design ensures cradle-to-grave, secure lifecycle management of information, end-to-end. [ 18 ] [ 19 ] [ 20 ] Privacy by design seeks to assure all stakeholders that whatever business practice or technology involved is in fact operating according to the stated promises and objectives, subject to independent verification. The component parts and operations remain visible and transparent, to users and providers alike. Remember to trust but verify. [ 18 ] [ 19 ] [ 20 ] Above all, privacy by design requires architects and operators to keep the interests of the individual uppermost by offering such measures as strong privacy defaults, appropriate notice, and empowering user-friendly options. Keep it user-centric. [ 18 ] [ 19 ] [ 20 ] The International Organization for Standardization (ISO) approved the Committee on Consumer Policy (COPOLCO) proposal for a new ISO standard: Consumer Protection: Privacy by Design for Consumer Goods and Services (ISO/PC317). [ 23 ] The standard will aim to specify the design process to provide consumer goods and services that meet consumers’ domestic processing privacy needs as well as the personal privacy requirements of data protection . The standard has the UK as secretariat with thirteen participating members [ 24 ] and twenty observing members. [ 24 ] The Standards Council of Canada (SCC) is one of the participating members and has established a mirror Canadian committee to ISO/PC317. [ 25 ] The OASIS Privacy by Design Documentation for Software Engineers (PbD-SE) [ 26 ] Technical Committee provides a specification to operationalize privacy by design in the context of software engineering. Privacy by design, like security by design, is a normal part of the software development process and a risk reduction strategy for software engineers. The PbD-SE specification translates the PbD principles to conformance requirements within software engineering tasks and helps software development teams to produce artifacts as evidence of PbD principle adherence. 
Following the specification facilitates the documentation of privacy requirements from software conception to retirement, thereby providing a plan around adherence to privacy by design principles, and other guidance to privacy best practices, such as NIST's 800-53 Appendix J (NIST SP 800–53) and the Fair Information Practice Principles (FIPPs) (PMRM-1.0). [ 26 ] Privacy by design originated from privacy-enhancing technologies (PETs) in a joint 1995 report by Ann Cavoukian and John Borking. [ 1 ] In 2007 the European Commission provided a memo on PETs. [ 27 ] In 2008 the British Information Commissioner's Office commissioned a report titled Privacy by Design – An Overview of Privacy Enhancing Technologies . [ 28 ] There are many facets to privacy by design. There is the technical side like software and systems engineering, [ 29 ] administrative elements (e.g. legal, policy, procedural), other organizational controls, and operating contexts. Privacy by design evolved from early efforts to express fair information practice principles directly into the design and operation of information and communications technologies. [ 30 ] In his publication Privacy by Design: Delivering the Promises [ 2 ] Peter Hustinx acknowledges the key role played by Ann Cavoukian and John Borking, then Deputy Privacy Commissioners, in the joint 1995 publication Privacy-Enhancing Technologies: The Path to Anonymity . [ 1 ] This 1995 report focussed on exploring technologies that permit transactions to be conducted anonymously. Privacy-enhancing technologies allow online users to protect the privacy of their Personally Identifiable Information (PII) provided to and handled by services or applications. Privacy by design evolved to consider the broader systems and processes in which PETs were embedded and operated. The U.S. Center for Democracy & Technology (CDT) in The Role of Privacy by Design in Protecting Consumer Privacy [ 31 ] distinguishes PET from privacy by design noting that “PETs are most useful for users who already understand online privacy risks. They are essential user empowerment tools, but they form only a single piece of a broader framework that should be considered when discussing how technology can be used in the service of protecting privacy.” [ 31 ] Germany released a statute (§ 3 Sec. 4 Teledienstedatenschutzgesetz [Teleservices Data Protection Act]) back in July 1997. [ 32 ] The new EU General Data Protection Regulation (GDPR) includes ‘data protection by design’ and ‘data protection by default’, [ 33 ] [ 34 ] [ 12 ] the second foundational principle of privacy by design. Canada's Privacy Commissioner included privacy by design in its report on Privacy, Trust and Innovation – Building Canada’s Digital Advantage . [ 35 ] [ 36 ] In 2012, U.S. Federal Trade Commission (FTC) recognized privacy by design as one of its three recommended practices for protecting online privacy in its report entitled Protecting Consumer Privacy in an Era of Rapid Change , [ 37 ] and the FTC included privacy by design as one of the key pillars in its Final Commissioner Report on Protecting Consumer Privacy . [ 38 ] In Australia, the Commissioner for Privacy and Data Protection for the State of Victoria (CPDP) has formally adopted privacy by design as a core policy to underpin information privacy management in the Victorian public sector. [ 39 ] The UK Information Commissioner's Office website highlights privacy by design [ 40 ] and data protection by design and default. 
[ 41 ] In October 2014, the Mauritius Declaration on the Internet of Things was made at the 36th International Conference of Data Protection and Privacy Commissioners and included privacy by design and default. [ 42 ] The Privacy Commissioner for Personal Data , Hong Kong held an educational conference on the importance of privacy by design. [ 43 ] [ 44 ] In the private sector, Sidewalk Toronto commits to privacy by design principles; [ 45 ] Brendon Lynch, Chief Privacy Officer at Microsoft , wrote an article called Privacy by Design at Microsoft ; [ 46 ] whilst Deloitte relates certifiably trustworthy to privacy by design. [ 47 ] The privacy by design framework attracted academic debate, particularly following the 2010 International Data Commissioners resolution that provided criticism of privacy by design with suggestions by legal and engineering experts to better understand how to apply the framework into various contexts. [ 7 ] [ 9 ] [ 8 ] Privacy by design has been critiqued as "vague" [ 7 ] and leaving "many open questions about their application when engineering systems." Suggestions have been made to instead start with and focus on minimizing data, which can be done through security engineering. [ 9 ] In 2007, researchers at K.U. Leuven published Engineering Privacy by Design noting that “The design and implementation of privacy requirements in systems is a difficult problem and requires translation of complex social, legal and ethical concerns into systems requirements”. The principles of privacy by design "remain vague and leave many open questions about their application when engineering systems". The authors argue that "starting from data minimization is a necessary and foundational first step to engineer systems in line with the principles of privacy by design". The objective of their paper is to provide an "initial inquiry into the practice of privacy by design from an engineering perspective in order to contribute to the closing of the gap between policymakers’ and engineers’ understanding of privacy by design." [ 9 ] Extended peer consultations performed 10 years later in an EU project however confirmed persistent difficulties in translating legal principles into engineering requirements. This is partly a more structural problem due to the fact that legal principles are abstract, open-ended with different possible interpretations and exceptions, whereas engineering practices require unambiguous meanings and formal definitions of design concepts. [ 10 ] In 2011, the Danish National It and Telecom Agency published a discussion paper in which they argued that privacy by design is a key goal for creating digital security models, by extending the concept to "Security by Design". The objective is to balance anonymity and surveillance by eliminating identification as much as possible. [ 48 ] Another criticism is that current definitions of privacy by design do not address the methodological aspect of systems engineering, such as using decent system engineering methods, e.g. those which cover the complete system and data life cycle. [ 7 ] This problem is further exacerbated in the move to networked digital infrastructures initiatives such as the smart city or the Internet of Things . Whereas privacy by design has mainly been focused on the responsibilities of singular organisations for a certain technology, these initiatives often require the interoperability of many different technologies operated by different organisations. 
This requires a shift from organisational to infrastructural design. [ 10 ] The concept of privacy by design also does not focus on the role of the actual data holder but on that of the system designer. This role is not known in privacy law, so the concept of privacy by design is not based on law. This, in turn, undermines the trust of data subjects, data holders and policy-makers. [ 7 ] Questions have been raised from science and technology studies of whether privacy by design will change the meaning and practice of rights through implementation in technologies, organizations, standards and infrastructures. [ 49 ] From a civil society perspective, some have raised the possibility that a bad use of these design-based approaches can even lead to the danger of bluewashing . This refers to the minimal, instrumental use by organizations of privacy design without adequate checks, in order to portray themselves as more privacy-friendly than is factually justified. [ 10 ] It has also been pointed out that privacy by design is similar to voluntary compliance schemes in industries impacting the environment, and thus lacks the teeth necessary to be effective, and may differ per company. In addition, the evolutionary approach currently taken to the development of the concept will come at the cost of privacy infringements, because evolution implies also letting unfit phenotypes (privacy-invading products) live until they are proven unfit. [ 7 ] Some critics have pointed out that certain business models are built around customer surveillance and data manipulation, and that voluntary compliance is therefore unlikely. [ 8 ] In 2013, Rubinstein and Good used Google and Facebook privacy incidents to conduct a counterfactual analysis in order to identify lessons learned of value for regulators when recommending privacy by design. The first was that "more detailed principles and specific examples" would be more helpful to companies. The second is that "usability is just as important as engineering principles and practices". The third is that there needs to be more work on "refining and elaborating on design principles – both in privacy engineering and usability design", including efforts to define international privacy standards. The final lesson learned is that "regulators must do more than merely recommend the adoption and implementation of privacy by design." [ 8 ] The advent of GDPR, with its maximum fine of 4% of global turnover, now provides a balance between business benefit and turnover and addresses the voluntary compliance criticism and the requirement from Rubinstein and Good that "regulators must do more than merely recommend the adoption and implementation of privacy by design". [ 8 ] Rubinstein and Good also highlighted that privacy by design could result in applications that exemplified privacy by design, and their work was well received. [ 50 ] [ 8 ] The May 2018 paper by the European Data Protection Supervisor Giovanni Buttarelli , Preliminary Opinion on Privacy by Design , states, "While privacy by design has made significant progress in legal, technological and conceptual development, it is still far from unfolding its full potential for the protection of the fundamental rights of individuals. The following sections of this opinion provide an overview of relevant developments and recommend further efforts".
[ 12 ] The executive summary makes a number of recommendations to EU institutions and sets out follow-up actions the EDPS will take. The European Data Protection Supervisor Giovanni Buttarelli set out the requirement to implement privacy by design in his article. [ 51 ] The European Union Agency for Network and Information Security (ENISA) provided a detailed report, Privacy and Data Protection by Design – From Policy to Engineering , on implementation. [ 52 ] The Summer School on real-world crypto and privacy provided a tutorial on "Engineering Privacy by Design". [ 53 ] The OWASP Top 10 Privacy Risks Project for web applications gives hints on how to implement privacy by design in practice. The OASIS Privacy by Design Documentation for Software Engineers (PbD-SE) [ 26 ] offers a privacy extension/complement to OMG's Unified Modeling Language (UML) and serves as a complement to OASIS' eXtensible Access Control Mark-up Language (XACML) and Privacy Management Reference Model (PMRM). Privacy by design guidelines have been developed to operationalise some of the high-level privacy-preserving ideas into more granular, actionable advice, [ 54 ] [ 55 ] such as recommendations on how to implement privacy by design in existing (data) systems . However, the application of privacy by design guidelines by software developers still remains a challenge. [ 56 ]
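As a small, hypothetical illustration of how one of the principles ("privacy as the default setting") can be turned into the kind of granular, actionable advice mentioned above, a settings object can be written so that the most protective values apply unless the user explicitly opts in. This sketch in Python is not drawn from any cited standard or guideline; the field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AccountPrivacySettings:
    """Hypothetical user settings: every field defaults to the most protective
    value, so a user who does nothing keeps their privacy intact."""
    profile_visibility: str = "friends_only"    # not "public" by default
    searchable_by_email: bool = False
    share_data_with_partners: bool = False      # sharing is opt-in, never opt-out
    location_history_enabled: bool = False
    ad_personalization: bool = False

    def opted_in(self):
        """List only the data uses the user has actively enabled."""
        return [name for name, value in vars(self).items() if value is True]

settings = AccountPrivacySettings()      # a new account starts fully private
print(settings.opted_in())               # []
settings.ad_personalization = True       # an explicit, informed opt-in
print(settings.opted_in())               # ['ad_personalization']
```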
https://en.wikipedia.org/wiki/Privacy_by_default
Meta Platforms Inc. , or Meta for short (formerly known as Facebook ), has faced a number of privacy concerns. These stem partly from the company's revenue model, which involves selling information collected about its users for a variety of purposes, including advertisement targeting. Meta Platforms Inc. has also been involved in a number of data breaches. These and other issues are described below, including user data concerns, vulnerabilities in the company's platform, investigations by pressure groups and government agencies, and even issues involving students. In addition, employers and other organizations and individuals have been known to use Meta Platforms Inc. for their own purposes. As a result, individuals’ identities and private information have sometimes been compromised without their permission. In response to these growing privacy concerns, some pressure groups and government agencies have increasingly asserted users’ right to privacy and to control their personal data. In September 2024, the Federal Trade Commission released a report summarizing the responses of nine companies (including Facebook) to orders issued by the agency pursuant to Section 6(b) of the Federal Trade Commission Act of 1914 requiring them to provide information about their collection and use of data on users and non-users (including children and teenagers). The report found that the companies' user and non-user data practices left individuals vulnerable to identity theft , stalking , unlawful discrimination, emotional distress and mental health issues , social stigma, and reputational harm. [ 1 ] [ 2 ] [ 3 ] In 2010 the Electronic Frontier Foundation identified two personal information aggregation techniques called "connections" and "instant personalization". They demonstrated that anyone could get access to information saved to a Facebook profile, even if the information was not intended to be made public. [ 4 ] A "connection" is created when a user clicks a "Like" button for a product or service, either on Facebook itself or an external site. Facebook treats such relationships as public information, and the user's identity may be displayed on the Facebook page of the product or service. [ 4 ] Instant personalization was a pilot program that shared Facebook account information with affiliated sites, such as sharing a user's list of "liked" bands with a music website, so that when the user visits the site, their preferred music plays automatically. The EFF noted that "For users that have not opted out, Instant Personalization is instant data leakage. As soon as you visit the sites in the pilot program (Yelp, Pandora, and Microsoft Docs) the sites can access your name, your picture, your gender, your current location, your list of friends, all the Pages you have Liked—everything Facebook classifies as public information. Even if you opt-out of Instant Personalization, there's still data leakage if your friends use Instant Personalization websites—their activities can give away information about you, unless you block those applications individually." [ 4 ] On December 27, 2012 CBS News reported that Randi Zuckerberg , sister of Facebook founder Mark Zuckerberg, criticized a friend for being "way uncool" in sharing a private Facebook photo of her on Twitter, only to be told that the image had appeared on a friend-of-a-friend's Facebook news feed. Commenting on this misunderstanding of Facebook's privacy settings , Eva Galperin of the EFF said "Even Randi Zuckerberg can get it wrong.
That's an illustration of how confusing they can be." [ 5 ] In August 2007 the code used to generate Facebook's home and search page as visitors browse the site was accidentally made public. [ 6 ] [ 7 ] A configuration problem on a Facebook server caused the PHP code to be displayed instead of the web page the code should have created, raising concerns about how secure private data on the site was. A visitor to the site copied, published and later removed the code from his web forum, claiming he had been served and threatened with legal notice by Facebook. [ 8 ] Facebook's response was quoted by the site that broke the story: [ 9 ] A small fraction of the code that displays Facebook web pages was exposed to a small number of users due to a single misconfigured web server that was fixed immediately. It was not a security breach and did not compromise user data in any way. Because the code that was released powers only Facebook user interface, it offers no useful insight into the inner workings of Facebook. The reprinting of this code violates several laws and we ask that people not distribute it further. In November Facebook launched Beacon , a system (discontinued in September 2009) [ 10 ] where third-party websites could include a script by Facebook on their sites, and use it to send information about the actions of Facebook users on their site to Facebook, prompting serious privacy concerns. Information such as purchases made and games played were published in the user's news feed. An informative notice about this action appeared on the third party site and allowed the user to cancel it. The user could also cancel it on Facebook. Originally if no action was taken, the information was automatically published. On November 29 this was changed to require confirmation from the user before publishing each story gathered by Beacon. On December 1 Facebook's credibility in regard to the Beacon program was further tested when it was reported that The New York Times "essentially accuses" Mark Zuckerberg of lying to the paper and leaving Coca-Cola , which is reversing course on the program, with a similar impression. [ 11 ] A security engineer at CA, Inc. also claimed in a November 29, 2007, blog post that Facebook collected data from affiliate sites even when the consumer opted out and even when not logged into the Facebook site. [ 12 ] On November 30, 2007, the CA security blog posted a Facebook clarification statement addressing the use of data collected in the Beacon program: [ 13 ] When a Facebook user takes a Beacon-enabled action on a participating site, information is sent to Facebook for Facebook to operate Beacon technologically. If a Facebook user clicks 'No, thanks' on the partner site notification, Facebook does not use the data and deletes it from its servers. Separately, before Facebook can determine whether the user is logged in, some data may be transferred from the participating site to Facebook. In those cases, Facebook does not associate the information with any individual user account, and deletes the data as well. The Beacon service ended in September 2009 along with the settlement of a class-action lawsuit against Facebook resulting from the service. [ 10 ] On September 5, 2006, Facebook introduced two new features called " News Feed " and "Mini-Feed". The first of the new features, News Feed, appears on every Facebook member's home page , displaying recent Facebook activities of the member's friends. 
The second feature, Mini-Feed, keeps a log of similar events on each member's profile page. [ 14 ] Members can manually delete items from their Mini-Feeds if they wish to do so, and through privacy settings can control what is actually published in their respective Mini-Feeds. Some Facebook members still feel that the ability to opt out of the entire News Feed and Mini-Feed system is necessary, as evidenced by a statement from the Students Against Facebook News Feed group, which peaked at over 740,000 members in 2006. [ 15 ] Reacting to users' concerns, Facebook developed new privacy features to give users some control over information about them that was broadcast by the News Feed. [ 16 ] According to subsequent news articles, members have widely regarded the additional privacy options as an acceptable compromise. [ 17 ] In May 2010 Facebook added privacy controls and streamlined its privacy settings, giving users more ways to manage status updates and other information broadcast to the public News Feed. [ 18 ] Among the new privacy settings is the ability to control who sees each new status update a user posts: Everyone, Friends of Friends, or Friends Only. Users can now hide each status update from specific people as well. [ 19 ] However, a user who presses "like" or comments on the photo or status update of a friend cannot prevent that action from appearing in the news feeds of all the user's friends, even non-mutual ones. The "View As" option, used to show a user how privacy controls filter out what a specific given friend can see, only displays the user's timeline and gives no indication that items missing from the timeline may still be showing up in the friend's own news feed. Facebook had allowed users to deactivate their accounts but not actually remove account content from its servers. A Facebook representative explained to a student from the University of British Columbia that users had to clear their own accounts by manually deleting all of the content including wall posts, friends, and groups. The New York Times noted the issue and raised a concern that emails and other private user data remain indefinitely on Facebook's servers. [ 20 ] Facebook subsequently began allowing users to permanently delete their accounts in 2010. Facebook's Privacy Policy now states, "When you delete an account, it is permanently deleted from Facebook." [ 21 ] A notable ancillary effect of social-networking websites is the ability for participants to mourn publicly for a deceased individual. On Facebook, friends often leave messages of sadness, grief, or hope on the individual's page, transforming it into a public book of condolences. This particular phenomenon has been documented at a number of schools. [ 22 ] [ 23 ] [ 24 ] Facebook originally held a policy that profiles of people known to be deceased would be removed after 30 days due to privacy concerns. [ 25 ] Due to user response, Facebook changed its policy to place deceased members' profiles in a "memorialization state". [ 26 ] Facebook's Privacy Policy regarding memorialization says, "If we are notified that a user is deceased, we may memorialize the user's account. In such cases we restrict profile access to confirmed friends and allow friends and family to write on the user's Wall in remembrance. We may close an account if we receive a formal request from the user's next of kin or other proper legal request to do so." [ 21 ] Some of these memorial groups have also caused legal issues.
Notably, on January 1, 2008, one such memorial group posted the identity of murdered Toronto teenager Stefanie Rengel, whose family had not yet given the Toronto Police Service their consent to release her name to the media, and the identities of her accused killers, in defiance of Canada's Youth Criminal Justice Act , which prohibits publishing the names of the under-age accused. [ 27 ] While police and Facebook staff attempted to comply with the privacy regulations by deleting such posts, they noted difficulty in effectively policing the individual users who repeatedly republished the deleted information. [ 28 ] In July 2007 Adrienne Felt, an undergraduate student at the University of Virginia, discovered a cross-site scripting (XSS) hole in the Facebook Platform that could inject JavaScript into profiles. She used the hole to import custom CSS and demonstrate how the platform could be used to violate privacy rules or create a worm. [ 29 ] Facebook offers privacy controls to allow users to choose who can view their posts: only friends, friends and friends of friends, everyone, custom (specific choice of which friends can see posts). While these options exist, there are still methods by which otherwise unauthorized third parties can view a post. For example, posting a picture and marking it as only viewable by friends, but tagging someone else as appearing in that picture, causes the post to be viewable by friends of the tagged person(s). [ 30 ] Photos taken of people by others can be posted on Facebook without the knowledge or consent of people appearing in the image; persons may have multiple photos which feature them on Facebook without being aware of it. A study has suggested that a photo of a person which reflects poorly on them posted online can have a more harmful effect than losing a password. [ 31 ] When commenting on a private post, the commenting user is not informed if the post they commented on is later made public – which would make their comment on said post also publicly viewable. [ 30 ] Quit Facebook Day was an online event which took place on May 31, 2010 (coinciding with Memorial Day ), in which Facebook users stated that they would quit the social network due to privacy concerns. [ 32 ] It was estimated that 2% of Facebook users coming from the United States would delete their accounts. [ 33 ] However, only 33,000 (roughly 0.0066% of its roughly 500 million members at the time) users quit the site. [ 34 ] The number one reason for users to quit Facebook was privacy concerns (48%), being followed by a general dissatisfaction with Facebook (14%), negative aspects regarding Facebook friends (13%), and the feeling of getting addicted to Facebook (6%). Facebook quitters were found to be more concerned about privacy, more addicted to the Internet, and more conscientious. [ 35 ] Facebook enabled an automatic facial recognition feature in June 2011, called "Tag Suggestions", a product of a research project named " DeepFace ". [ 36 ] The feature compares newly uploaded photographs to those of the uploader's Facebook friends, to suggest photo tags. National Journal Daily claims "Facebook is facing new scrutiny over its decision to automatically turn on a new facial recognition feature aimed at helping users identify their friends in photos". [ 37 ] Facebook has defended the feature, saying users can disable it. [ 38 ] Facebook introduced the feature on an opt-out basis. 
[ 39 ] European Union data-protection regulators said they would investigate the feature to see if it violated privacy rules. [ 38 ] [ 40 ] Naomi Lachance stated in a web blog for NPR, All Tech Considered , that Facebook's facial recognition is right 98% of the time compared to the FBI's 85% out of 50 people. However, the accuracy of Facebook searches is due to its larger, more diverse photo selection compared to the FBI's closed database. [ 41 ] Mark Zuckerberg showed no worries when speaking about Facebook's AIs, saying, "Unsupervised learning is a long-term focus of our AI research team at Facebook, and it remains an important challenge for the whole AI research community" and "It will save lives by diagnosing diseases and driving us around more safely. It will enable breakthroughs by helping us find new planets and understand Earth's climate. It will help in areas we haven't even thought of today". [ 42 ] In May 2016 Facebook faced a lawsuit in Illinois for violations of the Biometric Information Privacy Act . [ 43 ] In February 2021, the company settled, agreeing to pay $650 million, and shut down the feature in December 2021. [ 44 ] Following the shutdown, Cher Scarlett , a former Apple security engineer, in January 2022 tweeted a photo that she had been auto-tagged in by someone unknown to her prior to shutdown. The photo was from the 19th century and she said that she learned it was her great-great-great-grandmother of Volga German ancestry, saying the technology was "dangerous" and "off-putting", and pointed to the implication of genocide . [ 45 ] In September 2024, Meta said it scraped all Australian adult users' public photos and posts on Facebook to train its AI without an opt-out option. [ 46 ] An article published by USA Today in November 2011 claimed that Facebook creates logs of pages visited both by its members and by non-members, relying on tracking cookies to keep track of pages visited. [ 47 ] In early November 2015 Facebook was ordered by the Belgian Privacy Commissioner to cease tracking non-users, citing European laws, or risk fines of up to £250,000 per day. [ 48 ] As a result, instead of removing tracking cookies, Facebook banned non-users in Belgium from seeing any material on Facebook, including publicly posted content, unless they sign in. Facebook criticized the ruling, saying that the cookies provided better security. [ 49 ] [ 50 ] By statistics, 63% of Facebook profiles are automatically set "visible to the public", meaning anyone can access the profiles that users have updated. Facebook also has its own built-in messaging system that people can send messages to any other user, unless they have disabled the feature to "from friends only". Stalking is not only limited to SNS stalking, but can lead to further "in-person" stalking because nearly 25% of real-life stalking victims reported it started with online instant messaging (e.g., Facebook chat ). [ 51 ] In December 2018 it emerged that Facebook had, during the period 2010–2018, granted access to users' private messages, address book contents, and private posts, without the users' consent, to more than 150 third parties including Microsoft, Amazon, Yahoo, Netflix, and Spotify. This had been occurring despite public statements from Facebook that it had stopped such sharing years earlier. 
[ 52 ] In December 2018 it emerged that Facebook's mobile app reveals the user's location to Facebook, even if the user does not use the "check in" feature and has configured all relevant settings within the app so as to maximize location privacy. [ 53 ] In February 2019 it emerged that a number of Facebook apps, including Flo , had been sending users' health data such as blood pressure and ovulation status to Facebook without users' informed consent. [ 54 ] [ 55 ] [ 56 ] [ 57 ] New York governor Andrew Cuomo called the practice an "outrageous abuse of privacy", ordered New York's department of state and department of financial services to investigate, and encouraged federal regulators to step in. [ 58 ] Facebook's acquisition of virtual reality headset manufacturer Oculus has resulted in ongoing concerns over the integration of its hardware and software platforms with Facebook user data. After the acquisition, Oculus co-founder Palmer Luckey had assured users that "you won't need to log into your Facebook account every time you wanna use the Oculus Rift ." [ 59 ] [ 60 ] Initially the Oculus desktop software provided opt-in integration with Facebook, primarily for identifying Facebook users within their Oculus friends list. [ 61 ] In August 2020, Facebook announced that all Oculus products and services would become subject to the unified Facebook privacy policy , code of conduct , and community guidelines moving forward, and that a Facebook account would be required to use Oculus products and services beginning in October. This policy took effect beginning with the Oculus Quest 2 . [ 62 ] [ 63 ] At that time, the ability to create a standalone Oculus account was discontinued, and it was announced that these accounts were to be deprecated effective January 1, 2023. [ 63 ] The requirements, as well as Facebook's later focus on " metaverse " platforms, have led to concerns over the amount of user data that could be collected by the company via virtual reality hardware and interactions, including the user's surroundings, motions and actions, and biometrics. [ 64 ] [ 65 ] Horizon , a VR social network run as part of the Oculus platform, is subject to Facebook policies, performs "rolling" recordings of interactions that could be uploaded to Facebook servers for the purposes of moderation if users are reported, and users can be observed by moderators without their knowledge if they are reported by others, or "signals" regarding that user are raised by other users via their own actions (such as muting). [ 66 ] In September 2020 Facebook pulled all Oculus products from the German market due to concerns from local regulators over the policy's compliance with the European Union's General Data Protection Regulation (GDPR). [ 67 ] In December 2020, the German Federal Cartel Office (Bundeskartellamt) launched an antitrust investigation into Facebook's mandatory integration of its social networking platform with its virtual reality products. [ 68 ] [ 69 ] At the Facebook Connect event in October 2021 (where Facebook, Inc. announced its rebranding as Meta), Zuckerberg stated that Meta was "working on making it so you can log in into Quest with an account other than your personal Facebook account". [ 70 ] The new "Meta account" was announced in July 2022 as a de facto replacement for Oculus accounts, which will not be explicitly tied to the Facebook social network, and can be linked with other members of the Facebook "Family of Apps" (Facebook, Messenger, Instagram, and WhatsApp). 
It was stated that Meta Quest users would be allowed to transition to Meta accounts and decouple their Facebook logins from its VR platforms. Ars Technica noted that the new terms of service and privacy policies associated with the Meta account system could allow enforcement of a real name policy (stating that users would be obligated to provide "accurate and up to date information (including registration information), which may include providing personal data"), and still allowed for "rampant" use of user data by Meta, especially if linked with other Facebook apps. [ 71 ] Personal information of 533 million Facebook users, including names, phone numbers, email addresses, and other user profile data, was posted to a hacking forum in April 2021. This information had been previously leaked through a feature allowing users to find each other by phone number, which Facebook fixed to prevent this abuse in September 2019. The company decided not to notify users of the data breach. [ 72 ] The Irish Data Protection Commission, which has jurisdiction over Facebook due to the location of its EU headquarters, then opened an investigation into the breach as a possible violation of GDPR. [ 73 ] Some users have alleged that Facebook's mobile app is capable of listening to conversations without consent, citing instances of the service displaying advertisements for products that they had only spoken about and had otherwise had no prior interactions with. In August 2019, Facebook admitted that it had been sending anonymized voice data from the Messenger app to third-party contractors for human review to improve the quality of its automatic transcription function, but denied that this data was being used for personalized advertising. The company also stated that it had recently suspended human reviews after scrutiny over Amazon, Apple, and Google's use of similar practices for their voice assistant platforms. [ 74 ] [ 75 ] There have been some concerns expressed regarding the use of Facebook as a means of surveillance and data mining . Two Massachusetts Institute of Technology (MIT) students used an automated script to download the publicly posted information of over 70,000 Facebook profiles from four schools (MIT, NYU , the University of Oklahoma , and Harvard University ) as part of a research project on Facebook privacy published on December 14, 2005. [ 76 ] Since then, Facebook has bolstered security protection for users, responding: "We've built numerous defenses to combat phishing and malware, including complex automated systems that work behind the scenes to detect and flag Facebook accounts that are likely to be compromised (based on anomalous activity like lots of messages sent in a short period of time, or messages with links that are known to be bad)." [ 77 ] A second clause that brought criticism from some users gave Facebook the right to sell users' data to private companies, stating "We may share your information with third parties, including responsible companies with which we have a relationship." This concern was addressed by spokesman Chris Hughes, who said, "Simply put, we have never provided our users' information to third party companies, nor do we intend to." [ 78 ] Facebook eventually removed this clause from its privacy policy. [ 21 ] In the United Kingdom the Trades Union Congress (TUC) has encouraged employers to allow their staff to access Facebook and other social-networking sites from work, provided they proceed with caution.
[ 79 ] In September 2007 Facebook drew criticism after it began allowing search engines to index profile pages, though Facebook's privacy settings allow users to turn this off. [ 80 ] Concerns were also raised on the BBC's Watchdog program in October 2007 when Facebook was shown to be an easy way to collect an individual's personal information to facilitate identity theft . [ 81 ] However, there is barely any personal information presented to non-friends – if users leave the privacy controls on their default settings, the only personal information visible to a non-friend is the user's name, gender, profile picture and networks. [ 82 ] [ non-primary source needed ] An article in The New York Times in February 2008 pointed out that Facebook does not actually provide a mechanism for users to close their accounts, and raised the concern that private user data would remain indefinitely on Facebook's servers. [ 20 ] As of 2013 [update] , Facebook gives users the options to deactivate or delete their accounts. Deactivating an account allows it to be restored later, while deleting it will remove the account "permanently", although some data submitted by that account ("like posting to a group or sending someone a message") will remain. [ 83 ] [ non-primary source needed ] In 2013 Facebook acquired Onavo , a developer of mobile utility apps such as Onavo Protect VPN , which is used as part of an "Insights" platform to gauge the use and market share of apps. [ 84 ] This data has since been used to influence acquisitions and other business decisions regarding Facebook products. [ 85 ] [ 86 ] [ 87 ] Criticism of this practice emerged in 2018, when Facebook began to advertise the Onavo Protect VPN within its main app on iOS devices in the United States. Media outlets considered the app to effectively be spyware due to its behavior, adding that the app's listings did not readily disclaim Facebook's ownership of the app and its data collection practices. [ 88 ] [ 89 ] Facebook subsequently pulled the iOS version of the app, citing new iOS App Store policies forbidding apps from performing analytics on the usage of other apps on a user's device. [ 90 ] [ 91 ] [ 92 ] [ 93 ] Since 2016 Facebook has also run "Project Atlas"—publicly known as "Facebook Research"—a market research program inviting teenagers and young adults between the ages of 13 and 35 to have data such as their app usage, web browsing history , web search history, location history , personal messages , photos, videos, emails, and Amazon order history, analyzed by Facebook. Participants would receive up to $20 per-month for participating in the program. Facebook Research is administered by third-party beta testing services, including Applause, and requires users to install a Facebook root certificate on their phone. After a January 2019 report by TechCrunch on Project Atlas, which alleged that Facebook bypassed the App Store by using an Apple enterprise program for apps used internally by a company's employees, Facebook refuted the article but later announced its discontinuation of the program on iOS. [ 94 ] [ 95 ] On January 30, 2019 Apple temporarily revoked Facebook's Enterprise Developer Program certificates for one day, which caused all of the company's internal iOS apps to become inoperable. 
[ 96 ] [ 97 ] [ 98 ] Apple stated that "Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple", and that the certificates were revoked "to protect our users and their data". [ 96 ] US Senators Mark Warner , Richard Blumenthal , and Ed Markey separately criticized Facebook Research's targeting of teenagers, and promised to sponsor legislation to regulate market research programs. [ 99 ] [ 100 ] In 2010 the Wall Street Journal found that many of Facebook's top-rated apps—including apps from Zynga and Lolapps —were transmitting identifying information to "dozens of advertising and Internet tracking companies" like RapLeaf . The apps used an HTTP referer that exposed the user's identity and sometimes their friends' identities. Facebook said that "While knowledge of user ID does not permit access to anyone’s private information on Facebook, we plan to introduce new technical systems that will dramatically limit the sharing of User ID’s". A blog post by a member of Facebook's team further stated that "press reports have exaggerated the implications of sharing a user ID", though still acknowledging that some of the apps were passing the ID in a manner that violated Facebook's policies. [ 101 ] [ 102 ] In 2010 Canadian security consultant Ron Bowes of Skull Security created a BitTorrent download consisting of the names of about 100 million Facebook users. Facebook likened the information to what is listed in a phone book. It included some who had opted not to be found by search engines, and some who did not realize their information was public. Bowes created the list to get statistical information about user names, which can be used in both penetration testing and computer break-ins . [ 103 ] [ 104 ] In 2009 and 2010, because Facebook did not require HTTPS connections except at login, a routing glitch at AT&T caused cookies to end up on the wrong users' phones. This resulted in some Facebook users having continuous access to another person's account instead of their own. [ 105 ] In 2018 Facebook admitted [ 106 ] [ 107 ] that an app made by Global Science Research and Alexandr Kogan, related to Cambridge Analytica , was able in 2014 [ 108 ] to harvest personal data of up to 87 million Facebook users without their consent, by exploiting their friendship connections to the users who sold their data via the app. [ 109 ] Following the revelations of the breach, several public figures, including industrialist Elon Musk and WhatsApp cofounder Brian Acton , announced that they were deleting their Facebook accounts, using the hashtag "#deletefacebook". [ 110 ] [ 111 ] [ 112 ] Facebook was also criticized for allowing the 2012 Barack Obama presidential campaign to analyze and target select users by providing the campaign with friendship connections of users who signed up for an application. However, users signing up for the application were aware that their data, but not the data of their friends, was going to a political party. [ 113 ] [ 114 ] [ 115 ] [ 116 ] [ 117 ] In September 2018 a software bug meant that photos that had been uploaded to Facebook accounts, but that had not been "published" (and which therefore should have remained private between the user and Facebook), were exposed to app developers. [ 118 ] Approximately 6.8 million users and 1,500 third-party apps were affected.
[ 118 ] In March 2019 Facebook admitted that it had mistakenly stored "hundreds of millions" of passwords of Facebook and Instagram users in plaintext (as opposed to being hashed and salted ) on multiple internal systems accessible only to Facebook engineers, dating as far back as 2012. Facebook stated that affected users would be notified, but that there was no evidence that this data had been abused or leaked. [ 119 ] [ 120 ] In April 2019 Facebook admitted that its subsidiary Instagram also stored millions of unencrypted passwords. [ 121 ] Facebook has denied for years that it listens to conversations and then serves ads based on them; however, the company has been shown to have lied about its policies in the past. [ 122 ] In 2016, a Facebook spokeswoman stated: "Facebook does not use your phone's microphone to inform ads or to change what you see in News Feed. Some recent articles have suggested that we must be listening to people's conversations in order to show them relevant ads. This is not true. We show ads based on people's interests and other profile information, not what you’re talking out loud about." [ 123 ] Government and local authorities rely on Facebook and other social networks to investigate crimes and obtain evidence to help establish a crime, provide location information, establish motives, prove and disprove alibis, and reveal communications. [ 124 ] Federal, state, and local investigations have not been restricted to profiles that are publicly available or willingly provided to the government; Facebook has willingly provided information in response to government subpoenas or requests, except with regard to private, unopened inbox messages less than 181 days old, which would require a warrant and a finding of probable cause under the federal Electronic Communications Privacy Act (ECPA). One 2011 article noted that "even when the government lacks reasonable suspicion of criminal activity and the user opts for the strictest privacy controls, Facebook users still cannot expect federal law to stop their 'private' content and communications from being used against them". [ 125 ] Facebook's privacy policy states that "We may also share information when we have a good faith belief it is necessary to prevent fraud or other illegal activity, to prevent imminent bodily harm, or to protect ourselves and you from people violating our Statement of Rights and Responsibilities. This may include sharing information with other companies, lawyers, courts or other government entities". [ 125 ] Since the U.S. Congress has failed to meaningfully amend the ECPA to protect most communications on social-networking sites such as Facebook, and since the U.S. Supreme Court has largely refused to recognize a Fourth Amendment privacy right to information shared with a third party, no federal statutory or constitutional right prevents the government from issuing requests that amount to fishing expeditions, and there is no Facebook privacy policy that forbids the company from handing over private user information that suggests any illegal activity. [ 125 ] The 2013 mass surveillance disclosures identified Facebook as a participant in the U.S. National Security Agency 's PRISM program . Facebook now reports the number of requests it receives for user information from governments around the world.
[ 126 ] In 2022 Nebraska police charged a teenage girl and her mother after obtaining Facebook messages which allegedly showed that they performed an illegal self-managed medication abortion. [ 127 ] On May 31, 2008 the Canadian Internet Policy and Public Interest Clinic (CIPPIC), per Director Phillipa Lawson, filed a 35-page complaint with the Office of the Privacy Commissioner against Facebook based on 22 breaches of the Canadian Personal Information Protection and Electronic Documents Act (PIPEDA). University of Ottawa law students Lisa Feinberg, Harley Finkelstein , and Jordan Eric Plener, initiated the "minefield of privacy invasion" suit. Facebook's Chris Kelly contradicted the claims, saying that: "We've reviewed the complaint and found it has serious factual errors—most notably its neglect of the fact that almost all Facebook data is willingly shared by users." [ 128 ] Assistant Privacy Commissioner Elizabeth Denham released a report of her findings on July 16, 2009. [ 129 ] In it, she found that several of CIPPIC's complaints were well-founded. Facebook agreed to comply with some, but not all, of her recommendations. [ 129 ] The Assistant Commissioner found that Facebook did not do enough to ensure users granted meaningful consent for the disclosure of personal information to third parties and did not place adequate safeguards to prevent unauthorized access by third-party developers to personal information. [ 129 ] In August 2011 the Irish Data Protection Commissioner (DPC) started an investigation after receiving 22 complaints by europe-v-facebook.org, which was founded by a group of Austrian students. [ 130 ] The DPC stated in first reactions that the Irish DPC is legally responsible for privacy on Facebook for all users within the European Union [ 131 ] and that he will "investigate the complaints using his full legal powers if necessary". [ 132 ] The complaints were filed in Ireland because all users who are not residents of the United States or Canada have a contract with "Facebook Ireland Ltd", located in Dublin , Ireland. Under European law Facebook Ireland is the "data controller" for facebook.com, and therefore, facebook.com is governed by European data protection laws. [ 131 ] Facebook Ireland Ltd. was established by Facebook Inc. to avoid US taxes (see Double Irish arrangement ). [ 133 ] The group 'europe-v-facebook.org' made access requests at Facebook Ireland and received up to 1,222 pages of data per person in 57 data categories that Facebook was holding about them, [ 134 ] including data that was previously removed by the users. [ 135 ] The group claimed that Facebook failed to provide some of the requested data, including "likes", facial recognition data, data about third party websites that use "social plugins" visited by users, and information about uploaded videos. Currently the group claims that Facebook holds at least 84 data categories about every user. [ 136 ] The first 16 complaints target different problems, from undeleted old "pokes" all the way to the question if sharing and new functions on Facebook should be opt-in or opt-out. [ 137 ] The second wave of 6 more complaints was targeting more issues including one against the "Like" button. [ 138 ] The most severe could be a complaint that claims that the privacy policy, and the consent to the privacy policy is void under European laws. 
In an interview with the Irish Independent , a spokesperson said that the DPC would "go and audit Facebook, go into the premises and go through in great detail every aspect of security". He continued by saying: "It's a very significant, detailed and intense undertaking that will stretch over four or five days." In December 2011 the DPC published its first report on Facebook. This report was not legally binding but suggested changes that Facebook should undertake by July 2012. The DPC planned to review Facebook's progress in July 2012. [ needs update ] In spring 2012 Facebook had to undertake many changes (e.g., an extended download tool intended to allow users to exercise the European right of access to all stored information, and an update of the worldwide privacy policy ). [ 139 ] These changes were seen by europe-v-facebook.org as insufficient to comply with European law. The download tool does not allow, for example, access to all data. The group launched our-policy.org [ 140 ] to suggest improvements to the new policy, which it saw as a setback for privacy on Facebook. Since the group managed to get more than 7,000 comments on Facebook's pages, Facebook had to hold a worldwide vote on the proposed changes. Such a vote would only have been binding if 30% of all users had taken part. Facebook did not promote the vote, resulting in only 0.038% participation, with about 87% voting against Facebook's new policy. The new privacy policy took effect on the same day. [ 141 ] In early 2019 it was reported that Facebook had spent years lobbying extensively against privacy protection laws around the world, such as the GDPR. [ 142 ] [ 143 ] The lobbying included efforts by Sheryl Sandberg to "bond" with European officials, including Enda Kenny (then Prime Minister of Ireland , where Facebook's European operations are based), to influence them in Facebook's favor. [ 142 ] Other politicians reportedly lobbied by Facebook in relation to privacy protection laws included George Osborne (then Chancellor of the Exchequer ), Pranab Mukherjee (then President of India ), and Michel Barnier . [ 142 ] In 2021 Facebook attempted to use "a legal trick" to bypass GDPR regulations in the European Union by including a personal data processing agreement in what it considered to be a "contract" (Article 6(1)(b) GDPR) rather than "consent" (Article 6(1)(a) GDPR), which would lead to users effectively granting Facebook very broad permission to process their personal data with most of the GDPR controls voided. The Irish Data Protection Commission (DPC) expressed its preliminary approval of this bypass and sent its draft decision to other data protection authorities in the European Union, at which point the document was leaked to the media and published on noyb.eu. [ 144 ] The DPC sent a takedown notice to noyb.eu, which the portal, refusing to self-censor, also published. [ 145 ] In December 2019 the Hungarian Competition Authority fined Facebook around US$4 million for false advertising , ruling that Facebook could not market itself as a "free" (no-cost) service because the detailed personal information it uses to deliver targeted advertising constitutes compensation that users must provide in order to use the service. [ 146 ] Students who post illegal or otherwise inappropriate material have faced disciplinary action from their universities, colleges, and schools, including expulsion. [ 147 ] Others posting libelous content relating to faculty have also faced disciplinary action.
[ 148 ] The Journal of Education for Business states that "a recent study of 200 Facebook profiles found that 42% had comments regarding alcohol, 53% had photos involving alcohol use, 20% had comments regarding sexual activities, 25% had seminude or sexually provocative photos, and 50% included the use of profanity." [ 149 ] It is inferred that negative or incriminating Facebook posts can affect alumni's and potential employers' perception of them. This perception can greatly impact the students' relationships, ability to gain employment, and maintain school enrollment. The desire for social acceptance leads individuals to want to share the most intimate details of their personal lives along with illicit drug use and binge drinking. Too often, these portrayals of their daily lives are exaggerated and/or embellished to attract others like minded to them. [ 149 ] Students in general have a higher engagement when using Facebook groups in class, as students can comment on each other's short writings or videos. [ 150 ] Increased teacher-student and student-student interaction, improved performance, and convenience of learning were some of the benefits of using Facebook as an educational instrument. [ 151 ] However, it limits student's writing to be shorter since checking on spelling and typing on a phone keyboard is relatively more time-consuming. [ 150 ] On January 23, 2006 The Chronicle of Higher Education continued an ongoing national debate on social networks with an opinion piece written by Michael Bugeja, director of the Journalism school at Iowa State University , entitled "Facing the Facebook". [ 152 ] Bugeja, author of the Oxford University Press text Interpersonal Divide (2005), quoted representatives of the American Association of University Professors and colleagues in higher education to document the distraction of students using Facebook and other social networks during class and at other venues in the wireless campus . Bugeja followed up on January 26, 2007, in The Chronicle with an article titled "Distractions in the Wireless Classroom", [ 153 ] quoting several educators across the country who were banning laptops in the classroom. Similarly, organizations such as the National Association for Campus Activities , [ 154 ] the Association for Education in Journalism and Mass Communication , [ 155 ] and others have hosted seminars and presentations to discuss ramifications of students' use of Facebook and other social-networking sites. The EDUCAUSE Learning Initiative has also released a brief pamphlet entitled "7 Things You Should Know About Facebook" aimed at higher education professionals that "describes what [Facebook] is, where it is going, and why it matters to teaching and learning". [ 156 ] Some research [ 157 ] [ 158 ] [ 159 ] on Facebook in higher education suggests that there may be some small educational benefits associated with student Facebook use, including improving engagement which is related to student retention. [ 159 ] 2012 research has found that time spent on Facebook is related to involvement in campus activities. [ 158 ] This same study found that certain Facebook activities like commenting and creating or RSVPing to events were positively related to student engagement while playing games and checking up on friends was negatively related. Furthermore, using technologies such as Facebook to connect with others can help college students be less depressed and cope with feelings of loneliness and homesickness. 
[ 160 ] As of February 2012 only four published peer-reviewed studies had examined the relationship between Facebook use and grades. [ 157 ] [ 161 ] [ 162 ] [ 163 ] The findings vary considerably. Pasek et al. (2009) [ 163 ] found no relationship between Facebook use and grades. Kolek and Saunders (2008) [ 162 ] found no differences in overall grade point average (GPA) between users and non-users of Facebook. Kirschner and Karpinski (2010) [ 161 ] found that Facebook users reported a lower mean GPA than non-users. Junco's (2012) [ 157 ] study clarifies the discrepancies in these findings. While Junco (2012) [ 157 ] found a negative relationship between time spent on Facebook and student GPA in his large sample of college students, the real-world impact of the relationship was negligible. Furthermore, Junco (2012) [ 157 ] found that sharing links and checking up on friends were positively related to GPA while posting status updates was negatively related. In addition to noting the differences in how Facebook use was measured among the four studies, Junco (2012) [ 157 ] concludes that the ways in which students use Facebook are more important in predicting academic outcomes. Performative surveillance is the notion that people are aware that they are being surveilled on websites like Facebook, and use that surveillance as an opportunity to portray themselves in a way that connotes a certain lifestyle, a portrayal which may or may not distort how they are perceived in reality. [ 164 ] In an effort to surveil the personal lives of current or prospective employees, some employers have asked employees to disclose their Facebook login information. This has resulted in the passing of a bill in New Jersey making it illegal for employers to ask potential or current employees for access to their Facebook accounts. [ 165 ] Although the U.S. government has yet to pass a national law protecting prospective employees' social networking accounts from employers, the Fourth Amendment of the US Constitution can protect prospective employees in specific situations. [ 166 ] [ 167 ] Many companies examine the Facebook profiles of job candidates looking for reasons not to hire them. Because of this, many employees feel that their online social media rights and privacy are being violated. In addition, employees begin to make performative profiles in which they purposefully portray themselves as professional and as having desirable personality traits. [ 166 ] According to a survey of hiring managers by CareerBuilder.com, the most common deal breakers they found on Facebook profiles include references to drinking, poor communication skills, inappropriate photos, and lying about skills and/or qualifications. [ 168 ] Facebook requires employees and contractors working for them to give permission for Facebook to access their personal profiles, including friend requests and personal messages. A 2011 study in the online journal First Monday examines how parents consistently enable children as young as 10 years old to sign up for accounts, directly violating Facebook's policy banning young visitors. This policy complies with a United States law, the 1998 Children's Online Privacy Protection Act , which requires operators of commercial websites to obtain verifiable parental consent before collecting personal information from children under 13. In jurisdictions where a similar law sets a lower minimum age, Facebook enforces the lower age.
Of the 1,007 households surveyed for the study, 76% of parents reported that their child joined Facebook at an age younger than 13, the minimum age in the site's terms of service. The study also reported that Facebook removes roughly 20,000 users each day for violating its minimum age policy. The study's authors also note, "Indeed, Facebook takes various measures both to restrict access to children and delete their accounts if they join." The findings of the study raise questions primarily about the shortcomings of United States federal law, but also implicitly continue to raise questions about whether or not Facebook does enough to publicize its terms of service with respect to minors. Only 53% of parents said they were aware that Facebook has a minimum signup age; 35% of these parents believe that the minimum age is merely a recommendation or thought the signup age was 16 or 18, not 13. [ 169 ] Phishing refers to a scam used by criminals to trick people into revealing passwords, credit card information, and other sensitive information. On Facebook, phishing attempts occur through message or wall posts from a friend's account that was breached. If the user takes the bait, the phishers gain access to the user's Facebook account and send phishing messages to the user's other friends. The point of the post is to get the users to visit a website with viruses and malware. [ 168 ] In April 2016 Buzzfeed published an article exposing drop shippers who were using Facebook and Instagram to swindle unsuspecting customers. Located mostly in China, these drop shippers and e-commerce sites would steal copyrighted images from larger retailers and influencers to gain credibility. After luring a customer with a low price for the item, they would then deliver a product that is nothing like what was advertised or deliver no product at all. [ 170 ]
https://en.wikipedia.org/wiki/Privacy_concerns_with_Facebook
The term private-collective model of innovation was coined by Eric von Hippel and Georg von Krogh in their 2003 publication in Organization Science . [ 1 ] This innovation model represents a combination of the private investment model and the collective-action innovation model. In the private investment model innovators appropriate financial returns from innovations through intellectual property rights such as patents, copyright, licenses, or trade secrets. Any knowledge spillover reduces the innovator's benefits, thus freely revealed knowledge is not in the interest of the innovator. The collective-action innovation model explains the creation of public goods which are defined by the non-rivalry of benefits and non-excludable access to the good. In this case the innovators do not benefit more than any one else not investing into the public good, thus free-riding occurs. In response to this problem, the cost of innovation has to be distributed, therefore governments typically invest into public goods through public funding. As combination of these two models, the private-collective model of innovation explains the creation of public goods through private funding. The model is based on the assumption that the innovators privately creating the public goods benefit more than the free-riders only consuming the public good. While the result of the investment is equally available to all, the innovators benefit through the process of creating the public good. Therefore, private-collective innovation occurs when the process-related rewards exceed the process-related costs. [ 2 ] [ 3 ] A laboratory study [ 4 ] traced the initiation of private-collective innovation to the first decision to share knowledge in a two-person game with multiple equilibria. The results indicate fragility: when individuals face opportunity costs to sharing their knowledge with others they quickly turn away from the social optimum of mutual sharing. The opportunity costs of the "second player", the second person deciding whether to share, have a bigger (negative) impact on knowledge sharing than the opportunity costs of the first person to decide. Overall, the study also observed sharing behavior in situations where none was predicted. Recent work [ 5 ] shows that a project will not "take off" unless the right incentives are in place for innovators to contribute their knowledge to open innovation from the beginning. The article [ 5 ] explores social preferences in the initiation of PCI. It conducted a simulation study that elucidates how inequality aversion, reciprocity, and fairness affect the underlying conditions that lead to the initiation of Private-collective innovation. While firms increasingly seek to cooperate with outside individuals and organizations to tap into their ideas for new products and services, mechanisms that motivate innovators to "open up" are critical in achieving the benefits of open innovation. The theory of private collective innovation has recently been extended by a study on the exclusion rights for technology in the competition between private-collective and other innovators. [ 6 ] The authors argue that the investment in orphan exclusion rights for technology serves as a subtle coordination mechanism against alternative proprietary solutions. Additionally, the research on private-collective innovation has been extended with theoretical explanations and empirical evidence of egoism and altruism as significant explanations for cooperation in private-collective innovation. 
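The two-person knowledge-sharing game described above can be illustrated with a small payoff-matrix sketch. The payoff values below are illustrative assumptions chosen only to reproduce the qualitative structure reported in the laboratory study (mutual sharing as the social optimum, multiple equilibria, and sharing that unravels as opportunity costs rise); they are not the parameters of the cited experiment.

# Illustrative sketch of a two-person knowledge-sharing game with opportunity
# costs. All payoff numbers are assumptions for demonstration purposes only.
import itertools

def build_payoffs(opportunity_cost):
    """Return payoffs (row player, column player) for actions Share/Withhold."""
    benefit = 5        # assumed joint benefit when both players share
    free_ride = 3      # assumed benefit from receiving knowledge without sharing
    c = opportunity_cost
    return {
        ("Share", "Share"): (benefit - c, benefit - c),
        ("Share", "Withhold"): (-c, free_ride),
        ("Withhold", "Share"): (free_ride, -c),
        ("Withhold", "Withhold"): (0, 0),
    }

def pure_nash_equilibria(payoffs):
    """Enumerate action profiles from which neither player wants to deviate."""
    actions = ("Share", "Withhold")
    equilibria = []
    for a1, a2 in itertools.product(actions, actions):
        u1, u2 = payoffs[(a1, a2)]
        best_for_1 = all(u1 >= payoffs[(alt, a2)][0] for alt in actions)
        best_for_2 = all(u2 >= payoffs[(a1, alt)][1] for alt in actions)
        if best_for_1 and best_for_2:
            equilibria.append((a1, a2))
    return equilibria

for cost in (1, 5):
    print(f"opportunity cost {cost}: {pure_nash_equilibria(build_payoffs(cost))}")
# With a low cost there are two equilibria (mutual sharing and mutual
# withholding); with a high cost only mutual withholding survives.

Under these assumed payoffs, a low opportunity cost yields both a payoff-dominant sharing equilibrium and a withholding equilibrium, mirroring the "multiple equilibria" of the study, while a high opportunity cost leaves only mutual withholding, consistent with participants turning away from the social optimum.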
Benbunan-Fich and Koufaris [ 7 ] show that contributions to a social bookmarking site are a combination of intentional and unintentional contributions. The intentional public contribution of bookmarks is driven by an egoistic motivation to contribute valuable information and thus demonstrate competence. The development of open source software / Free Software (hence named Free and Libre Open Source Software – FLOSS ) is the most prominent example of private-collective innovation. [ 8 ] By definition, FLOSS represents a public good . It is non-rival because copying and distributing software does not decrease its value, and it is non-excludable because FLOSS licenses enable everyone to use, change and redistribute the software without any restriction. While FLOSS is created by many unpaid individuals, it has been shown [ 9 ] [ 10 ] that technology firms invest substantially in the development of FLOSS. These companies release previously proprietary software under FLOSS licenses, employ programmers to work on established FLOSS projects, and fund entrepreneurial firms to develop certain features. In this way, private entities invest in the creation of public goods.
https://en.wikipedia.org/wiki/Private-collective_model_of_innovation
PrivateBin is self-hosted, open-source pastebin software that deletes pasted text after it has been viewed. It can be configured not to delete a paste after the first view, in which case there is an option of commenting on and replying to the paste, as in a forum . [ 2 ] All pastes on PrivateBin are encrypted and can be protected with a password. [ 3 ] [ 4 ] The encryption applies both in transit and at rest. [ 5 ] PrivateBin was forked from ZeroBin and supports a number of additional features. [ 6 ] This software article is a stub . You can help Wikipedia by expanding it .
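As an illustration of the client-side encryption model described above, the following minimal sketch (in Python, using the widely available cryptography package) encrypts a paste locally before it would be uploaded, so the server only ever stores ciphertext. This is a simplified illustration of the concept, not PrivateBin's actual implementation; the key handling and payload format are assumptions made for the example.

# Conceptual sketch only: encrypt a paste locally so the server never sees plaintext.
import base64
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_paste(plaintext: str):
    """Encrypt a paste locally; only the returned payload would go to the server."""
    key = AESGCM.generate_key(bit_length=256)   # kept by the author/reader, not the server
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext.encode("utf-8"), None)
    payload = {
        "nonce": base64.b64encode(nonce).decode(),
        "ciphertext": base64.b64encode(ciphertext).decode(),
    }
    # In a real system the key would travel out of band, for example in a URL
    # fragment, which browsers do not transmit to the server.
    return base64.urlsafe_b64encode(key).decode(), payload

def decrypt_paste(key_b64: str, payload: dict) -> str:
    """Recover the paste from the stored payload using the out-of-band key."""
    key = base64.urlsafe_b64decode(key_b64)
    nonce = base64.b64decode(payload["nonce"])
    ciphertext = base64.b64decode(payload["ciphertext"])
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode("utf-8")

key, stored = encrypt_paste("example paste")   # `stored` is what a server would keep
print(decrypt_paste(key, stored))              # prints: example paste

Because only the ciphertext and nonce are stored, the paste remains encrypted at rest on the server and in transit to readers, which is the property the article attributes to PrivateBin.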
https://en.wikipedia.org/wiki/PrivateBin
A Private Shared Wireless Network ( PSWN ) is a wide-area wireless radio telecommunications network developed and provided by different entities specifically for the use of public safety, utilities, [ 1 ] [ 2 ] machine-to-machine , and business communications. Its broad-area coverage allows for a greater signal range and a lower cost of implementation. Public safety agencies and businesses utilize [ 3 ] Private Shared Wireless Networks to send and receive data, communicate, and receive diagnostic information on their fixed equipment, vehicles or employees. A Private Shared Wireless Network is built to operate on frequencies that are separate from those of public cellular communications networks and other publicly accessible wireless cellular or radio networks, so as to avoid their associated network congestion and security vulnerabilities . Since 9/11 , when a lack of interoperability between first responders' radio communications [ 4 ] and interference issues with public networks led to loss of life, [ 5 ] the US federal government has expressed a need for interoperable communications networks for first responders. [ 6 ] One of the U.S. government's attempts at creating a private shared wireless network occurred in 2008, with FCC Spectrum Auction 73 . The radio spectrum being sold was divided into five blocks: A, B, C, D and E. [ 7 ] It was the D block that held special conditions placed upon the winner; mainly, that whoever purchased this spectrum would be required to build out a next-generation network that would also support public safety broadband services. [ 8 ] The only bid [ 9 ] on this block of spectrum did not meet the minimum reserve price, [ 10 ] and thus the shared network was never realized. As a result, many telecommunications equipment providers have begun building their own [ 11 ] private shared networks, [ 12 ] and selling various services and applications to businesses, [ 13 ] [ 14 ] such as AVL /GPS, asset tracking via RFID and barcode scanning , and 2-way messaging. A Private Shared Wireless Network is made up of multiple communications towers and base stations , each at a fixed location, spread across a given geographic area for ubiquitous coverage . Data exchange generally occurs between a company's assets, via a vehicle-mounted mobile data terminal or RFID , and a central dispatching logistics center. This gives the dispatcher the ability to monitor and correct the performance of these assets remotely. [ 15 ] Private Shared Wireless Networks are generally engineered as data-only communications networks. This allows for a greater number of users on the network and more reliable communications, as data makes much more efficient use of wireless spectrum than does voice. [ 16 ] Many regional and municipal governments have begun to create and engineer their own private networks over multiple-tower infrastructures. [ 17 ] Companies such as Cisco Systems have invested in Private Shared Wireless Networks as part of their strategic planning. [ 18 ] Private Shared Wireless Networks are an evolving solution addressing the communication and data requirements of government at the national, regional and municipal level, as well as utility companies, transportation companies and businesses that require a private solution operating via an established tower-based network. US government mandates have fueled a need for first responders to operate in an interoperable manner.
[ 19 ] A Private Shared Wireless Network is a network available to businesses that seek an alternative to networks hosted by cellular companies.
https://en.wikipedia.org/wiki/Private_Shared_Wireless_Network
Private cloud computing infrastructure is a category of cloud computing that provides comparable benefits to public cloud systems, such as self-service and scalability , but it does so via a proprietary framework. In contrast to public clouds, which cater to multiple entities, a private cloud is specifically designed for the requirements and objectives of one organization. A private cloud computing infrastructure constitutes a distinctive model of cloud computing that facilitates a secure and distinct cloud environment where only the intended client can function. [ 1 ] It can either be physically housed in the organization's in-house data center or be managed by a third-party provider. In a private cloud, the infrastructure and services are always sustained on a private network, and both the hardware and software are devoted exclusively to a single organization. [ 2 ] [ 3 ] The concept of private cloud infrastructure started to take shape around the mid-2000s, coinciding with the rise of other cloud computing forms. It came into existence as a solution to the shortcomings of public clouds, particularly concerns over data control, security, and network performance. [ 4 ] [ 5 ] IT departments began to mirror the automation and self-service features of the public cloud in their data centers. Over time, these services became more advanced, and private cloud technology has been refined to address businesses and organizations' diverse needs. [ 1 ] Private cloud computing infrastructure generally involves a mix of hardware, network infrastructure, and virtualization software. [ 6 ] [ 7 ] [ 8 ] Private cloud infrastructures are usually utilized by medium to large businesses and organizations that need robust control over their data, have extensive computing needs, or have specific regulatory or compliance obligations. This includes healthcare organizations, government agencies, financial institutions , and any business that needs to process and store large data volumes. [ 9 ]
https://en.wikipedia.org/wiki/Private_cloud_computing_infrastructure
In computer networking, a private message , personal message , or direct message (abbreviated as PM or DM ) refers to a private communication, often text-based, sent or received by a user of a private communication channel on any given platform. Unlike public posts, PMs are only viewable by the participants. Long a feature of IRC [ 1 ] and Internet forums , [ 2 ] private channels for PMs have also been prevalent features on instant messaging (IM) and on social media networks. [ 3 ] It may be either synchronous (e.g. on an IM) or asynchronous (e.g. on an Internet forum). The term private message (PM) originated as a feature on Facebook Messenger , while the term direct message (DM) originated as a feature on Twitter . Due to the popularity of the latter service, DM has since been appropriated by other platforms, such as Instagram , and is often genericized in popular usage. [ 4 ] [ 5 ] There are two main types of private messages. Besides serving as a tool to connect privately with friends and family, PMs have gained momentum in the workplace. Working professionals use PMs to reach coworkers in other spaces and increase efficiency during meetings. Although useful, using PMs in the workplace may blur the boundary between work and private lives. [ 10 ] [ 11 ] [ 8 ] [ 12 ] Some common forms of private messaging today include Facebook messaging (sometimes referred to as "inboxing"), Twitter direct messaging, and Instagram direct messaging. These forms of private messaging provide a private space on a usually public site. For instance, most activity on Twitter is public, but Twitter DMs provide a private space for communication between two users. This differs from mediums like email, texting, and Snapchat , where most or all activity is always private. [ 13 ] Modern forms of private messaging may include multimedia messages, such as pictures or videos. [ 14 ] [ 15 ] [ 16 ] [ 17 ] The development of computers sparked the information revolution , which changed the way people communicate. Peter Drucker published an article centering on the theme that the computer is to the Information Revolution what the railroad was to the Industrial Revolution ; railroads unified travel between the east and west coasts of the United States, whereas computers unified communication across the entire globe. This revolutionized many different forms of communication, but particularly the personal message. [ citation needed ] The first email system able to send mail between people using different host computers was launched via the ARPANET in 1971, and it revolutionized personal messaging by enabling users to send electronic messages to distant recipients. [ 18 ] The popularity of email has since skyrocketed, and it continues to be a widely used means of personal messaging. The advent of the Internet paved the way for communication through platforms and website portals like Yahoo! and AOL . Instant messaging systems became popular in the late 1990s, and as Internet communication links improved and personal computers became more capable, this functionality was merged into systems that also included voice and video communication, such as Skype (launched in 2003). In 2008, Facebook Chat launched, which evolved into Facebook Messenger in 2011 and allows users to message each other via the Facebook site. Twitter followed suit and introduced direct messages to its site in 2013.
[ citation needed ] Today, private messaging is a staple of established social media platforms and more recently-developed applications. In January 2014, Matthew Campbell and Michael Hurley filed a class-action lawsuit against Facebook for breaching the Electronic Communications Privacy Act . They alleged that private messages which contained URLs were being read and used to generate profit, through data mining and user profiling , and that it was misleading for Facebook to refer to the functionality as "private" with the implication that the communication was "free from surveillance". [ 19 ] In 2012, some Facebook users misinterpreted a redesign of the Facebook wall as publicly sharing private messages from 2008–2009. These were found to be public wall posts from those years, made at a time when it was not possible to like or comment on a wall post, making the notes look like private messages. [ 20 ]
https://en.wikipedia.org/wiki/Private_message
The Prix Georges Lemaître is an award created in 1995, in celebration of the centenary of the birth in 1894 of Georges Lemaître . The award was initiated by the Association des Anciens et Amis de l'Université catholique de Louvain (Association of the Alumni and Friends of the Université catholique de Louvain ), [ 1 ] together with the Fondation Georges Lemaître (Georges Lemaître Foundation). The prize, endowed with 25,000 euros as of 2003, [ 2 ] is awarded every two years [ 3 ] to a scientist who has made a remarkable contribution to the "development and dissemination of knowledge in the fields of cosmology, astronomy, astrophysics, geophysics, or space science" ( développement et à la diffusion des connaissances dans les domaines de la cosmologie, de l'astronomie, de l'astrophysique, de la géophysique, ou de la recherche spatiale ). [ 1 ] The winner is chosen by an international jury of scientists, chaired by the rector of the Université catholique de Louvain. This science awards article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Prix_Georges_Lemaître
The Prix Pierre Guzman ( Pierre Guzman Prize ) was the name given to two prizes, one astronomical and one medical. Both were established by the will of Anne Emilie Clara Goguet (died June 30, 1891), wife of Marc Guzman, and named after her son Pierre Guzman. [ 1 ] The astronomical prize was a sum of 100,000 francs, to be given to a person who succeeded in communicating with a celestial body other than Mars and receiving a response. Until this occurred, the will also allowed for the accumulated interest on the 100,000 francs to be given, every five years, to a person who had made significant progress in astronomy. The prize was to be awarded by the French Académie des sciences . [ 1 ] Pierre Guzman had been interested in the work of Camille Flammarion , the author of La planète Mars et ses conditions d'habitabilité (The Planet Mars and Its Conditions of Habitability, 1892). Communication with Mars was specifically excluded because many people at the time believed that Mars was inhabited, so communication with that planet would not be a difficult enough challenge. [ 2 ] The prize was later announced in 1900 by the French Académie des sciences . [ 3 ] The five-yearly prize of interest was awarded to a number of recipients, starting in 1905. The main prize was awarded to the crew of Apollo 11 in 1969. [ 9 ] [ 10 ] The medical prize was a sum of 50,000 francs, to be awarded by the French Académie de médecine to a person who succeeded in developing an effective treatment for the most common forms of heart disease. Until this occurred, the will also allowed for the accumulated interest to be given yearly to someone who had made progress in the treatment of heart disease. [ 1 ] [ 11 ] The yearly prize of interest was subsequently awarded to a number of recipients.
https://en.wikipedia.org/wiki/Prix_Guzman