Photopolymerization-based signal amplification (PBA) is a method of amplifying the detection signal from a molecular recognition event in an immunoassay by means of a light-initiated radical polymerization. [1] To distinguish a positive result from a negative one, PBA is coupled to a colorimetric readout, so that detection of the target analyte produces a color change, i.e., a positive signal. PBA can also be used to quantify the concentration of the analyte by measuring the intensity of the color. [2]
PBA is carried out by sequentially adding three solutions to a test strip and illuminating it with green light. First, a droplet of a patient's sample is loaded on a test strip whose surface is coated with immobilized antibodies. If the sample contains the target antigens, they bind to the immobilized antibodies (Figure 1a).
Second, eosin-conjugated antibodies are added to the patient's sample. These secondary antibodies bind specifically to the captured antigens, so that each bound antigen is sandwiched between an immobilized antibody and an eosin-conjugated antibody (Figure 1b). After ten minutes, the droplet is rinsed off the surface to ensure that only the sandwiched binding complexes remain before the third solution is added. [1]
Lastly, a droplet of a mixture of monomers (e.g., PEGDA and N-vinylpyrrolidone) and phenolphthalein is added to the test strip, and the droplet is illuminated with green visible light, which excites the eosin molecules and generates radicals (Figure 1c). Radical propagation then takes place and polymers are formed.
Because the phenolphthalein molecules are trapped within the polymers and therefore remain on the surface even after a further rinse, the test strip turns red when a base is added (Figure 1d). If, on the other hand, the patient's sample contains no target antigens, no sandwiched binding complexes form on the surface and no red color develops. [1]
Many radical polymerizations, including ATRP and RAFT, cannot proceed in an ambient environment because dissolved oxygen rapidly reacts with active radical species to form less reactive peroxyl radicals, inhibiting the polymerization. [3] Eosin-sensitized photopolymerization, by contrast, overcomes this oxygen inhibition with only sub-micromolar concentrations of free eosin in the PBA system, allowing radical polymerization even under ambient conditions. [4] Numerous mechanisms [3] [5] [6] [7] [8] have been proposed to explain this special ability of eosin; the most recent focuses on the regeneration of eosin accompanied by the production of superoxide. [8]
As shown in Figure 2, Liang et al. proposed that eosin radical anions (Eosin Y•−) react with oxygen to regenerate Eosin Y. [8] In this mechanism, the ground state of eosin (Eosin Y) absorbs visible light and becomes excited (Eosin Y*). The excited dye is then reduced, gaining one electron, by reaction with a tertiary amine, generating an amine radical and the eosin radical anion. This eosin radical is oxidized by reaction with oxygen, regenerating Eosin Y. The regeneration makes PBA efficient: oxygen is consumed through this photocatalytic cycle of Eosin Y, so polymerization can proceed in an ambient environment even when Eosin Y is present at only a few micromolar. [8]
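The cycle described above can be summarized as three elementary steps (a schematic restatement of the mechanism as described, with R3N denoting the tertiary amine co-initiator):

```latex
\begin{align*}
\text{Eosin Y} + h\nu &\rightarrow \text{Eosin Y}^{*} &&\text{(photoexcitation)}\\
\text{Eosin Y}^{*} + \mathrm{R_3N} &\rightarrow \text{Eosin Y}^{\bullet -} + \mathrm{R_3N}^{\bullet +} &&\text{(reduction by the amine)}\\
\text{Eosin Y}^{\bullet -} + \mathrm{O_2} &\rightarrow \text{Eosin Y} + \mathrm{O_2}^{\bullet -} &&\text{(regeneration, superoxide released)}
\end{align*}
```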
Eosin Y is not the only molecule with significant resilience to oxygen inhibition; methylene blue can also initiate photopolymerization in the presence of oxygen, as Padon reported. [9] However, Eosin Y is regarded as the best photoinitiator because its long excited-state lifetime and low fluorescence quantum yield (Φ) allow it to react with tertiary amines and generate radicals much faster than the alternatives. [10]
Quantification with PBA is achieved by measuring the intensity of the red phenolphthalein color, since a brighter red emerges when the sample contains a higher concentration of target antigens. If more antigens are bound to the surface antibodies, more eosin-conjugated antibodies bind to the captured analytes; photopolymerization at the surface is then faster and forms a thicker hydrogel film in which the phenolphthalein molecules are trapped. Because more phenolphthalein remains in the thicker film after further rinsing, the indicator gives a more intense red. [2]
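In practice, such quantification requires a calibration curve relating measured color intensity to known analyte concentrations. A minimal sketch of this step follows; the calibration values, the assumption of a linear response, and the function name are all invented for illustration:

```python
import numpy as np

# Hypothetical calibration data: red-channel intensity measured for
# strips developed with known antigen amounts (values invented for
# illustration only).
calib_conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])            # nmol
calib_intensity = np.array([0.02, 0.11, 0.20, 0.46, 0.88])   # arbitrary units

# Fit a simple linear calibration curve: intensity = a * conc + b.
a, b = np.polyfit(calib_conc, calib_intensity, 1)

def estimate_concentration(intensity: float) -> float:
    """Invert the linear calibration to estimate the antigen amount."""
    return (intensity - b) / a

print(f"Estimated amount: {estimate_concentration(0.33):.2f} nmol")
```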
Many other polymerization methods have been implemented as signal amplification tools, including ATRP (atom-transfer radical polymerization), RAFT (reversible addition-fragmentation chain-transfer polymerization), and enzyme-mediated redox polymerization. [1] However, many of them are unusable in ambient systems because they are susceptible to oxygen inhibition. To overcome this, an air-tolerant ATRP-based signal amplification [11] has been developed. It provides better sensitivity (~0.2 pmol) [11] than eosin-sensitized PBA (1-10 nmol), [2] but the air-tolerant ATRP takes much more time (~1 hour) [11] to reach that sensitivity than PBA does (~100 seconds). [2]
Photoprotection is the biochemical process that helps organisms cope with molecular damage caused by sunlight. Plants and other oxygenic phototrophs have developed a suite of photoprotective mechanisms to prevent photoinhibition and oxidative stress caused by excess or fluctuating light conditions. Humans and other animals have also developed photoprotective mechanisms to avoid UV photodamage to the skin, prevent DNA damage, and minimize the downstream effects of oxidative stress.
In organisms that perform oxygenic photosynthesis, excess light may lead to photoinhibition, or photoinactivation of the reaction centers, a process that does not necessarily involve chemical damage. When photosynthetic antenna pigments such as chlorophyll are excited by light absorption, unproductive reactions may occur by charge transfer to molecules with unpaired electrons. Because oxygenic phototrophs generate O2 as a byproduct of the photocatalyzed splitting of water (H2O), photosynthetic organisms carry a particular risk of forming reactive oxygen species.
Therefore, a diverse suite of mechanisms has developed in photosynthetic organisms to mitigate these potential threats, which become exacerbated under high irradiance or fluctuating light, in adverse environmental conditions such as cold or drought, and during nutrient deficiencies that cause an imbalance between energetic sinks and sources.
In eukaryotic phototrophs, these mechanisms include non-photochemical quenching mechanisms such as the xanthophyll cycle, biochemical pathways that serve as "relief valves", structural rearrangements of the complexes in the photosynthetic apparatus, and the use of antioxidant molecules. Higher plants sometimes employ strategies such as reorientation of leaf axes to minimize incident light striking the surface. Mechanisms may also act on a longer time-scale, such as up-regulation of stress-response proteins or down-regulation of pigment biosynthesis, although these processes are better characterized as "photoacclimatization" processes.
Cyanobacteria possess some unique strategies for photoprotection that have not been identified in plants or algae. [1] For example, most cyanobacteria possess an orange carotenoid protein (OCP), which serves as a novel form of non-photochemical quenching. [2] Another unique, albeit poorly understood, cyanobacterial strategy involves the IsiA chlorophyll-binding protein, which can aggregate with carotenoids and form rings around the PSI reaction center complexes to aid photoprotective energy dissipation. [3] Other cyanobacterial strategies may involve state transitions of the phycobilisome antenna complex, [4] photoreduction of O2 to water by the flavodiiron proteins, [5] and futile cycling of CO2. [6]
It is widely known that plants need light to survive, grow and reproduce. It is often assumed that more light is always beneficial; however, excess light can actually be harmful for some species. Just as animals require a fine balance of resources, plants require a specific balance of light intensity and wavelength for optimal growth (this can vary from plant to plant). Optimizing photosynthesis is essential for survival when environmental conditions are ideal, and for acclimation when conditions are severe. When exposed to high light intensity, a plant reacts to mitigate the harmful effects of excess light.
To protect themselves from excess light, plants employ a multitude of methods to minimize the harm it inflicts. They use a variety of photoreceptors to detect light intensity, direction and duration. In response to excess light, some photoreceptors can shift chloroplasts within the cell farther from the light source, decreasing the harm done by superfluous light. [7] Plants also produce enzymes essential to photoprotection, such as anthocyanin synthase; plants deficient in photoprotection enzymes are much more sensitive to light damage than plants with functioning ones. [8] In addition, plants produce a variety of secondary metabolites beneficial for their survival and protection from excess light. The secondary metabolites that provide this protection are commonly used in human sunscreens and pharmaceutical drugs to supplement the inadequate light protection innate to human skin cells. [9] Various pigments and compounds can be employed by plants as a form of UV photoprotection as well. [10]
Pigmentation is one method of photoprotection employed by a variety of plants. For example, in Antarctica, green mosses can be found naturally shaded by rocks or other physical barriers, while red-colored mosses of the same species are likely to be found in wind- and sun-exposed locations. This variation in color is due to light intensity. Photoreceptors in mosses, phytochromes (red wavelengths) and phototropins (blue wavelengths), assist in the regulation of pigmentation. To better understand this phenomenon, Waterman et al. analyzed the photoprotective qualities of UVACs (ultraviolet-absorbing compounds) and red pigmentation in Antarctic mosses. Specimens of the species Ceratodon purpureus, Bryum pseudotriquetrum and Schistidium antarctici were collected from an island region in East Antarctica, then grown and observed in a lab setting under constant light and water conditions to assess photosynthesis, UVAC production and pigmentation. Moss gametophytes of red and green varieties were exposed to light and consistent watering for two weeks. Following the growth observation, cell wall pigments were extracted from the specimens and analyzed by UV-Vis spectrophotometry, which measures absorbance across the UV and visible spectrum. UVACs are typically found in the cytoplasm of the cell; however, when exposed to high-intensity light, UVACs are transported into the cell wall. Mosses with higher concentrations of red pigments and UVACs located in the cell walls, rather than intracellularly, performed better in higher-intensity light, and the color change was found not to be due to chloroplast movement within the cell. UVACs and red pigments were found to function as long-term photoprotection in Antarctic mosses: in response to high-intensity light stress, their production is up-regulated. [10]
Given that plants respond differentially to varying intensities and wavelengths of light, it is essential to understand why these responses are important. Due to a steady rise in global temperatures in recent years, many plants have become more susceptible to light damage. Many factors, including soil nutrient richness, ambient temperature fluctuation and water availability, affect the photoprotection process in plants. Plants exposed to high light intensity coupled with water deficits display a significantly inhibited photoprotection response. [11] Although not yet fully understood, photoprotection is an essential function of plants.
Photoprotection of human skin is achieved by the extremely efficient internal conversion of DNA, proteins and melanin. Internal conversion is a photochemical process that converts the energy of a UV photon into small, harmless amounts of heat. If the energy of the UV photon were not transformed into heat, it would lead to the generation of free radicals or other harmful reactive chemical species (e.g., singlet oxygen or hydroxyl radical).
In DNA this photoprotective mechanism evolved four billion years ago, at the dawn of life. [12] Its purpose is to prevent direct and indirect DNA damage. The ultrafast internal conversion of DNA reduces the excited-state lifetime of DNA to only a few femtoseconds (10^−15 s); in this way the excited DNA does not have enough time to react with other molecules.
In melanin this mechanism developed later in the course of evolution. Melanin is such an efficient photoprotective substance that it dissipates more than 99.9% of absorbed UV radiation as heat. [13] This means that less than 0.1% of excited melanin molecules undergo harmful chemical reactions or produce free radicals.
In the European Union and the United States, afamelanotide is indicated for the prevention of phototoxicity in adults with erythropoietic protoporphyria. [14] [15] [16] Afamelanotide is also being investigated as a method of photoprotection in the treatment of polymorphous light eruption, actinic keratosis and squamous cell carcinoma (a form of skin cancer). [17]
The cosmetic industry claims that UV filters act as an "artificial melanin", but the artificial substances used in sunscreens do not efficiently dissipate the energy of UV photons as heat. Instead, these substances have very long excited-state lifetimes. [18] In fact, the substances used in sunscreens are often used as photosensitizers in chemical reactions (see benzophenone).
Oxybenzone, titanium dioxide and octyl methoxycinnamate are photoprotective agents used in many sunscreens, providing broad-spectrum UV coverage that includes UVB and short-wave UVA rays. [19] [20]
Photoproteins are a type of enzyme produced by bioluminescent organisms. They add to the function of luciferins, whose usual light-producing reaction is catalyzed by the enzyme luciferase.
The term photoprotein was first used to describe the unusual chemistry of the luminescent system of Chaetopterus (a marine polychaete worm). [1] It was meant to distinguish such proteins from other light-producing proteins, because they do not exhibit the usual luciferin-luciferase reaction. [2]
Photoproteins do not display typical enzyme kinetics as seen in luciferases. Instead, when mixed with luciferin, they display luminescence proportional to the amount of the photoprotein. For example, the photoprotein aequorin produces a flash of light when luciferin and calcium are added, rather than the prolonged glow that is seen for luciferases when luciferin is added. In this respect, it may appear that photoproteins are not enzymes, when in fact they do catalyze their bioluminescence reactions. This is due to a fast catalytic step, which produces the light, and a slow regeneration step, in which the oxyluciferin is freed and another molecule of luciferin is then enabled to bind to the enzyme. [3] Because of the kinetically slow step, each aequorin molecule must "recharge" with another molecule of luciferin before it can emit light again, and this makes it appear as though it is not behaving as a typical enzyme.
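This flash-versus-glow behavior follows directly from the two-step kinetics described above. A minimal numerical sketch, with purely illustrative rate constants (not measured values for aequorin):

```python
import numpy as np

# Two-step kinetic sketch of photoprotein luminescence: a fast
# light-emitting step (k_fast) discharges the charged
# photoprotein-luciferin complex, and a slow regeneration step
# (k_slow) rebinds fresh luciferin. Rate constants are hypothetical.
k_fast, k_slow = 5.0, 0.01   # s^-1
dt, t_end = 0.001, 2.0

charged = 1.0      # charged complex (arbitrary units)
discharged = 0.0   # spent protein awaiting fresh luciferin
emitted = 0.0      # integrated light output

for _ in range(int(t_end / dt)):
    flash = k_fast * charged * dt      # fast catalytic step -> photons
    regen = k_slow * discharged * dt   # slow recharging step
    charged += regen - flash
    discharged += flash - regen
    emitted += flash

# Because k_fast >> k_slow, nearly all light appears as an initial
# flash whose total output is proportional to the starting amount of
# charged photoprotein, unlike a luciferase's sustained glow.
print(f"total light emitted: {emitted:.3f}")
```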
Photoproteins form a stable luciferin-photoprotein complex, often until the addition of another required factor such as Ca2+ in the case of aequorin.
Photopyroelectric detection can be understood as "photo" plus "pyroelectric": it refers to any optical measurement system that uses a pyroelectric detector or imaging system. Pyroelectricity is the capability of certain materials to generate a transient voltage when they are heated or cooled. When the temperature changes, the atoms shift slightly within the crystal structure. [1] This shift can be described as a change in the polarization of the material, and the change in polarization produces a voltage across the crystal. If the temperature is then held constant for a period of time, the pyroelectric voltage gradually disappears because of leakage currents. Leakage can occur through several paths, for example electrons moving through the crystal, ions moving through the air, or current leaking through a voltmeter connected to the crystal.
The term photopyroelectric thus refers to techniques for optical systems built mainly around an imaging system and a pyroelectric detector.
The pyroelectric detector serves as the sensor of the system. Because a pyroelectric crystal has a unique polar axis, it is structurally asymmetric. Polarization change induced by a change in temperature, the so-called pyroelectric effect, is widely exploited in sensor technology. Pyroelectric chips must be prepared very thin and are electroded in a direction perpendicular to the polar axis, with an absorbing (blackened) layer on the upper electrode. When this absorbing layer is exposed to infrared radiation, the pyroelectric chip heats up and a surface charge appears. [2] If the radiation is interrupted, a charge opposite in sign to the polarization is generated. This charge is very small, so it is converted to a signal voltage by ultra-low-noise, ultra-low-leakage junction field-effect transistors (JFETs) or operational amplifiers (op-amps) before it is neutralized by the internal resistance of the crystal. [3] Pyroelectric detectors retain a high signal-to-noise ratio even at 4 kHz, [4] whereas, for example, a thermopile in a Fourier-transform infrared spectrometer performs well only at a few hertz.
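As a rough illustration of the magnitudes involved, the short-circuit pyroelectric current is commonly written as i = p·A·(dT/dt), where p is the pyroelectric coefficient and A the electrode area. A minimal sketch with illustrative values (the coefficient below is typical of LiTaO3-class materials, but all numbers here are assumptions):

```python
# Pyroelectric current i = p * A * dT/dt, converted to a voltage by a
# transimpedance readout stage. All numbers are illustrative.
p = 2.3e-4         # pyroelectric coefficient, C/(m^2*K) (LiTaO3-like)
area = 1e-6        # electrode area, m^2 (1 mm^2)
dT_dt = 0.5        # heating rate from modulated illumination, K/s
R_feedback = 1e10  # transimpedance gain of the readout amplifier, ohms

i_pyro = p * area * dT_dt        # short-circuit current, A
v_signal = i_pyro * R_feedback   # output voltage of current-mode readout

print(f"pyroelectric current: {i_pyro:.2e} A")  # ~1.2e-10 A
print(f"signal voltage: {v_signal:.2f} V")       # ~1.15 V
```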
The imaging system is a general term for remote-sensor systems that acquire remote-sensing images of objects without photography. [5] Imaging is usually performed by scanning, with the data recorded on tape or indirectly on film. According to the structure of the system, the scanning method and the detector, such systems are roughly divided into:
1. Optomechanical scanning, such as multispectral scanners. A mirror scans the object surface, and the image data are output after spectral splitting, detection and photoelectric conversion.
2. Electronic scanning, for example a return-beam vidicon television camera, which is an image-plane scanning method. The scene is optically imaged on the target surface of the photoconductive tube, and the signal is amplified and output after being scanned by an electron beam.
3. Solid-state self-scanning, for example the pushbroom optoelectronic sensor of the French SPOT satellite, which is also an image-plane scanning method. The object is imaged by an objective lens onto a detector array consisting of many charge-coupled devices (CCDs), which perform the photoelectric conversion and output.
4. Antenna scanning, such as side-looking radar, an active remote-sensing imaging system using an object-plane scanning method. It transmits a microwave beam through an antenna and receives the echo reflected by the scene, which is demodulated and output. [6]
Photopyroelectric structures have been used to measure the thermal effusivity of composite materials inserted into the detection cell as a liner. The technique relies on scanning the thickness of the coupling fluid (the TWRC method). Two composites were chosen for one such study: (i) a liquid, a water-based nanofluid containing gold nanoparticles, and (ii) a solid, a urea-fumaric acid eutectic in a 1:1 ratio. The thermal effusivity of the nanofluid was found to be independent of the volume concentration of the gold particles. For the urea-fumaric acid eutectic, the measured thermal effusivity differed considerably from that of the pure raw materials, indicating the formation of a compound. [7]
Self-consistent photopyroelectric calorimetry for liquids. The front photopyroelectric (FPPE) configuration, combined with the thermal-wave resonator cavity (TWRC) method, is designed to measure the thermal effusivity and diffusivity of liquids. The same technique can yield several static and dynamic thermal parameters: two of them are obtained in a straightforward, direct manner, while the other two are calculated indirectly. [8] The method has been applied to liquids such as various oils, water, glycerin and ethylene glycol. [9]
Because of the coupling fluid between the sample and the detector, photopyroelectric measurements in the standard configuration systematically underestimate the thermal diffusivity of solid samples. To eliminate this effect, a new method has been proposed based on a transparent pyroelectric sensor, a transparent coupling fluid, and a self-normalization procedure. In this way the thermal diffusivity of opaque solid samples, as well as the optical absorption coefficient of translucent solid samples, can be measured accurately. [10]
Photopyroelectric calorimetry is important in many related sciences because it permits simultaneous measurement of thermal parameters. The specific heat is closely related to the microstructure of the material and governs the energy content of the system; calorimetry therefore plays an important role in characterizing physical systems, especially near phase transitions, where energy fluctuations are significant. Work in this area has summarized the ability of photopyroelectric techniques to follow the variation of specific heat and other thermal parameters with temperature close to a transition, [11] describing the working principle, the theoretical basis, the experimental configurations, and the advantages of the technique over conventional calorimetry. [12] Integration into a calorimetric setting makes it possible to perform calorimetric studies while simultaneously probing complementary optical, structural and electrical properties. High-temperature-resolution results have been reported for several phase-transition parameters in different systems under various configurations.
Optimized configuration of pyroelectric sensors. It has been shown that, at constant laser power, the response of a pyroelectric sensor does not depend on the spatial distribution of the laser-beam intensity. [13] In the voltage mode, the signal amplitude is therefore inversely proportional to the effective area of the sensor, so the pyroelectric signal increases as the effective area decreases while the total area of the sensor remains constant. On this basis, a method has been proposed to improve the PPE signal measured in voltage mode by optimizing the metal electrode structure of the sensor. [14] Experiments show that this method can increase the signal amplitude tenfold without increasing the electrical noise.
Types of defects
Surface defects of optical components mainly comprise surface blemishes and surface contaminants. Surface blemishes are processing defects such as pits, scratches, open bubbles, chipped edges and fractures on the surface of polished optical components, introduced mainly during processing or subsequent handling. Scratches are linear marks on the surface of an optical component. By length they are divided into long and short scratches, with a limit of 2 mm: a scratch longer than 2 mm is a long scratch, and one shorter than 2 mm is a short scratch. [15] For short scratches, the evaluation criterion is their cumulative length. Scratches are generally easier to detect than defects such as pits.
Pitting refers to pits and depressions in the surface of an optical component. The surface roughness inside a pit is large, its width and depth are approximately the same, and its edges are irregular. Typically, defects with an aspect ratio greater than 4:1 are classified as scratches, while those below 4:1 are classified as pits.
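The two classification rules above (the 2 mm scratch-length limit and the 4:1 aspect-ratio threshold) can be captured in a small helper; a minimal sketch, with the function name and inputs chosen for illustration:

```python
def classify_defect(length_mm: float, width_mm: float) -> str:
    """Classify a surface defect using the rules described above:
    aspect ratio > 4:1 -> scratch (long if > 2 mm, else short);
    otherwise -> pit."""
    aspect_ratio = max(length_mm, width_mm) / min(length_mm, width_mm)
    if aspect_ratio > 4:
        return "long scratch" if max(length_mm, width_mm) > 2.0 else "short scratch"
    return "pit"

print(classify_defect(3.5, 0.05))  # long scratch
print(classify_defect(0.8, 0.10))  # short scratch
print(classify_defect(0.3, 0.20))  # pit
```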
Bubbles are formed by gases that were not removed in time during the manufacture or processing of the optical component. Since the pressure of the gas is evenly distributed in all directions, bubbles are usually spherical.
Broken edges are defects at the edge of an optical component. Although the edge lies outside the effective aperture, it is still a source of light scattering and therefore also affects optical performance.
Negative impact of defects
Surface blemishes, as microscopic local defects introduced during fabrication, affect the surface properties of optical components and can lead to serious consequences such as malfunction of optical instruments. In short, surface defects are detrimental to the performance of optical systems, and the root cause is light scattering. [16] The damage that surface defects cause to the component itself and to the entire optical system manifests in the following ways:
(1) Degraded beam quality. Surface defects scatter light, so a considerable fraction of the beam energy is lost in passing over a defect, reducing the quality of the beam.
(2) Thermal effects of defects. Because the area around a surface defect absorbs more energy than other areas, the resulting thermal effects can cause local deformation of the component, damage coating layers, and thereby harm the entire optical system.
(3) Damage to other optical components in the system. In a laser system illuminated by a high-energy beam, light scattered by the defective surface is absorbed by other optical components, producing uneven irradiation. [17] When the damage threshold of the component material is reached, the quality of the transmitted light suffers and the components themselves are damaged, which can cause serious harm to the whole optical system. [18]
(4) Defects reduce the cleanliness of the field of view. Too many blemishes on an optical component spoil its microscopic appearance. In addition, they trap fine dust, microorganisms, polishing powder and other impurities, which cause the component to corrode, grow mold or fog, significantly degrading its basic performance.
Photoreceptor proteins are light-sensitive proteins involved in the sensing of and response to light in a variety of organisms. Some examples are rhodopsin in the photoreceptor cells of the vertebrate retina, phytochrome in plants, and bacteriorhodopsin and bacteriophytochromes in some bacteria. They mediate light responses as varied as visual perception, phototropism and phototaxis, as well as responses to light-dark cycles such as circadian rhythm and other photoperiodisms, including control of flowering times in plants and mating seasons in animals.
Photoreceptor proteins typically consist of a protein attached to a non-protein chromophore (sometimes referred to as a photopigment, though photopigment may also refer to the photoreceptor as a whole). The chromophore reacts to light via photoisomerization or photoreduction, initiating a change in the receptor protein that triggers a signal transduction cascade. Chromophores found in photoreceptors include retinal (retinylidene proteins, for example rhodopsin in animals), [1] flavin (flavoproteins, for example cryptochrome in plants and animals) [2] and bilin (biliproteins, for example phytochrome in plants). [3] The plant protein UVR8 is exceptional among photoreceptors in that it contains no external chromophore; instead, UVR8 absorbs light through tryptophan residues within the protein itself. [4]
All the photoreceptors listed above allow plants to sense light with wavelengths ranging from 280 nm (UV-B) to 750 nm (far-red light). Plants use light of different wavelengths as environmental cues both to alter their position and to trigger important developmental transitions. [7] The waveband most prominently responsible for plant responses is blue light, which can trigger cell elongation, plant orientation and flowering. [8] One of the most important processes regulated by photoreceptors is photomorphogenesis. When a seed germinates underground in the absence of light, its stem rapidly elongates upward. When it breaks through the surface of the soil, photoreceptors perceive light, and the activated photoreceptors cause a change in developmental program: the plant starts producing chlorophyll and switches to photosynthetic growth. [9]
(See also: Eyespot apparatus)
Photoredox catalysis is a branch of photochemistry that uses single-electron transfer. Photoredox catalysts are generally drawn from three classes of materials: transition-metal complexes, organic dyes, and semiconductors. While organic photoredox catalysts were dominant throughout the 1990s and early 2000s, [1] soluble transition-metal complexes are more commonly used today.
Sensitizers absorb light to give redox-active excited states. For many metal-based sensitizers, excitation is realized as a metal-to-ligand charge transfer, whereby an electron moves from the metal (e.g., a d orbital) to an orbital localized on the ligands (e.g., the π* orbital of an aromatic ligand). This initial excited electronic state relaxes to a singlet excited state through internal conversion, a process in which energy is dissipated as vibrational energy (heat) rather than as electromagnetic radiation. This singlet excited state can relax further by two distinct processes: the catalyst may fluoresce, radiating a photon and returning to the singlet ground state, or it may move to the lowest-energy triplet excited state (a state in which two unpaired electrons have the same spin) by a second non-radiative process termed intersystem crossing.
Direct relaxation of the excited triplet to the ground state, termed phosphorescence, requires both emission of a photon and inversion of the spin of the excited electron. This pathway is slow because it is spin-forbidden, so the triplet excited state has a substantial average lifetime. For the common photosensitizer tris-(2,2'-bipyridyl)ruthenium (abbreviated [Ru(bipy)3]2+ or [Ru(bpy)3]2+), the lifetime of the triplet excited state is approximately 1100 ns. This lifetime is sufficient for other relaxation pathways (specifically, electron-transfer pathways) to occur before decay of the catalyst to its ground state.
The long-lived triplet excited state accessible by photoexcitation is both a more potent reducing agent and a more potent oxidizing agent than the ground state of the catalyst. Since the sensitizer is coordinatively saturated, electron transfer must occur by an outer-sphere process, in which the electron tunnels between the catalyst and the substrate.
Marcus' theory of outer sphere electron transfer predicts that such a tunneling process will occur most quickly in systems where the electron transfer is thermodynamically favorable (i.e. between strong reductants and oxidants) and where the electron transfer has a low intrinsic barrier.
The intrinsic barrier of electron transfer derives from the Franck-Condon principle, which states that an electronic transition takes place more quickly given greater overlap between the initial and final electronic states. Interpreted loosely, this principle suggests that the barrier of an electronic transition is related to the degree to which the system must reorganize. For an electronic transition within a system, the barrier is related to the "overlap" between the initial and final wave functions of the excited electron, i.e., the degree to which the electron needs to "move" in the transition.
In an intermolecular electron transfer, a similar role is played by the degree to which the nuclei move in response to their new electronic environment. Immediately after electron transfer, the nuclear arrangement of the molecule, previously at equilibrium, represents a vibrationally excited state and must relax to its new equilibrium geometry. Rigid systems, whose geometry does not depend greatly on oxidation state, therefore experience less vibrational excitation during electron transfer and have a lower intrinsic barrier. Photocatalysts such as [Ru(bipy)3]2+ are held in a rigid arrangement by flat, bidentate ligands arranged in an octahedral geometry around the metal center, so the complex does not undergo much reorganization during electron transfer. Since electron transfer of these complexes is fast, it is likely to take place within the duration of the catalyst's active state, i.e., during the lifetime of the triplet excited state.
To regenerate the ground state, the catalyst must participate in a second outer-sphere electron transfer. In many cases, this electron transfer takes place with a stoichiometric two-electron reductant or oxidant, although in some cases this step involves a second reagent.
Since the electron transfer step of the catalytic cycle takes place from the triplet excited state, it competes with phosphorescence as a relaxation pathway. Stern-Volmer experiments measure the intensity of phosphorescence while varying the concentration of each possible quenching agent; when the concentration of the actual quenching agent is varied, the rate of electron transfer and the degree of phosphorescence are affected. This relationship is modeled by the Stern-Volmer equation:

I0 / I = 1 + kq τ0 [Q]

Here, I0 and I denote the emission intensity without and with the quenching agent present, kq the rate constant of the quenching process, τ0 the excited-state lifetime in the absence of quenching agent, and [Q] the concentration of quenching agent. Thus, if the excited-state lifetime of the photoredox catalyst is known from other experiments, the rate constant of quenching by a single reaction component can be determined by measuring the change in emission intensity as the concentration of that component changes.
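In practice kq is extracted from a linear fit of I0/I against [Q]. A minimal sketch with invented data points (the lifetime below is the ~1100 ns value quoted earlier for [Ru(bipy)3]2+; the intensities are hypothetical):

```python
import numpy as np

# Hypothetical Stern-Volmer quenching data.
conc_Q = np.array([0.00, 0.01, 0.02, 0.04, 0.08])     # quencher conc., mol/L
intensity = np.array([1.00, 0.82, 0.69, 0.53, 0.36])  # relative emission

I0 = intensity[0]
ratio = I0 / intensity  # Stern-Volmer ratio I0/I

# Linear fit: I0/I = 1 + K_SV * [Q], where K_SV = kq * tau0.
K_SV = np.polyfit(conc_Q, ratio - 1.0, 1)[0]  # slope of the fit

tau0 = 1100e-9      # excited-state lifetime, s
kq = K_SV / tau0    # bimolecular quenching rate constant

print(f"K_SV = {K_SV:.1f} L/mol")
print(f"kq   = {kq:.2e} L/(mol*s)")
```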
The redox potentials of photoredox catalysts must be matched to the reaction's other components. While ground-state redox potentials are easily measured by cyclic voltammetry or other electrochemical methods, the redox potential of an electronically excited state cannot be measured directly by these methods. [2] However, two methods exist for estimating excited-state redox potentials, and one method exists for measuring them directly. One way to estimate the excited-state redox potentials is to compare the rates of electron transfer from the excited state to a series of ground-state reactants whose redox potentials are known. A more common method is to use equations developed by Rehm and Weller, which describe the excited-state potentials as corrections of the ground-state potentials:

E*1/2(cat+/cat*) = E1/2(cat+/cat) − E0,0 + wr

E*1/2(cat*/cat−) = E1/2(cat/cat−) + E0,0 − wr
In these formulas, E*1/2 represents the reduction or oxidation potential of the excited state, E1/2 the reduction or oxidation potential of the ground state, E0,0 the energy difference between the zeroth vibrational levels of the ground and excited states, and wr the work term, an electrostatic interaction that arises from the separation of charges during electron transfer between two chemical species. The zero-zero excitation energy E0,0 is usually approximated by the corresponding transition in the fluorescence spectrum. This method allows approximate excited-state redox potentials to be calculated from more easily measured ground-state redox potentials and spectroscopic data.
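As a rough worked example, neglecting the work term and using approximate literature values for [Ru(bipy)3]2+ (a ground-state oxidation potential of about +1.3 V vs SCE and a zero-zero excitation energy of about 2.1 eV):

```latex
E^{*}_{1/2} \approx E_{1/2} - E_{0,0} \approx (+1.3\,\mathrm{V}) - (2.1\,\mathrm{V}) \approx -0.8\,\mathrm{V\ vs\ SCE}
```

That is, the excited complex is a far stronger electron donor than the ground-state complex, which is what makes photoexcited [Ru(bipy)3]2+ useful as a single-electron reductant.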
Direct measurement of the excited-state redox potentials is possible by applying a method known as phase-modulated voltammetry. This method works by shining light onto an electrochemical cell to generate the desired excited-state species while modulating the intensity of the light sinusoidally, so that the concentration of the excited-state species is not constant. In fact, the concentration of excited-state species in the cell should change exactly in phase with the intensity of the light incident on the electrochemical cell. If the potential applied to the cell is strong enough for electron transfer to occur, the change in concentration of the redox-competent excited state can be measured as an alternating current (AC). Furthermore, the phase shift of the AC current relative to the intensity of the incident light corresponds to the average lifetime of an excited-state species before it engages in electron transfer.
Charts of the redox potentials of the most common photoredox catalysts are available for quick access. [3]
The relative reducing and oxidizing natures of these photocatalysts can be understood by considering the electronegativity of the ligands and of the complex's metal center. More electronegative metals and ligands stabilize electrons better than their less electronegative counterparts, so complexes with more electronegative ligands are more oxidizing than those with less electronegative ligands. For example, the ligands 2,2'-bipyridine and 2-phenylpyridine are isoelectronic, containing the same number and arrangement of electrons; phenylpyridine replaces one of the nitrogen atoms in bipyridine with a carbon atom. Carbon is less electronegative than nitrogen, so it holds electrons less tightly. Since the remainder of the ligand is identical and phenylpyridine holds electrons less tightly than bipyridine, phenylpyridine is more strongly electron-donating and less electronegative as a ligand. Hence, complexes with phenylpyridine ligands are more strongly reducing and less strongly oxidizing than equivalent complexes with bipyridine ligands.
Similarly, a fluorinated phenylpyridine ligand is more electronegative than phenylpyridine, so complexes with fluorine-containing ligands are more strongly oxidizing and less strongly reducing than equivalent unsubstituted phenylpyridine complexes. The electronic influence of the metal center on the complex is more complex than the ligand effect. According to the Pauling scale, both ruthenium and iridium have an electronegativity of 2.2. If this were the sole factor relevant to redox potentials, complexes of ruthenium and iridium with the same ligands would be equally powerful photoredox catalysts. However, considering the Rehm-Weller equation, the spectroscopic properties of the metal play a role in determining the redox properties of the excited state. [4] In particular, the parameter E0,0 is related to the emission wavelength of the complex and therefore to the size of the Stokes shift, the difference in energy between the maximum absorption and emission of a molecule. Typically, ruthenium complexes have large Stokes shifts and hence low-energy emission wavelengths and small zero-zero excitation energies compared to iridium complexes. In effect, while ground-state ruthenium complexes can be potent reductants, the excited-state complex is a far less potent reductant or oxidant than its iridium equivalent. This makes iridium preferred for the development of general organic transformations, because the stronger redox potentials of the excited catalyst allow the use of weaker stoichiometric reductants and oxidants or of less reactive substrates. [4]
Counter-ion identity
It is often the case that these photocatalysts are paired with a counter-ion, as with the example complex tris-(2,2'-bipyridyl)ruthenium, which is accompanied by two anions that balance the overall charge of the ion pair to zero. However, some transition-metal photoredox catalysts, such as tris(2-phenylpyridine)iridium (often abbreviated Ir(ppy)3), exist without a counter-ion. The significance of the counter-ion depends on the degree of ion association between the photoredox catalyst and its counter-ion(s), which in turn depends on the solvent used for the reaction. Although photophysical properties such as redox potential, excitation energy and ligand electronegativity have often been considered the key parameters governing the use and reactivity of these complexes, counter-ion identity has been shown to play a significant role in low-polarity solvents. [5] [6] In particular, a tightly associated counter-ion increases the rate of electron transfer when the catalyst reduces a substrate but significantly reduces the rate of electron transfer when it oxidizes a substrate. This is believed to occur because the counter-ion essentially "blocks" electron transfer into the photoredox complex by shielding its more positively charged region, whereas tight counter-ion association pushes electron density away from the catalyst's metal center, making an electron easier to transfer from the catalyst (this applies, of course, only where the photoredox catalyst is a cation and the counter-ion an anion). Counter-ion identity is thus an additional parameter to consider when developing new photoredox reactions.
The earliest applications of photoredox catalysis to reductive dehalogenation were limited by narrow substrate scope or competing reductive coupling. [7]
Unactivated carbon-iodine bonds can be reduced using the strongly reducing photocatalyst tris(2-phenylpyridine)iridium (Ir(ppy)3). [8] The increased reducing power of Ir(ppy)3 compared to [Ru(bipy)3]2+ allows direct reduction of the carbon-iodine bond without recourse to a stoichiometric reductant. The iridium complex transfers an electron to the substrate, causing fragmentation of the substrate and oxidation of the catalyst to the Ir(IV) state; the oxidized photocatalyst is returned to its original oxidation state by oxidizing a reaction additive.
Like tin-mediated radical dehalogenation reactions, photocatalytic reductive dehalogenation can be used to initiate cascade cyclizations. [9]
Iminium ions are potent electrophiles useful for generating C-C bonds in complex molecules. However, the condensation of amines with carbonyl compounds to form iminium ions is often unfavorable, sometimes requiring harsh dehydrating conditions. Alternative methods of iminium ion generation, particularly by oxidation of the corresponding amine, are therefore valuable synthetic tools. Iminium ions can be generated from activated amines using Ir(dtbbpy)(ppy)2PF6 as a photoredox catalyst. [10] This transformation is proposed to occur by oxidation of the amine to the aminium radical cation by the excited photocatalyst, followed by hydrogen atom transfer to a superstoichiometric oxidant, such as the trichloromethyl radical (•CCl3), to form the iminium ion. The iminium ion is then quenched by reaction with a nucleophile. Related transformations of amines with a wide variety of other nucleophiles have been investigated, such as cyanide (Strecker reaction), silyl enol ethers (Mannich reaction), dialkyl phosphates, allyl silanes (aza-Sakurai reaction), indoles (Friedel-Crafts reaction), and copper acetylides. [11] [12] [13] [14] [15]
Similar photoredox generation of iminium ions has also been achieved using purely organic photoredox catalysts, such as Rose Bengal and Eosin Y. [16] [17] [18]
An asymmetric variant of this reaction utilizes acyl nucleophile equivalents generated by N-heterocyclic carbene catalysis. [19] This method sidesteps the problem of poor enantioinduction from chiral photoredox catalysts by moving the source of enantioselectivity to the N-heterocyclic carbene.
The development of orthogonal protecting groups is a problem in organic synthesis because such groups allow each instance of a common functional group, such as the hydroxyl group, to be distinguished during the synthesis of a complex molecule. A very common protecting group for the hydroxyl group is the para-methoxybenzyl (PMB) ether, which is chemically similar to the less electron-rich benzyl ether. Selective cleavage of a PMB ether in the presence of a benzyl ether typically uses strong stoichiometric oxidants such as 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) or ceric ammonium nitrate (CAN); PMB ethers are far more susceptible to oxidation than benzyl ethers because they are more electron-rich. Selective deprotection of PMB ethers can instead be achieved with bis-(2-(2',4'-difluorophenyl)-5-trifluoromethylpyridine)-(4,4'-di-tert-butylbipyridine)iridium(III) hexafluorophosphate (Ir[dF(CF3)ppy]2(dtbbpy)PF6) and a mild stoichiometric oxidant such as bromotrichloromethane, BrCCl3. [20] The photoexcited iridium catalyst is reducing enough to fragment bromotrichloromethane into a trichloromethyl radical, bromide anion, and the Ir(IV) complex. The electron-poor fluorinated ligands make the iridium complex oxidizing enough to accept an electron from an electron-rich arene such as a PMB ether. After the arene is oxidized, it readily participates in hydrogen atom transfer with the trichloromethyl radical to form chloroform and an oxocarbenium ion, which is readily hydrolyzed to reveal the free alcohol. This reaction was shown to be orthogonal to many common protecting groups when a base was added to neutralize the HBr produced.
Cycloadditions and other pericyclic reactions are powerful transformations in organic synthesis because of their potential to rapidly generate complex molecular architectures, and particularly because of their capacity to set multiple adjacent stereocenters in a highly controlled manner. However, only certain cycloadditions are allowed under thermal conditions according to the Woodward-Hoffmann rules of orbital symmetry, or other equivalent models such as frontier molecular orbital (FMO) theory and the Dewar-Zimmerman model. Cycloadditions that are not thermally allowed, such as the [2+2] cycloaddition, can be enabled by photochemical activation of the reaction. Under uncatalyzed conditions, this activation requires high-energy ultraviolet light capable of altering the orbital populations of the reactive compounds. Alternatively, metal catalysts such as cobalt and copper have been reported to catalyze thermally forbidden [2+2] cycloadditions via single-electron transfer.
The required change in orbital populations can be achieved by electron transfer with a photocatalyst sensitive to lower-energy visible light. [21] [22] [23] [24] [25] Yoon demonstrated efficient intra- and intermolecular [2+2] cycloadditions of activated olefins, particularly enones and styrenes. Enones, or electron-poor olefins, were discovered to react via a radical-anion pathway, utilizing diisopropylethylamine as a transient source of electrons. For this electron transfer, [Ru(bipy)3]2+ was found to be an efficient photocatalyst. The anionic nature of the cyclization proved crucial: performing the reaction in acid rather than with a lithium counterion favored a non-cycloaddition pathway. [26] Zhao et al. likewise discovered that a still different cyclization pathway is available to chalcones with a samarium counterion. [27] Conversely, electron-rich styrenes were found to react via a radical-cation mechanism, utilizing methyl viologen or molecular oxygen as a transient electron sink. While [Ru(bipy)3]2+ proved to be a competent catalyst for intramolecular cyclizations using methyl viologen, it could not be used with molecular oxygen as an electron sink or for intermolecular cyclizations. For intermolecular cyclizations, Yoon et al. discovered that the more strongly oxidizing photocatalyst [Ru(bpm)3]2+ and molecular oxygen provided a catalytic system better suited to access the radical cation necessary for the cycloaddition to occur. [Ru(bpz)3]2+, a still more strongly oxidizing photocatalyst, proved problematic because, although it could catalyze the desired [2+2] cycloaddition, it was also strong enough to oxidize the cycloadduct and catalyze the retro-[2+2] reaction. This comparison of photocatalysts highlights the importance of tuning the redox properties of a photocatalyst to the reaction system, and demonstrates the value of polypyridyl compounds as ligands, owing to the ease with which they can be modified to adjust the redox properties of their complexes.
Photoredox-catalyzed [2+2] cycloadditions can also be effected with a triphenylpyrylium organic photoredox catalyst. [28]
In addition to the thermally forbidden [2+2] cycloaddition, photoredox catalysis can be applied to the [4+2] cyclization (the Diels-Alder reaction). Bis-enones, similar to the substrates used for the photoredox [2+2] cyclization but with a longer linker joining the two enone functional groups, undergo intramolecular radical-anion hetero-Diels-Alder reactions more rapidly than [2+2] cycloaddition. [29]
Similarly, electron-rich styrenes participate in intra- or intermolecular Diels-Alder cyclizations via a radical-cation mechanism. [30] [31] [Ru(bipy)3]2+ was a competent catalyst for intermolecular, but not intramolecular, Diels-Alder cyclizations. This photoredox-catalyzed Diels-Alder reaction allows cycloaddition between two electronically mismatched substrates. The normal-electron-demand Diels-Alder reaction calls for an electron-rich diene to react with an electron-poor olefin (the "dienophile"), while the inverse-electron-demand Diels-Alder reaction takes place in the opposite case, between an electron-poor diene and a very electron-rich dienophile. The photoredox case, since it proceeds by a different mechanism than the thermal Diels-Alder reaction, allows cycloaddition between an electron-rich diene and an electron-rich dienophile, giving access to new classes of Diels-Alder adducts.
The synthetic value of Yoon's photoredox-catalyzed styrene Diels-Alder reaction was demonstrated in the total synthesis of the natural product heitziamide A. [30] This synthesis shows that the thermal Diels-Alder reaction favors the undesired regioisomer, whereas the photoredox-catalyzed reaction gives the desired regioisomer in improved yield.
Organocatalysis is a subfield of catalysis that explores the potential of small organic molecules as catalysts, particularly for the enantioselective creation of chiral molecules. One strategy in this subfield is the use of chiral secondary amines to activate carbonyl compounds. In this case, amine condensation with the carbonyl compound generates a nucleophilic enamine. The chiral amine is designed so that one face of the enamine is sterically shielded and only the unshielded face is free to react. Despite the power of this approach for catalyzing the enantioselective functionalization of carbonyl compounds, certain valuable transformations, such as the catalytic enantioselective α-alkylation of aldehydes, remained elusive. The combination of organocatalysis and photoredox methods provides a catalytic solution to this problem. [32] In this approach to the α-alkylation of aldehydes, [Ru(bipy)3]2+ reductively fragments an activated alkyl halide, such as bromomalonate or phenacyl bromide, which can then add to the catalytically generated enamine in an enantioselective manner. The oxidized photocatalyst then oxidatively quenches the resulting α-amino radical to form an iminium ion, which hydrolyzes to give the functionalized carbonyl compound. This photoredox transformation was shown to be mechanistically distinct from another organocatalytic radical process termed singly occupied molecular orbital (SOMO) catalysis. SOMO catalysis employs superstoichiometric ceric ammonium nitrate (CAN) to oxidize the catalytically generated enamine to the corresponding radical cation, which can then add to a suitable coupling partner such as an allyl silane. This type of mechanism is excluded for the photocatalytic alkylation reaction because, whereas the enamine radical cation was observed to cyclize onto pendant olefins and to open cyclopropane radical clocks in SOMO catalysis, these structures were unreactive in the photoredox reaction.
This transformation extends to alkylations with other classes of activated alkyl halides of synthetic interest. In particular, the use of the photocatalyst Ir(dtbbpy)(ppy)2+ allows the enantioselective α-trifluoromethylation of aldehydes, while the use of Ir(ppy)3 allows the enantioselective coupling of aldehydes with electron-poor benzylic bromides. [33] [34] Zeitler et al. also investigated the productive merger of photoredox and organocatalytic methods to achieve enantioselective alkylation of aldehydes. [35] The same chiral imidazolidinone organocatalyst was used to form the enamine and introduce chirality; however, the organic photoredox catalyst Eosin Y was used rather than a ruthenium or iridium complex.
Direct β-arylation of saturated aldehydes and ketones can be effected through the combination of photoredox and organocatalytic methods. [36] The previous method for direct β-functionalization of a saturated carbonyl was a one-pot, two-step process, with both steps catalyzed by a secondary amine organocatalyst: stoichiometric oxidation of the aldehyde with IBX, followed by addition of an activated alkyl nucleophile to the β-position of the resulting enal. [37] The photoredox transformation, which like other photoredox processes takes place by a radical mechanism, is limited to the addition of highly electrophilic arenes to the β-position. The severe limitation on the scope of the arene component is due primarily to the need for an arene radical anion stable enough not to react directly with the enamine or the enamine radical cation. In the proposed mechanism, the activated photoredox catalyst is quenched oxidatively by an electron-deficient arene, such as 1,4-dicyanobenzene. The photocatalyst then oxidizes an enamine species, transiently generated by condensation of the aldehyde with a secondary amine cocatalyst, such as the optimal isopropyl benzylamine. The resulting enamine radical cation usually reacts as a 3 π-electron system, but because of the stability of the radical coupling partners, deprotonation at the β-methylene position gives rise to a 5 π-electron system with strong radical character at the newly accessed β-carbon. Although this reaction relies on a secondary amine organocatalyst to generate the enamine species that is oxidized in the proposed mechanism, no enantioselective variant of the reaction exists.
The development of this direct β-arylation of aldehydes led to related reactions for the β-functionalization of cyclic ketones. In particular, β-arylation of cyclic ketones has been achieved under similar reaction conditions but using azepane as the secondary amine cocatalyst. A photocatalytic "homo-aldol" reaction works for cyclic ketones, allowing coupling of the β-position of the ketone to the ipso carbon of aryl ketones such as benzophenone and acetophenone. [38] In addition to the azepane cocatalyst, this reaction requires the more strongly reducing photoredox catalyst Ir(ppy)3 and the addition of lithium hexafluoroarsenate (LiAsF6) to promote single-electron reduction of the aryl ketone.
The use of photoredox catalysis to generate reactive heteroatom-centered radicals was first explored in the 1990s. [ 39 ] [Ru(bipy) 3 ] 2+ was found to catalyze the fragmentation of tosylphenylselenide into phenylselenolate anion and tosyl radical, and a radical chain propagation mechanism allowed the addition of the tosyl and phenylseleno groups across the double bond of electron-rich alkyl vinyl ethers. Since phenylselenolate anion is readily oxidized to diphenyldiselenide, the low quantities of diphenyldiselenide observed were taken as an indication that the photoredox-catalyzed fragmentation of tosylphenylselenide was important only as an initiation step, and that most of the reactivity was due to a radical chain process.
Heteroatom additions to olefins include multicomponent oxy- and aminotrifluoromethylation reactions. [ 40 ] [ 41 ] These reactions use Umemoto's reagent, a sulfonium salt that serves as an electrophilic source of the trifluoromethyl group and that is precedented to react via a single-electron transfer pathway. Thus, single-electron reduction of Umemoto's reagent releases trifluoromethyl radical, which adds to the reactive olefin. Subsequently, single-electron oxidation of the alkyl radical generated by this addition produces a cation which can be trapped by water, an alcohol, or a nitrile. In order to achieve high levels of regioselectivity, this reactivity has been explored mainly for styrenes, which are biased towards formation of the benzylic radical intermediate.
Hydrotrifluoromethylation of styrenes and aliphatic alkenes can be effected with a mesityl acridinium organic photoredox catalyst and Langlois' reagent as the source of CF 3 radical. [ 42 ] In this reaction, it was found that trifluoroethanol and substoichiometric amounts of an aromatic thiol, such as methyl thiosalicylate, employed in tandem served as the best source of hydrogen radical to complete the catalytic cycle.
Intramolecular hydroetherifications and hydroaminations proceed with anti-Markovnikov selectivity. [ 43 ] [ 44 ] One mechanism invokes single-electron oxidation of the olefin, trapping of the radical cation by a pendant hydroxyl or amine functional group, and quenching of the resulting alkyl radical by H-atom transfer from a highly labile donor species. Extensions of this reactivity to intermolecular systems have resulted in i) a new synthetic route to complex tetrahydrofurans by a "polar-radical-crossover cycloaddition" (PRCC reaction) of an allylic alcohol with an olefin, and ii) the anti-Markovnikov addition of carboxylic acids to olefins. [ 45 ] [ 46 ]
Sulfoximidation of electron-rich arenes is enabled by photoredox catalysis. [ 47 ] | https://en.wikipedia.org/wiki/Photoredox_catalysis |
A photoresist (also known simply as a resist ) is a light-sensitive material used in several processes, such as photolithography and photoengraving , to form a patterned coating on a surface. This process is crucial in the electronics industry . [ 1 ]
The process begins by coating a substrate with a light-sensitive organic material. A patterned mask is then applied to the surface to block light, so that only unmasked regions of the material will be exposed to light. A solvent, called a developer, is then applied to the surface.
In the case of a positive photoresist, the photo-sensitive material is degraded by light and the developer will dissolve away the regions that were exposed to light, leaving behind a coating where the mask was placed.
In the case of a negative photoresist, the photosensitive material is strengthened (either polymerized or cross-linked) by light, and the developer will dissolve away only the regions that were not exposed to light, leaving behind a coating in areas where the mask was not placed.
A BARC (bottom anti-reflective coating) may be applied before the photoresist is applied, to prevent reflections from occurring under the photoresist and to improve the photoresist's performance at smaller semiconductor nodes. [ 2 ] [ 3 ] [ 4 ]
Conventional photoresists typically consist of three components: a resin (a binder that provides physical properties such as adhesion and chemical resistance), a sensitizer (which contains a photoactive compound), and a solvent (which keeps the resist liquid).
Positive: light weakens the resist and creates a hole.
Negative: light toughens the resist and creates an etch-resistant mask.
This behavior can be shown graphically by plotting the fraction of resist thickness remaining against the logarithm of exposure energy. A positive resist is completely removed at the final exposure energy, while a negative resist is completely hardened and insoluble by the end of the exposure energy range. The slope of this curve is the contrast ratio. Exposure energy is related to intensity and time by E = I·t.
A positive photoresist is a type of photoresist in which the portion exposed to light becomes soluble in the photoresist developer. The unexposed portion of the photoresist remains insoluble in the photoresist developer.
Some examples of positive photoresists are:
PMMA (polymethylmethacrylate) single-component
Two-component DQN resists:
A negative photoresist is a type of photoresist in which the portion of the photoresist that is exposed to light becomes insoluble in the photoresist developer. The unexposed portion of the photoresist is dissolved by the photoresist developer.
Modulation transfer function
The modulation transfer function (MTF) is the ratio of image intensity modulation to object intensity modulation; it is a parameter that indicates the imaging capability of an optical system.
The following table [ 6 ] is based on generalizations that are widely accepted in the microelectromechanical systems (MEMS) fabrication industry.
Based on the chemical structure of photoresists, they can be classified into three types: photopolymeric, photodecomposing, and photocrosslinking photoresist.
In lithography, decreasing the wavelength of the light source is the most efficient way to achieve higher resolution. [ 8 ] Photoresists are most commonly used at wavelengths in the ultraviolet spectrum or shorter (<400 nm). For example, diazonaphthoquinone (DNQ) absorbs strongly from approximately 300 nm to 450 nm. The absorption bands can be assigned to n-π* (S0–S1) and π-π* (S1–S2) transitions in the DNQ molecule. [ citation needed ] In the deep ultraviolet (DUV) spectrum, the π-π* electronic transition in benzene [ 9 ] or carbon double-bond chromophores appears at around 200 nm. [ citation needed ] Because more possible absorption transitions involving larger energy differences become accessible, absorption tends to increase with shorter wavelength, or larger photon energy . Photons with energies exceeding the ionization potential of the photoresist (which can be as low as 5 eV in condensed solutions) [ 10 ] can also release electrons which are capable of additional exposure of the photoresist. From about 5 eV to about 20 eV, photoionization of outer " valence band " electrons is the main absorption mechanism. [ 11 ] Above 20 eV, inner electron ionization and Auger transitions become more important. Photon absorption begins to decrease as the X-ray region is approached, as fewer Auger transitions between deep atomic levels are allowed at higher photon energies. The absorbed energy can drive further reactions and ultimately dissipates as heat; this heating is associated with outgassing and contamination from the photoresist.
Photoresists can also be exposed by electron beams, producing the same results as exposure by light. The main difference is that while photons are absorbed, depositing all their energy at once, electrons deposit their energy gradually and scatter within the photoresist during this process. As with high-energy wavelengths, many transitions are excited by electron beams, and heating and outgassing are still a concern. The dissociation energy for a C-C bond is 3.6 eV. Secondary electrons generated by primary ionizing radiation have energies sufficient to dissociate this bond, causing scission. In addition, low-energy electrons have a longer photoresist interaction time due to their lower speed; essentially, the electron has to come to rest with respect to the molecule to react most strongly, via dissociative electron attachment, in which it deposits all its kinetic energy at the molecule. [ 12 ] The resulting scission breaks the original polymer into segments of lower molecular weight, which are more readily dissolved in a solvent, or else releases other chemical species (acids) which catalyze further scission reactions (see the discussion on chemically amplified resists below). It is not common to select photoresists for electron-beam exposure; electron beam lithography usually relies on resists dedicated specifically to electron-beam exposure.
Physical, chemical, and optical properties of photoresists influence their selection for different processes. [ 13 ] The primary properties of a photoresist are its resolution capability, the process dose and focus latitudes required for curing, and its resistance to reactive ion etching. [ 14 ] : 966 [ 15 ] Other key properties are sensitivity, compatibility with tetramethylammonium hydroxide (TMAH), adhesion, environmental stability, and shelf life. [ 14 ] : 966 [ 15 ]
Photoresists used in production for DUV and shorter wavelengths require the use of chemical amplification to increase the sensitivity to the exposure energy. This is done in order to combat the larger absorption at shorter wavelengths. Chemical amplification is also often used in electron-beam exposures to increase the sensitivity to the exposure dose. In the process, acids released by the exposure radiation diffuse during the post-exposure bake step. These acids render the surrounding polymer soluble in the developer. A single acid molecule can catalyze many such ' deprotection ' reactions; hence, fewer photons or electrons are needed. [ 16 ] Acid diffusion is important not only to increase photoresist sensitivity and throughput, but also to limit line edge roughness due to shot noise statistics. [ 17 ] However, the acid diffusion length is itself a potential resolution limiter. [ 18 ] In addition, too much diffusion reduces chemical contrast, leading again to more roughness. [ 17 ]
The following reactions are an example of commercial chemically amplified photoresists in use today:
The e − represents a solvated electron , or a freed electron that may react with other constituents of the solution. It typically travels a distance on the order of many nanometers before being contained; [ 21 ] [ 22 ] such a large travel distance is consistent with the release of electrons through thick oxide in UV EPROM in response to ultraviolet light. This parasitic exposure would degrade the resolution of the photoresist; for 193 nm the optical resolution is the limiting factor anyway, but for electron beam lithography or EUVL it is the electron range that determines the resolution rather than the optics.
One very common positive photoresist used with the g-, h-, and i-lines from a mercury-vapor lamp is based on a mixture of diazonaphthoquinone (DNQ) and novolac resin (a phenol formaldehyde resin). DNQ inhibits the dissolution of the novolac resin, but upon exposure to light, the dissolution rate increases even beyond that of pure novolac. The mechanism by which unexposed DNQ inhibits novolac dissolution is not well understood, but is believed to be related to hydrogen bonding (or more exactly diazocoupling in the unexposed region). DNQ-novolac resists are developed by dissolution in a basic solution (usually 0.26 N tetramethylammonium hydroxide (TMAH) in water).
One very common negative photoresist is based on an epoxy-based oligomer. The common product name is SU-8 photoresist , and it was originally invented by IBM , but is now sold by Microchem and Gersteltec . One unique property of SU-8 is that it is very difficult to strip. As such, it is often used in applications where a permanent resist pattern (one that is not strippable, and can even be used in harsh temperature and pressure environments) is needed for a device. [ 23 ] SU-8 is prone to swelling at smaller feature sizes, which has led to the development of small-molecule alternatives that are capable of obtaining higher resolutions than SU-8. [ 24 ]
In 2016, OSTE Polymers were shown to possess a unique photolithography mechanism, based on diffusion-induced monomer depletion, which enables high photostructuring accuracy. The OSTE polymer material was originally invented at the KTH Royal Institute of Technology , but is now sold by Mercene Labs . Whereas the material has properties similar to those of SU8, OSTE has the specific advantage that it contains reactive surface molecules, which make this material attractive for microfluidic or biomedical applications. [ 13 ]
HSQ is a common negative resist for e-beam , but it is also useful for photolithography. It was originally invented by Dow Corning (1970), [ 25 ] and is now produced (as of 2017) by Applied Quantum Materials Inc. (AQM). Unlike other negative resists, HSQ is inorganic and metal-free; exposed HSQ therefore provides a low dielectric constant (low-k), Si-rich oxide. A comparative study against other photoresists was reported in 2015 (Dow Corning HSQ). [ 26 ]
Microcontact printing was described by the Whitesides Group in 1993. Generally, in this technique, an elastomeric stamp is used to generate two-dimensional patterns by printing the "ink" molecules onto the surface of a solid substrate. [ 27 ]
(Figures: Step 1 of microcontact printing — a scheme for the creation of a polydimethylsiloxane (PDMS) master stamp; Step 2 — a scheme of the inking and contact process of microcontact printing.)
The manufacture of printed circuit boards is one of the most important uses of photoresist. Photolithography allows the complex wiring of an electronic system to be rapidly, economically, and accurately reproduced as if run off a printing press. The general process is to apply photoresist, expose the image with ultraviolet rays, and then etch away the unwanted copper from the copper-clad substrate. [ 28 ]
Photoresist is also used for specialty photonics materials, microelectromechanical systems ( MEMS ), glass printed circuit boards, and other micropatterning tasks. Photoresist tends not to be etched by solutions with a pH greater than 3. [ 29 ]
This application, used mainly for silicon wafers and silicon integrated circuits , is the most developed of the technologies and the most specialized in the field. [ 30 ] | https://en.wikipedia.org/wiki/Photoresist |
Photorespiration (also known as the oxidative photosynthetic carbon cycle or C 2 cycle ) refers to a process in plant metabolism where the enzyme RuBisCO oxygenates RuBP , wasting some of the energy produced by photosynthesis. The desired reaction is the addition of carbon dioxide to RuBP ( carboxylation ), a key step in the Calvin–Benson cycle , but approximately 25% of reactions by RuBisCO instead add oxygen to RuBP ( oxygenation ), creating a product that cannot be used within the Calvin–Benson cycle. This process lowers the efficiency of photosynthesis, potentially lowering photosynthetic output by 25% in C 3 plants . [ 1 ] Photorespiration involves a complex network of enzyme reactions that exchange metabolites between chloroplasts , leaf peroxisomes and mitochondria .
The oxygenation reaction of RuBisCO is a wasteful process because 3-phosphoglycerate is created at a lower rate and higher metabolic cost compared with RuBP carboxylase activity . While photorespiratory carbon cycling eventually results in the formation of G3P , around 25% of the carbon entering the pathway is re-released as CO 2 , [ 2 ] and nitrogen is released as ammonia . Ammonia must then be detoxified at a substantial cost to the cell. Photorespiration also incurs a direct cost of one ATP and one NAD(P)H .
While it is common to refer to the entire process as photorespiration, technically the term refers only to the metabolic network which acts to rescue the products of the oxygenation reaction (phosphoglycolate).
Addition of molecular oxygen to ribulose-1,5-bisphosphate produces 3-phosphoglycerate (PGA) and 2-phosphoglycolate (2PG, or PG). PGA is the normal product of carboxylation, and productively enters the Calvin cycle . Phosphoglycolate, however, inhibits certain enzymes involved in photosynthetic carbon fixation (hence it is often said to be an 'inhibitor of photosynthesis'). [ 3 ] It is also relatively difficult to recycle: in higher plants it is salvaged by a series of reactions in the peroxisome , mitochondria , and again in the peroxisome, where it is converted into glycerate . Glycerate reenters the chloroplast by the same transporter that exports glycolate . Within the chloroplast , a cost of 1 ATP is associated with the phosphorylation of glycerate to 3-phosphoglycerate (PGA), which is then free to re-enter the Calvin cycle.
Several costs are associated with this metabolic pathway. One is the production of hydrogen peroxide in the peroxisome (associated with the conversion of glycolate to glyoxylate). Hydrogen peroxide is a dangerously strong oxidant which must be immediately split into water and oxygen by the enzyme catalase . The conversion of two 2-carbon glycine molecules to one 3-carbon serine in the mitochondria by the enzyme glycine decarboxylase is a key step, which releases CO 2 and NH 3 and reduces NAD + to NADH. Thus, one CO 2 molecule is produced for every three molecules of O 2 (two deriving from RuBisCO and one from peroxisomal oxidations). The assimilation of NH 3 occurs via the GS - GOGAT cycle, at a cost of one ATP and one NADPH.
Cyanobacteria have three possible pathways through which they can metabolise 2-phosphoglycolate. They are unable to grow if all three pathways are knocked out, despite having a carbon concentrating mechanism that should dramatically lower the rate of photorespiration (see below) . [ 4 ]
The oxidative photosynthetic carbon cycle reaction is catalyzed by RuBP oxygenase activity: RuBP + O 2 → 3-phosphoglycerate + 2-phosphoglycolate
During the catalysis by RuBisCO, an 'activated' intermediate is formed (an enediol intermediate) in the RuBisCO active site. This intermediate is able to react with either CO 2 or O 2 . It has been demonstrated that the specific shape of the RuBisCO active site acts to encourage reactions with CO 2 . Although there is a significant "failure" rate (~25% of reactions are oxygenation rather than carboxylation), this represents significant favouring of CO 2 , when the relative abundance of the two gases is taken into account: in the current atmosphere, O 2 is approximately 500 times more abundant, and in solution O 2 is 25 times more abundant than CO 2 . [ 5 ]
The ability of RuBisCO to discriminate between the two gases is known as its selectivity factor (or Srel), and it varies between species, [ 5 ] with angiosperms more efficient than other plants, but with little variation among the vascular plants . [ 6 ]
A suggested explanation of RuBisCO's inability to discriminate completely between CO 2 and O 2 is that it is an evolutionary relic: [ citation needed ] the early atmosphere in which primitive plants originated contained very little oxygen, so the early evolution of RuBisCO was not influenced by its ability to discriminate between O 2 and CO 2 . [ 6 ]
Photorespiration rates are affected by:
Factors which influence this include the atmospheric abundance of the two gases, the supply of the gases to the site of fixation (i.e. in land plants: whether the stomata are open or closed), and the length of the liquid phase (how far these gases have to diffuse through water in order to reach the reaction site). For example, when the stomata are closed to prevent water loss during drought , the CO 2 supply is limited, while O 2 production within the leaf continues. In algae (and plants which photosynthesise underwater), gases have to diffuse significant distances through water, which results in a decrease in the availability of CO 2 relative to O 2 . It has been predicted that the increase in ambient CO 2 concentration expected over the next 100 years may lower the rate of photorespiration in most plants by around 50% [ citation needed ] . However, at temperatures higher than the photosynthetic thermal optimum, the increases in turnover rate are not translated into increased CO 2 assimilation because of the decreased affinity of Rubisco for CO 2 . [ 7 ]
At higher temperatures RuBisCO is less able to discriminate between CO 2 and O 2 . This is because the enediol intermediate is less stable. Increasing temperatures also lower the solubility of CO 2 , thus lowering the concentration of CO 2 relative to O 2 in the chloroplast .
The vast majority of plants are C3, meaning they photorespire when necessary. Certain species of plants or algae have mechanisms to lower the uptake of molecular oxygen by RuBisCO. These are commonly referred to as Carbon Concentrating Mechanisms (CCMs), as they increase the concentration of CO 2 so that RuBisCO is less likely to produce glycolate through reaction with O 2 .
Biochemical CCMs concentrate carbon dioxide in one temporal or spatial region, through metabolite exchange. C 4 and CAM photosynthesis both use the enzyme Phosphoenolpyruvate carboxylase (PEPC) to add CO 2 to a 4-carbon sugar. PEPC is faster than RuBisCO, and more selective for CO 2 .
C 4 plants capture carbon dioxide in their mesophyll cells (using an enzyme called phosphoenolpyruvate carboxylase which catalyzes the combination of carbon dioxide with a compound called phosphoenolpyruvate (PEP)), forming oxaloacetate. This oxaloacetate is then converted to malate and is transported into the bundle sheath cells (site of carbon dioxide fixation by RuBisCO) where oxygen concentration is low to avoid photorespiration. Here, carbon dioxide is removed from the malate and combined with RuBP by RuBisCO in the usual way, and the Calvin cycle proceeds as normal. The CO 2 concentrations in the Bundle Sheath are approximately 10–20 fold higher than the concentration in the mesophyll cells. [ 6 ]
This ability to avoid photorespiration makes these plants more hardy than other plants in dry and hot environments, wherein stomata are closed and internal carbon dioxide levels are low. Under these conditions, photorespiration does occur in C 4 plants, but at a much lower level compared with C 3 plants in the same conditions. C 4 plants include sugar cane , corn (maize) , and sorghum .
CAM plants, such as cacti and succulent plants , also use the enzyme PEP carboxylase to capture carbon dioxide, but only at night. Crassulacean acid metabolism allows plants to conduct most of their gas exchange in the cooler night-time air, sequestering carbon in 4-carbon sugars which can be released to the photosynthesizing cells during the day. This allows CAM plants to minimize water loss ( transpiration ) by maintaining closed stomata during the day. CAM plants usually display other water-saving characteristics, such as thick cuticles, stomata with small apertures, and typically lose around 1/3 of the amount of water per CO 2 fixed. [ 8 ]
C 2 photosynthesis (also called glycine shuttle and photorespiratory CO 2 pump ) is a CCM that works by making use of – as opposed to avoiding – photorespiration. It performs carbon refixation by delaying the breakdown of photorespired glycine, so that the molecule is shuttled from the mesophyll into the bundle sheath . Once there, the glycine is decarboxylated in mitochondria as usual, releasing CO 2 and concentrating it to triple the usual concentration. [ 9 ]
Although C 2 photosynthesis is traditionally understood as an intermediate step between C 3 and C 4 , a wide variety of plant lineages do end up in the C 2 stage without further evolving, showing that it is an evolutionary steady state of its own. C 2 may be easier to engineer into crops, as the phenotype requires fewer anatomical changes to produce. [ 9 ]
There have been some reports of algae operating a biochemical CCM: shuttling metabolites within single cells to concentrate CO 2 in one area. This process is not fully understood. [ 10 ]
This type of carbon-concentrating mechanism (CCM) relies on a contained compartment within the cell into which CO 2 is shuttled, and where RuBisCO is highly expressed. In many species, biophysical CCMs are only induced under low carbon dioxide concentrations. Biophysical CCMs are evolutionarily more ancient than biochemical CCMs. There is some debate as to when biophysical CCMs first evolved, but it is likely to have been during a period of low carbon dioxide, after the Great Oxygenation Event (2.4 billion years ago). Low CO 2 periods occurred around 750, 650, and 320–270 million years ago. [ 11 ]
In nearly all species of eukaryotic algae ( Chloromonas being one notable exception), upon induction of the CCM, ~95% of RuBisCO is densely packed into a single subcellular compartment: the pyrenoid . Carbon dioxide is concentrated in this compartment using a combination of CO 2 pumps, bicarbonate pumps, and carbonic anhydrases . The pyrenoid is not a membrane-bound compartment but is found within the chloroplast, often surrounded by a starch sheath (which is not thought to serve a function in the CCM). [ 12 ]
Certain species of hornwort are the only land plants that are known to have a biophysical CCM involving concentration of carbon dioxide within pyrenoids in their chloroplasts. [ 13 ]
Cyanobacterial CCMs are similar in principle to those found in eukaryotic algae and hornworts, but the compartment into which carbon dioxide is concentrated has several structural differences. Instead of the pyrenoid, cyanobacteria contain carboxysomes , which have a protein shell, and linker proteins packing RuBisCO inside with a very regular structure.
Cyanobacterial CCMs are much better understood than those found in eukaryotes , partly due to the ease of genetic manipulation of prokaryotes .
Lowering photorespiration may not result in increased growth rates for plants. Photorespiration may be necessary for the assimilation of nitrate from soil. Thus, a lowering in photorespiration by genetic engineering or because of increasing atmospheric carbon dioxide may not benefit plants as has been proposed. [ 14 ] Several physiological processes may be responsible for linking photorespiration and nitrogen assimilation. Photorespiration increases availability of NADH, which is required for the conversion of nitrate to nitrite . Certain nitrite transporters also transport bicarbonate , and elevated CO 2 has been shown to suppress nitrite transport into chloroplasts. [ 15 ] However, in an agricultural setting, replacing the native photorespiration pathway with an engineered synthetic pathway to metabolize glycolate in the chloroplast resulted in a 40 percent increase in crop growth. [ 16 ] [ 17 ] [ 18 ]
Although photorespiration is much lower in C 4 species, it is still an essential pathway – mutants without functioning 2-phosphoglycolate metabolism cannot grow in normal conditions. One mutant was shown to rapidly accumulate glycolate. [ 19 ]
Although the functions of photorespiration remain controversial, [ 20 ] it is widely accepted that this pathway influences a wide range of processes, from bioenergetics, photosystem II function, and carbon metabolism to nitrogen assimilation and respiration. The oxygenase reaction of RuBisCO may prevent CO 2 depletion near its active sites [ 21 ] and contributes to the regulation of CO 2 concentration in the atmosphere. [ 22 ] The photorespiratory pathway is a major source of hydrogen peroxide ( H 2 O 2 ) in photosynthetic cells. Through H 2 O 2 production and pyrimidine nucleotide interactions, photorespiration makes a key contribution to cellular redox homeostasis. In so doing, it influences multiple signalling pathways, in particular, those that govern plant hormonal responses controlling growth, environmental and defense responses, and programmed cell death. [ 20 ]
It has been postulated that photorespiration may function as a "safety valve", [ 23 ] preventing the excess of reductive potential coming from an overreduced NADPH -pool from reacting with oxygen and producing free radicals (oxidants), as these can damage the metabolic functions of the cell by subsequent oxidation of membrane lipids, proteins or nucleotides. The mutants deficient in photorespiratory enzymes are characterized by a high redox level in the cell, [ 24 ] impaired stomatal regulation, [ 25 ] and accumulation of formate . [ 26 ] | https://en.wikipedia.org/wiki/Photorespiration |
Photosensitive glass , also called photostructurable glass ( PSG ) or photomachinable glass , is a glass in the lithium - silicate family of glasses onto which images can be etched using shortwave radiation , such as ultraviolet light. [ 1 ] Photosensitive glass was first discovered by S. Donald Stookey in 1937. [ 2 ] [ 3 ] [ 4 ]
When the glass is exposed to UV light with wavelengths between 280 and 320 nm, a latent image is formed. The glass remains transparent at this stage, but its ability to absorb UV light increases. This increased absorption is only detectable using UV transmission spectroscopy and is caused by an oxidation–reduction reaction that occurs inside the glass during exposure. In this reaction, cerium ions are oxidized to a more stable state, and silver ions are reduced to silver. [ 5 ]
The latent image captured in the glass is made visible by heating. [ 6 ] [ 2 ] [ 4 ] This heat treatment is done by raising the temperature to about 500 °C to allow the oxidation–reduction reaction to form silver nanoclusters. Following this, the temperature is raised to 550–560 °C, and lithium metasilicate (Li 2 SiO 3 ) forms on the silver nanoclusters. This material forms in the crystalline phase. [ 6 ]
The lithium metasilicate in the exposed regions of the glass can be etched by hydrofluoric acid (HF). This forms glass microstructures with a surface roughness in the range of 5 μm [ 1 ] down to 0.7 μm, [ 6 ] resulting in a three-dimensional image of the mask. | https://en.wikipedia.org/wiki/Photosensitive_glass |
Photosensitizers are light absorbers that alter the course of a photochemical reaction . They usually are catalysts . [ 1 ] They can function by many mechanisms; sometimes they donate an electron to the substrate, and sometimes they abstract a hydrogen atom from the substrate. At the end of this process, the photosensitizer returns to its ground state , where it remains chemically intact, poised to absorb more light. [ 2 ] [ 3 ] [ 4 ] One branch of chemistry which frequently utilizes photosensitizers is polymer chemistry , using photosensitizers in reactions such as photopolymerization , photocrosslinking, and photodegradation . [ 5 ] Photosensitizers are also used to generate prolonged excited electronic states in organic molecules with uses in photocatalysis , photon upconversion and photodynamic therapy . Generally, photosensitizers absorb electromagnetic radiation in the infrared , visible , and ultraviolet regions and transfer the absorbed energy to neighboring molecules. This absorption of light is made possible by photosensitizers' large de-localized π-systems , which lower the energy gap between the HOMO and LUMO orbitals and thereby promote photoexcitation . While many photosensitizers are organic or organometallic compounds, there are also examples of using semiconductor quantum dots as photosensitizers. [ 6 ]
Photosensitizers absorb light (hν) and transfer the energy from the incident light to another nearby molecule, either directly or through a chemical reaction. Upon absorbing photons of radiation from incident light, photosensitizers are promoted to an excited singlet state . The single electron in the excited singlet state then flips its intrinsic spin state via intersystem crossing to give an excited triplet state . Triplet states typically have longer lifetimes than excited singlets. The prolonged lifetime increases the probability of interacting with other molecules nearby. Photosensitizers exhibit varying efficiencies of intersystem crossing at different wavelengths of light, based on the internal electronic structure of the molecule. [ 2 ] [ 7 ]
For a molecule to be considered a photosensitizer:
It is important to differentiate photosensitizers from other photochemical interactions including, but not limited to, photoinitiators , photocatalysts , photoacids and photopolymerization . Photosensitizers utilize light to enact a chemical change in a substrate; after the chemical change, the photosensitizer returns to its initial state, remaining chemically unchanged from the process. Photoinitiators absorb light to become a reactive species, commonly a radical or an ion , where it then reacts with another chemical species. These photoinitiators are often completely chemically changed after their reaction. Photocatalysts accelerate chemical reactions which rely upon light. While some photosensitizers may act as photocatalysts, not all photocatalysts may act as photosensitizers. Photoacids (or photobases) are molecules which become more acidic (or basic) upon the absorption of light. Photoacids increase in acidity upon absorbing light and thermally reassociate back into their original form upon relaxing. Photoacid generators undergo an irreversible change to become an acidic species upon light absorption. Photopolymerization can occur in two ways. Photopolymerization can occur directly wherein the monomers absorb the incident light and begin polymerizing, or it can occur through a photosensitizer-mediated process where the photosensitizer absorbs the light first before transferring energy into the monomer species. [ 8 ] [ 9 ]
Photosensitizers have existed within natural systems for as long as chlorophyll and other light-sensitive molecules have been a part of plant life, but studies of photosensitizers began as early as the 1900s, when scientists observed photosensitization in biological substrates and in the treatment of cancer. Mechanistic studies related to photosensitizers began with scientists analyzing the results of chemical reactions where photosensitizers photo-oxidized molecular oxygen into peroxide species. The results were understood by calculating quantum efficiencies and fluorescent yields at varying wavelengths of light and comparing these results with the yield of reactive oxygen species . However, it was not until the 1960s that the electron-donating mechanism was confirmed through various spectroscopic methods, including reaction-intermediate studies and luminescence studies. [ 8 ] [ 10 ] [ 11 ]
The term photosensitizer did not appear in scientific literature until the 1960s. Instead, scientists would refer to photosensitizers as sensitizers used in photo-oxidation or photo-oxygenation processes. Studies during this time period involving photosensitizers utilized organic photosensitizers, consisting of aromatic hydrocarbon molecules, which could facilitate synthetic chemistry reactions. However, by the 1970s and 1980s, photosensitizers gained attention in the scientific community for their role within biological processes and enzymatic processes. [ 12 ] [ 13 ] Currently, photosensitizers are studied for their contributions to fields such as energy harvesting, photoredox catalysis in synthetic chemistry, and cancer treatment. [ 11 ] [ 14 ]
There are two main pathways for photosensitized reactions. [ 15 ]
In Type I photosensitized reactions, the photosensitizer is excited by a light source into a triplet state. The excited, triplet state photosensitizer then reacts with a substrate molecule which is not molecular oxygen to both form a product and reform the photosensitizer. Type I photosensitized reactions result in the photosensitizer being quenched by a different chemical substrate than molecular oxygen. [ 2 ] [ 16 ]
In Type II photosensitized reactions, the photosensitizer is excited by a light source into a triplet state. The excited photosensitizer then reacts with a ground-state, triplet oxygen molecule. This excites the oxygen molecule into the singlet state, making it a reactive oxygen species . Upon excitation, the singlet oxygen molecule reacts with a substrate to form a product. Type II photosensitized reactions result in the photosensitizer being quenched by a ground-state oxygen molecule, which then goes on to react with a substrate to form a product. [ 2 ] [ 17 ] [ 18 ] [ 19 ]
Photosensitizers can be placed into three generalized domains based on their molecular structure: organometallic photosensitizers, organic photosensitizers, and nanomaterial photosensitizers.
Organometallic photosensitizers contain a metal atom chelated to at least one organic ligand . The photosensitizing capacities of these molecules result from electronic interactions between the metal and the ligand(s). Popular electron-rich metal centers for these complexes include iridium , ruthenium , and rhodium . These metals, as well as others, are common metal centers for photosensitizers due to their highly filled d-orbitals , or high d-electron counts, which promote metal-to-ligand charge transfer to π-electron-accepting ligands. This interaction between the metal center and the ligand leads to a large continuum of orbitals within both the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO), which allows excited electrons to switch multiplicities via intersystem crossing. [ 20 ]
While many organometallic photosensitizer compounds are made synthetically, there also exist naturally occurring, light-harvesting organometallic photosensitizers. Some relevant naturally occurring examples include Chlorophyll A and Chlorophyll B . [ 20 ] [ 21 ]
Organic photosensitizers are carbon-based molecules which are capable of photosensitizing. The earliest studied photosensitizers were aromatic hydrocarbons which absorbed light in the presence of oxygen to produce reactive oxygen species. [ 22 ] These organic photosensitizers are made up of highly conjugated systems which promote electron delocalization . Due to their high conjugation, these systems have a smaller gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) as well as a continuum of orbitals within the HOMO and LUMO. The smaller band gap and the continuum of orbitals in both the conduction band and the valence band allow for these materials to enter their triplet state more efficiently, making them better photosensitizers. Some notable organic photosensitizers which have been studied extensively include benzophenones, methylene blue, rose Bengal, flavins, pterins [ 23 ] and others. [ 24 ]
A wide variety of nanomaterials function as photosensitizers.
Monatomic gaseous mercury (considered as the smallest possible cluster compound ) is a photosensitizer catalyzing radical dehydrogenation. [ 25 ]
Colloidal quantum dots are nanoscale semiconductor materials with highly tunable optical and electronic properties. Quantum dots photosensitize via the same mechanism as organometallic photosensitizers and organic photosensitizers, but their nanoscale properties allow for greater control in distinctive aspects. Some key advantages to the use of quantum dots as photosensitizers includes their small, tunable band gap which allows for efficient transitions to the triplet state, and their insolubility in many solvents which allows for easy retrieval from a synthetic reaction mixture. [ 18 ]
Nanorods , similar in size to quantum dots, have tunable optical and electronic properties. Based on their size and material composition, it is possible to tune the maximum absorption peak for nanorods during their synthesis. This control has led to the creation of photosensitizing nanorods. [ 26 ]
Photodynamic therapy utilizes Type II photosensitizers to harvest light to degrade tumors or cancerous masses. This effect was first observed in 1907 by Hermann von Tappeiner, who utilized eosin to treat skin tumors. [ 11 ] The photodynamic process is predominantly a noninvasive technique wherein photosensitizers are introduced into a patient so that they may accumulate on the tumor or cancer. When the photosensitizer reaches the tumor or cancer, wavelength-specific light is shined on the outside of the patient's affected area. This light (preferably in the near-infrared region, as this allows penetration of the skin without acute toxicity) excites the photosensitizer's electrons into the triplet state. Upon excitation, the photosensitizer begins transferring energy to neighboring ground-state triplet oxygen to generate excited singlet oxygen . The resulting excited oxygen species then selectively degrades the tumor or cancerous mass. [ 26 ] [ 27 ] [ 17 ]
In February 2019, medical scientists announced that iridium attached to albumin , creating a photosensitized molecule, can penetrate cancer cells and, after being irradiated with light (a process called photodynamic therapy ), destroy the cancer cells. [ 28 ] [ 29 ]
In 1972, scientists discovered that chlorophyll could absorb sunlight and transfer energy into electrochemical cells. [ 30 ] This discovery eventually led to the use of photosensitizers as sunlight-harvesting materials in solar cells, mainly through the use of photosensitizer dyes. Dye-sensitized solar cells utilize these dyes to absorb photons from sunlight and transfer energy-rich electrons to the neighboring semiconductor material to generate electrical energy output. These dyes act as dopants to semiconductor surfaces, which allows the transfer of light energy from the photosensitizer into electronic energy within the semiconductor. These photosensitizers are not limited to dyes; they may take the form of any photosensitizing structure, depending on the semiconductor material to which they are attached. [ 16 ] [ 14 ] [ 31 ] [ 32 ]
Via the absorption of light, photosensitizers can utilize triplet-state transfer to reduce small molecules, such as water, to generate hydrogen gas. To date, photosensitizers have generated hydrogen gas by splitting water molecules only at a small, laboratory scale. [ 33 ] [ 34 ]
In the early 20th century, chemists observed that various aromatic hydrocarbons in the presence of oxygen could absorb wavelength specific light to generate a peroxide species. [ 12 ] This discovery of oxygen's reduction by a photosensitizer led to chemists studying photosensitizers as photoredox catalysts for their roles in the catalysis of pericyclic reactions and other reduction and oxidation reactions. Photosensitizers in synthetic chemistry allow for the manipulation of electronic transitions within molecules through an externally applied light source. These photosensitizers used in redox chemistry may be organic, organometallic, or nanomaterials depending on the physical and spectral properties required for the reaction. [ 16 ] [ 24 ]
Photosensitizers that are readily incorporated into the external tissues can increase the rate at which reactive oxygen species are generated upon exposure to UV light (such as UV-containing sunlight). Some photosensitizing agents, such as St. John's Wort, appear to increase the incidence of inflammatory skin conditions in animals and have been observed to slightly reduce the minimum tanning dose in humans. [ 35 ] [ 36 ]
Some examples of photosensitizing medications (both investigatory and approved for human use) are: | https://en.wikipedia.org/wiki/Photosensitizer |
The Photostat machine , or Photostat , was an early projection photocopier created in the first decade of the 1900s by the Commercial Camera Company, which became the Photostat Corporation. The "Photostat" name, which was originally a trademark of the company, became genericized , and was often used to refer to similar machines produced by the Rectigraph Company or to any copy made by any such machine .
The growth of business during the Industrial Revolution created the need for a more efficient means of transcription than hand copying. Carbon paper was first used in the early 19th century. By the late 1840s copying presses were used to copy outgoing correspondence. One by one, other methods appeared. These included the "manifold writer", developed from Christoph Scheiner 's pantograph and used by Mark Twain ; copying baths; copying books; and roller copiers. Among the most significant of them was the Blue process in the early 1870s, which was mainly used to make blueprints of architectural and engineering drawings. Stencil duplicators (more commonly known as "Mimeograph machines") surfaced in 1874, and the Cyclostyle in 1891. All were manual and most involved messy fluids.
George C. Beidler of Oklahoma City founded the Rectigraph Company in 1906 or 1907, producing the first photographic copying machines; he later moved the company to Rochester, New York in 1909 to be closer to the Haloid Company , his main source of photographic paper and chemicals.
The Rectigraph Company was acquired by the Haloid Company in 1935. In 1948 Haloid purchased the rights to produce Chester Carlson 's xerographic equipment and in 1958 the firm was reorganized as Haloid Xerox, Inc., which in 1961 was renamed Xerox Corporation. [ 1 ] Haloid continued selling Rectigraph machines into the 1960s.
The Photostat brand machine, differing in operation from the Rectigraph but with the same purpose of photographic copying of documents, was invented in Kansas City by Oscar T. Gregory in 1907. A directory of the city from 1909 shows his "Gregory Commercial Camera Company". By 1910, Gregory had co-filed a patent application with Norman W. Carkhuff, of the photography department of the United States Geological Survey , for a specific type of photographic camera, for quickly and easily photographing small objects, with a further object "to provide a camera of the type known as 'copying cameras' that will be simple and convenient [...]" [ 2 ] In 1911, the Commercial Camera Company of Providence, Rhode Island , was formed. By 1912, Photostat brand machines were in use, as evidenced by a record of one at the New York Public Library. By 1913, advertisements described the Commercial Camera Company as headquartered at Rochester and with a licensing and manufacturing relationship with Eastman Kodak . [ 3 ] The pair filed another U.S. patent application in 1913 further developing their ideas. [ 4 ] By 1920, distribution in various European markets was handled by the Alfred Herbert companies . [ 5 ] The Commercial Camera Company apparently became the Photostat Corporation around 1921, for "Commercial Camera Company" is described as a former name of Photostat Corporation in a 1922 issue of Patent and Trade Mark Review . [ 6 ] For at least 40 years the brand was widespread enough that its name was genericized by the public.
The Photostat Corporation was eventually absorbed by Itek in 1963.
Both Rectigraph and Photostat machines consisted of a large camera that photographed documents or papers and exposed an image directly onto rolls of sensitized photographic paper that were about 350 feet (110 m) long. A prism was placed in front of the lens to reverse the image. After a 10-second exposure, the paper was directed to developing and fixing baths, then either air- or machine-dried. Since the print was directly exposed, without the use of an intermediate film, the result was a negative print. A typical typewritten document would appear on the photostat print with a black background and white letters. Thanks to the prism, the text would remain legible. Producing photostats took about two minutes in total. The result could, in turn, be photostated again to make any number of positive prints.
The photographic prints produced by such machines are commonly referred to as "photostats" or "photostatic copies". The verbs "photostat", "photostatted", and "photostatting" refer to making copies on such a machine in the same way that the trademarked name " Xerox " was later used to refer to any copy made by means of electrostatic photocopying . People who operated these machines were known as photostat operators.
It was the expense and inconvenience of photostats that drove Chester Carlson to study electrophotography. In the mid-1940s Carlson sold the rights to his invention – which became known as xerography – to the Haloid Company and photostatting soon sank into history.
| https://en.wikipedia.org/wiki/Photostat_machine |
The photostationary state of a reversible photochemical reaction is the equilibrium chemical composition under a specific kind of electromagnetic irradiation (usually a single wavelength of visible or UV radiation ). [ 1 ]
It is a property of particular importance in photochromic compounds, often used as a measure of their practical efficiency and usually quoted as a ratio or percentage.
The position of the photostationary state is primarily a function of the irradiation parameters, the absorbance spectra of the chemical species, and the quantum yields of the reactions. The photostationary state can be very different from the composition of a mixture at thermodynamic equilibrium. As a consequence, photochemistry can be used to produce compositions that are "contra-thermodynamic".
For instance, although cis -stilbene is "uphill" from trans- stilbene in a thermodynamic sense, irradiation of trans -stilbene results in a mixture that is predominantly the cis isomer . [ 2 ] As an extreme example, irradiation of benzene at 237 to 254 nm results in formation of benzvalene , an isomer of benzene that is 71 kcal/mol higher in energy than benzene itself. [ 3 ] [ 4 ]
Absorption of radiation by reactants of a reaction at equilibrium increases the rate of forward reaction without directly affecting the rate of the reverse reaction. [ 5 ]
The rate of a photochemical reaction is proportional to the absorption cross section of the reactant with respect to the excitation source (σ), the quantum yield of the reaction (Φ), and the intensity of the irradiation. In a reversible photochemical reaction between compounds A and B, there will therefore be a "forwards" reaction A → B at a rate proportional to σ_A · Φ_(A→B) and a "backwards" reaction B → A at a rate proportional to σ_B · Φ_(B→A). The ratio of the rates of the forward and backwards reactions determines where the equilibrium lies; at the photostationary state the two rates balance, so the composition is given by:

\frac{[B]}{[A]} = \frac{\sigma_A \, \Phi_{A \rightarrow B}}{\sigma_B \, \Phi_{B \rightarrow A}}
If (as is always the case to some extent) the compounds A and B have different absorption spectra , then there may exist wavelengths of light where σ_A is high and σ_B is low. Irradiation at these wavelengths will provide photostationary states that contain mostly B. Likewise, wavelengths that give photostationary states of predominantly A may exist. This is particularly likely in compounds such as some photochromics, where A and B have entirely different absorption bands . Compounds that may be readily switched in this way find utility in devices such as molecular switches and optical data storage . | https://en.wikipedia.org/wiki/Photostationary_state |
Photostimulation is the use of light to artificially activate biological compounds, cells , tissues , or even whole organisms . Photostimulation can be used to noninvasively probe various relationships between different biological processes, using only light. In the long run, photostimulation has the potential for use in different types of therapy, such as for migraine headaches . Additionally, photostimulation may be used for the mapping of neuronal connections between different areas of the brain by "uncaging" signaling biomolecules with light. [ 1 ] Therapy with photostimulation has been called light therapy , phototherapy, or photobiomodulation.
Photostimulation methods fall into two general categories: one set of methods uses light to uncage a compound that then becomes biochemically active, binding to a downstream effector. For example, uncaging glutamate is useful for finding excitatory connections between neurons, since the uncaged glutamate mimics the natural synaptic activity of one neuron impinging upon another. The other major photostimulation method is the use of light to activate a light-sensitive protein such as rhodopsin , which can then excite the cell expressing the opsin.
Scientists have long postulated the need to control one type of cell while leaving those surrounding it untouched and unstimulated. Well-known scientific advancements such as the use of electrical stimuli and electrodes have succeeded in neural activation but fail to achieve the aforementioned goal because of their imprecision and inability to distinguish between different cell types. [ 2 ] The use of optogenetics (artificial cell activation via the use of light stimuli) is unique in its ability to deliver light pulses in a precise and timely fashion. Optogenetics is somewhat bidirectional in its ability to control neurons. Channels can be either depolarized or hyperpolarized depending on the wavelength of light that targets them. [ 3 ] For instance, the technique can be applied to channelrhodopsin cation channels to initiate neuronal depolarization and eventually activation upon illumination. Conversely, activity inhibition of a neuron can be triggered via the use of optogenetics as in the case of the chloride pump halorhodopsin which functions to hyperpolarize neurons. [ 3 ]
Before optogenetics can be performed, however, the subject at hand must express the targeted channels. Natural and abundant in microbes, rhodopsins—including bacteriorhodopsin, halorhodopsin and channelrhodopsin—each have a different characteristic action spectrum, which describes the set of colors and wavelengths that they respond to and that drive their function. [ 4 ]
It has been shown that channelrhodopsin-2 , a monolithic protein containing a light sensor and a cation channel, provides electrical stimulation of appropriate speed and magnitude to activate neuronal spike firing. Recently, photoinhibition , the inhibition of neural activity with light, has become feasible with the application of molecules such as the light-activated chloride pump halorhodopsin to neural control. Together, blue-light activated channelrhodopsin-2 and the yellow light-activated chloride pump halorhodopsin enable multiple-color, optical activation and silencing of neural activity. (See also Photobiomodulation )
A caged protein is a protein that is activated in the presence of a stimulating light source. In most cases, photo-uncaging is the technique of revealing the active region of a compound through photolysis of the shielding molecule (the 'cage'). Uncaging the protein requires light of an appropriate wavelength, intensity, and timing. Achieving this is possible because optical fibers can be modified to deliver specific amounts of light. In addition, short bursts of stimulation allow results similar to the physiological norm. The steps of photostimulation are time independent in that protein delivery and light activation can be done at different times; the two steps depend on each other only for activation of the protein. [ 5 ]
Some proteins are innately photosensitive and function in the presence of light. Proteins known as opsins form the crux of the photosensitive proteins. These proteins are often found in the eye. In addition, many of these proteins function as ion channels and receptors . One example is that when light of a certain wavelength strikes certain channels, the blockage in the pore is relieved, allowing ion transduction. [ 6 ]
To uncage molecules, a photolysis system is required to cleave the covalent bond . An example system can consist of a light source (generally a laser or a lamp), a controller for the amount of light that enters, a guide for the light, and a delivery system. Often, the system is designed to strike a balance between diffuse light, which may cause additional unwanted photolysis, and light attenuation; both are significant problems with a photolysis system. [ 5 ]
The idea of photostimulation as a method of controlling biomolecule function was developed in the 1970s. Two researchers, Walther Stoeckenius and Dieter Oesterhelt, discovered an ion pump known as bacteriorhodopsin , which functions in the presence of light, in 1971. [ 7 ] In 1978, J.F. Hoffman invented the term "caging". Unfortunately, this term caused some confusion among scientists, because it is often used to describe a molecule trapped within another molecule. It could also be confused with the "cage effect" in the recombination of radicals. Therefore, some authors decided to use the term "light-activated" instead of "caging". Both terms are currently in use. The first "caged molecule" synthesized by Hoffman et al. at Yale was the caged precursor of ATP, derivative 1. [ 8 ]
Photostimulation is notable for its temporal precision, which may be used to obtain an accurate starting time of activation of caged effectors. In conjunction with caged inhibitors , the role of biomolecules at specific timepoints in an organism's lifecycle may be studied. A caged inhibitor of N-ethylmaleimide sensitive fusion protein (NSF), a key mediator of synaptic transmission, has been used to study the time dependency of NSF. [ 9 ] Several other studies have elicited action potential firing through the use of caged neurotransmitters such as glutamate. [ 10 ] [ 11 ] Caged neurotransmitters, including photolabile precursors of glutamate , dopamine , serotonin , and GABA , are commercially available. [ 12 ]
Signaling during mitosis has been studied using reporter molecules with a caged fluorophore , which is not phosphorylated if photolysis has not occurred. [ 13 ] The advantage of this technique is that it provides a “snapshot” of kinase activity at specific timepoints rather than recording all activity since the reporter's introduction.
Calcium ions play an important signaling role, and controlling their release with caged channels has been extensively studied. [ 14 ] [ 15 ] [ 16 ]
Unfortunately, not all organisms produce or hold sufficient amounts of opsins. Thus, the opsin gene must be introduced into target neurons if it is not already present in the organism of study. The addition and expression of this gene is sufficient for the use of optogenetics. Possible means of achieving this include the construction of transgenic lines containing the gene or acute gene transfer to a specific area or region within an individual. These methods are known as germline transgenesis and somatic gene delivery, respectively. [ 17 ]
Optogenetics has shown significant promise in the treatment of a series of neurological disorders such as Parkinson's disease and epilepsy. Optogenetics has the potential to facilitate the manipulation and targeting of specific cell types or neural circuits, characteristics that are lacking in current brain stimulation techniques such as DBS. At this point, the use of optogenetics in treating neural diseases has only been practically implemented in the field of neurobiology, to reveal more about the mechanisms of specific disorders. Before the technique can be implemented to directly treat these disorders, related fields such as gene therapy, opsin engineering, and optoelectronics must also advance. [ 18 ] | https://en.wikipedia.org/wiki/Photostimulation |
Photosymbiosis is a type of symbiosis where one of the organisms is capable of photosynthesis . [ 1 ]
Examples of photosymbiotic relationships include those in lichens , plankton , ciliates , and many marine organisms including corals , fire corals , giant clams , and jellyfish . [ 2 ] [ 3 ] [ 4 ]
Photosymbiosis is important in the development, maintenance, and evolution of terrestrial and aquatic ecosystems , for example in biological soil crusts , soil formation , supporting highly diverse microbial populations in soil and water , and coral reef growth and maintenance. [ 5 ] [ 6 ]
When one organism lives within another symbiotically, it is called endosymbiosis . Photosymbiotic relationships in which microalgae and/or cyanobacteria live within a heterotrophic host organism are believed to have led to eukaryotes acquiring photosynthesis and to the evolution of plants . [ 7 ] [ 8 ]
Lichens represent an association between one or more fungal mycobionts and one or more photosynthetic algal or cyanobacterial photobionts. The mycobiont provides protection from predation and desiccation , while the photobiont provides energy in the form of fixed carbon. Cyanobacterial partners are also capable of fixing nitrogen for the fungal partner. [ 9 ] Recent work suggests that non-photosynthetic bacterial microbiomes associated with lichens may also have functional significance to lichens. [ 10 ]
Most mycobiont partners derive from the ascomycetes , and the largest class of lichenized fungi is Lecanoromycetes . [ 11 ] The vast majority of lichens derive photobionts from Chlorophyta (green algae). [ 9 ] The co-evolutionary dynamics between mycobionts and photobionts are still unclear, as many photobionts are capable of free-living, and many lichenized fungi display traits adaptive to lichenization, such as the capacity to withstand higher levels of reactive oxygen species (ROS), the conversion of sugars to polyols that help withstand desiccation, and the downregulation of fungal virulence . However, it is still unclear whether these are derived or ancestral traits. [ 9 ]
Currently described photobiont species number about 100, far less than the 19,000 described species of fungal mycobionts, and factors such as geography can predominate over mycobiont preference. [ 12 ] [ 13 ] Phylogenetic analyses in lichenized fungi have suggested that, throughout evolutionary history, there has been repeated loss of photosymbionts, switching of photosymbionts, and independent lichenization events in previously unrelated fungal taxa. [ 11 ] [ 14 ] Loss of lichenization has likely led to the coexistence of non-lichenized fungi and lichenized fungi in lichens. [ 14 ]
Sponges (phylum Porifera) have a large diversity of photosymbiont associations. Photosymbiosis is found in four classes of Porifera ( Demospongiae , Hexactinellida , Homoscleromorpha , and Calcarea ), and known photosynthetic partners are cyanobacteria, chloroflexi , dinoflagellates , and red ( Rhodophyta ) and green (Chlorophyta) algae. Relatively little is known about the evolutionary history of sponge photosymbiosis due to a lack of genomic data. [ 15 ] However, it has been shown that photosymbionts are acquired vertically (transmitted from parent to offspring) and/or horizontally (acquired from the environment). [ 16 ] Photosymbionts can supply up to half of the host sponge's respiratory demands and can support sponges during times of nutrient stress. [ 17 ]
Members of certain classes in phylum Cnidaria are known for photosymbiotic partnerships. Members of corals (Class Anthozoa ) in the orders Hexacorallia and Octocorallia form well-characterized partnerships with the dinoflagellate genus Symbiodinium . Some jellyfish (class Scyphozoa ) in the genus Cassiopea (upside-down jellyfish) also possess Symbiodinium. Certain species in the genus Hydra (class Hydrozoa ) also harbor green algae and form a stable photosymbiosis. [ 15 ]
The evolution of photosymbiosis in corals was likely critical for the global establishment of coral reefs . [ 18 ] Corals are likewise adapted to eject damaged photosymbionts that generate high levels of toxic reactive oxygen species, a process known as bleaching . [ 19 ] The identity of the Symbiodinium photosymbiont can change in corals, although this depends largely on the mode of transmission: some species vertically transmit their algal partners through their eggs, [ 20 ] while other species acquire environmental dinoflagellates as newly-released eggs. [ 21 ] Since algae are not preserved in the coral fossil record, understanding the evolutionary history of the symbiosis is difficult. [ 22 ]
In basal bilaterians , photosymbiosis in marine or brackish systems is present only in the family Convolutidae . [ 23 ] In the group Acoela , knowledge of the symbionts present is limited, and they have been vaguely identified as zoochlorellae or zooxanthellae . [ 24 ] [ 25 ] Some species have a symbiotic relationship with the chlorophyte Tetraselmis convolutae , while others have symbiotic relationships with the dinoflagellates Symbiodinium or Amphidinium klebsii, or with diatoms in the genus Licmophora . [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ 33 ]
In freshwater systems, photosymbiosis is present in platyhelminths belonging to the Rhabdocoela group. [ 34 ] In this group, members of the Provorticidae , Dalyeliidae , and Typhloplanidae families are symbiotic. [ 35 ] Members of Provorticidae likely feed on diatoms and retain their symbionts. [ 36 ] Typhloplanidae have symbiotic relationships with the chlorophytes in the genus Chlorella . [ 37 ]
Photosymbiosis is taxonomically restricted in Mollusca . [ 38 ] Tropical marine bivalves in the family Cardiidae form a symbiotic relationship with the dinoflagellate Symbiodinium . [ 39 ] This family boasts large organisms often referred to as giant clams, and their large size is attributed to the establishment of these symbiotic relationships. Additionally, the Symbiodinium are hosted extracellularly, which is relatively rare. [ 40 ] The only known freshwater bivalves with a symbiotic relationship are in the genus Anodonta , which hosts the chlorophyte Chlorella in the gills and mantle. [ 41 ] In bivalves, photosymbiosis is thought to have evolved twice: in the genus Anodonta and in the family Cardiidae. [ 42 ] Within the Cardiidae, however, its evolution could have involved several independent gains or losses. [ 43 ]
In gastropods , photosymbiosis can be found in several genera.
The species Strombus gigas hosts Symbiodinium which is acquired during the larval stage, at which point it is a mutualistic relationship. [ 44 ] However, during the adult stage, Symbiodinium becomes parasitic as the shell prevents photosynthesis. [ 45 ]
Another group of gastropods, the heterobranch sea slugs, have two different systems for symbiosis. The first, the Nudibranchia , acquire their symbionts by feeding on cnidarian prey that are themselves in symbiotic relationships. [ 46 ] In nudibranchs, photosymbiosis has evolved twice, in Melibe and Aeolidida . [ 47 ] In Aeolidida there have likely been several gains and losses of photosymbiosis, as most genera include both photosymbiotic and non-photosymbiotic species. [ 48 ] The second, the Sacoglossa , remove chloroplasts from macroalgae when feeding and sequester them in their digestive tract, at which point they are called kleptoplasts . [ 49 ] Whether these kleptoplasts maintain their photosynthetic capabilities depends on the host species' ability to digest them properly. [ 50 ] In this group, functional kleptoplasty has been acquired twice, in Costasiellidae and Plakobranchacea . [ 51 ]
Photosymbiosis is relatively uncommon in chordate species. [ 52 ] One example of photosymbiosis is in ascidians , the sea squirts. In the family Didemnidae , 30 species establish symbiotic relationships. [ 53 ] The photosynthetic ascidians are associated with cyanobacteria in the genus Prochloron as well as, in some cases, the species Synechocystis trididemni . [ 54 ] The 30 symbiotic species span four genera whose congeners (species within the same genus) are primarily non-symbiotic, suggesting multiple origins of photosymbiosis in ascidians. [ 55 ]
In addition to sea squirts, embryos of some amphibian species ( Ambystoma maculatum , Ambystoma gracile , Ambystoma jeffersonium, Ambystoma tigrinum , Hynobius nigrescens , Lithobates sylvaticus , and Lithobates aurora ) form symbiotic relationships with green algae in the genus Oophila. [ 56 ] [ 57 ] [ 58 ] This alga is present in the egg masses of these species, causing them to appear green, and provides oxygen and carbohydrates to the embryos. [ 59 ] As in ascidians, little is known about the evolution of this symbiosis, but there appear to be multiple origins.
Photosymbiosis has evolved multiple times in the protist taxa Ciliophora , Foraminifera , Radiolaria , Dinoflagellata , and diatoms . [ 60 ] Foraminifera and Radiolaria are planktonic taxa that serve as primary producers in open ocean communities. [ 61 ] Photosymbiotic plankton species associate with dinoflagellate, diatom, rhodophyte , chlorophyte , and cyanophyte symbionts, which can be transferred both vertically and horizontally . [ 62 ] In Foraminifera, benthic species either have a symbiotic relationship with Symbiodinium or retain the chloroplasts of algal prey species. [ 63 ] The planktonic species of Foraminifera associate primarily with Pelagodinium . [ 64 ] These species are often considered indicator species due to their bleaching in response to environmental stressors. [ 65 ] In the radiolarian group Acantharia , photosynthetic species inhabit surface waters whereas non-photosynthetic species inhabit deeper waters. Photosynthetic Acantharia are associated with microalgae similar to those of the Foraminifera groups, but have also been found in association with Phaeocystis , Heterocapsa , Scrippsiella, and Azadinium , which were not previously known to be involved in photosynthetic relationships. [ 66 ] In addition, several of the species in symbiotic relationships with Acantharia were often identical to free-living species, suggesting horizontal transfer of symbionts. [ 67 ] This provides insight into the evolutionary patterns responsible for these symbiotic relationships, suggesting that selection for symbiosis is relatively weak and that symbiosis is likely a result of the adaptive capacity of the host plankton species. | https://en.wikipedia.org/wiki/Photosymbiosis |
Photosynthate partitioning is the differential distribution of photosynthates to plant tissues. A photosynthate is a product of photosynthesis ; these products are generally sugars. The sugars created by photosynthesis are broken down to create energy for use by the plant. Sugars and other compounds move via the phloem to tissues that have an energy demand. These areas of demand are called sinks, while areas with an excess of sugars and a low energy demand are called sources. Often, sinks are the actively growing tissues of the plant, while the sources are where sugars are produced by photosynthesis, i.e., the leaves. Sugars are actively loaded into the phloem and moved by a positive pressure flow created by differences in solute concentration and turgor pressure between xylem and phloem conducting cells (specialized plant cells). This movement of sugars is referred to as translocation . When sugars arrive at the sink, they are unloaded for storage or broken down/metabolized. [ 1 ]
The partitioning of these sugars depends on multiple factors, such as the vascular connections that exist, the proximity of the sink to the source, the developmental stage, and the strength of the sink. Vascular connections exist between sources and sinks, and sinks with the most direct connections have been shown to receive more photosynthates than those that must be supplied through extensive connections. The same holds for proximity: sinks [ clarification needed ] closer to the source are easier to translocate sugars to. [ 2 ] Developmental stage plays a large role in partitioning: young organs such as meristems and new leaves have a higher demand, as do those entering reproductive maturity and creating fruits, flowers, and seeds. [ 1 ] Many of these developing organs have a higher sink strength, and sinks with higher strength receive more photosynthates than lower-strength sinks. Sinks compete to receive these compounds, and a combination of factors determines how much and how fast each sink receives photosynthates to grow and complete its physiological activities.
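As a rough, purely hypothetical illustration of these ideas, partitioning can be modeled as allocation in proportion to sink strength, discounted by distance from the source (a crude stand-in for the directness of the vascular connection). The function, names, and weights below are illustrative assumptions, not an established model:

```python
# Toy model of photosynthate partitioning: each sink draws photosynthate
# in proportion to its sink strength, discounted by its distance from the
# source (standing in for vascular-path resistance). Purely illustrative.

def partition(supply_umol, sinks):
    """Split a source's sugar output among competing sinks.

    sinks: list of (name, sink_strength, distance_from_source) tuples.
    Returns a dict mapping each sink name to the photosynthate it receives.
    """
    weights = {name: strength / distance for name, strength, distance in sinks}
    total = sum(weights.values())
    return {name: supply_umol * w / total for name, w in weights.items()}

# Strong, nearby sinks (young leaves, developing fruit) outcompete roots here.
sinks = [("young leaf", 5.0, 1.0), ("fruit", 8.0, 2.0), ("root", 2.0, 3.0)]
print(partition(100.0, sinks))
# {'young leaf': ~51.7, 'fruit': ~41.4, 'root': ~6.9}
```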
| https://en.wikipedia.org/wiki/Photosynthate_partitioning |
Photosynthetic capacity ( A max ) is a measure of the maximum rate at which leaves are able to fix carbon during photosynthesis . It is typically measured as the amount of carbon dioxide fixed per square metre per second, for example in μmol m −2 s −1 .
Photosynthetic capacity is limited by carboxylation capacity and electron transport capacity. For example, in high carbon dioxide concentrations or in low light, the plant is not able to regenerate ribulose-1,5-bisphosphate (also known as RuBP, the acceptor molecule in photosynthetic carbon reduction) fast enough. In this case, photosynthetic capacity is limited by the electron transport of the light reactions, which generates the NADPH and ATP required for the photosynthetic carbon reduction (Calvin) cycle and the regeneration of RuBP. On the other hand, in low carbon dioxide concentrations, the plant's capacity for carboxylation (the addition of carbon dioxide by Rubisco ) is limited by the amount of available carbon dioxide, with plenty of Rubisco left over. [ 1 ] Light response, or photosynthesis-irradiance, curves display these relationships.
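These relationships are commonly summarized by fitting a photosynthesis-irradiance curve to gas-exchange data. The sketch below uses the non-rectangular hyperbola often fitted for this purpose; all parameter values are arbitrary placeholders, not measurements:

```python
import math

def net_assimilation(I, phi=0.05, A_max=20.0, theta=0.9, R_d=1.0):
    """Non-rectangular hyperbola for a photosynthesis-irradiance curve.

    I      : irradiance (umol photons m^-2 s^-1)
    phi    : apparent quantum yield (initial slope of the curve)
    A_max  : light-saturated gross assimilation (umol CO2 m^-2 s^-1)
    theta  : curvature (0 < theta <= 1; 1 gives a sharp Blackman response)
    R_d    : dark respiration, subtracted to give net assimilation
    """
    b = phi * I + A_max
    A_gross = (b - math.sqrt(b * b - 4 * theta * phi * I * A_max)) / (2 * theta)
    return A_gross - R_d

for I in (0, 50, 200, 800, 2000):
    print(I, round(net_assimilation(I), 2))
# Net assimilation rises steeply at low light, then saturates near A_max - R_d.
```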
Recent studies have shown that photosynthetic capacity in leaves can be increased by increasing the number of stomata per leaf. This could be important in crop engineering for increasing photosynthetic efficiency through greater diffusion of carbon dioxide into the plant. [ 2 ]
| https://en.wikipedia.org/wiki/Photosynthetic_capacity |
The photosynthetic efficiency (i.e. oxygenic photosynthesis efficiency ) is the fraction of light energy converted into chemical energy during photosynthesis in green plants and algae. Photosynthesis can be described by the simplified chemical reaction 6 H 2 O + 6 CO 2 + energy → C 6 H 12 O 6 + 6 O 2 ,
where C 6 H 12 O 6 is glucose (which is subsequently transformed into other sugars , starches , cellulose , lignin , and so forth). The value of the photosynthetic efficiency is dependent on how light energy is defined – it depends on whether we count only the light that is absorbed, and on what kind of light is used (see Photosynthetically active radiation ). It takes eight (or perhaps ten or more [ 1 ] ) photons to fix one molecule of CO 2 . The Gibbs free energy for converting a mole of CO 2 to glucose is 114 kcal , whereas eight moles of photons of wavelength 600 nm contain 381 kcal, giving a nominal efficiency of 30%. [ 2 ] However, photosynthesis can occur with light up to wavelength 720 nm so long as there is also light at wavelengths below 680 nm to keep Photosystem II operating (see Chlorophyll ). Using longer wavelengths means less light energy is needed for the same number of photons and therefore for the same amount of photosynthesis. For actual sunlight, where only 45% of the light is in the photosynthetically active spectrum, the theoretical maximum efficiency of solar energy conversion is approximately 11%. In actuality, however, plants do not absorb all incoming sunlight (due to reflection, the respiration requirements of photosynthesis, and the need for optimal solar radiation levels) and do not convert all harvested energy into biomass , which results in a maximum overall photosynthetic efficiency of 3 to 6% of total solar radiation. [ 1 ] If photosynthesis is inefficient, excess light energy must be dissipated to avoid damaging the photosynthetic apparatus. Energy can be dissipated as heat ( non-photochemical quenching ), or emitted as chlorophyll fluorescence .
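The 30% figure can be checked directly from the quantities given above (assuming exactly eight photons per CO 2 and monochromatic 600 nm light):

```python
# Reproduce the ~30% nominal efficiency quoted above: energy stored per CO2
# fixed (114 kcal/mol) versus the energy of 8 moles of 600 nm photons.
h = 6.626e-34        # Planck constant, J s
c = 2.998e8          # speed of light, m/s
N_A = 6.022e23       # Avogadro constant, mol^-1
J_PER_KCAL = 4184.0

photon_energy = h * c / 600e-9               # J per 600 nm photon
input_kcal = 8 * N_A * photon_energy / J_PER_KCAL
print(round(input_kcal))                     # ~381 kcal, as quoted
print(round(100 * 114 / input_kcal))         # ~30% nominal efficiency
```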
Quoted values for sunlight-to-biomass efficiency include 0.2–2% [ 4 ] and <1%. [ 5 ]
The text Photosynthesis by Hall and Rao provides a step-by-step breakdown of the energetics of the photosynthesis process, starting with the solar spectrum falling on a leaf. [ 6 ]
Many plants lose much of the remaining energy on growing roots. Most crop plants store ~0.25% to 0.5% of the sunlight in the product (corn kernels, potato starch , etc.).
Photosynthesis increases linearly with light intensity at low intensity, but at higher intensity this is no longer the case (see Photosynthesis-irradiance curve ). Above about 10,000 lux, or ~100 watts per square meter, the rate no longer increases. Thus, most plants can only use ~10% of full mid-day sunlight intensity. [ 6 ] This dramatically reduces the average photosynthetic efficiency achieved in fields compared to peak laboratory results. However, real plants (as opposed to laboratory test samples) have many redundant, randomly oriented leaves. This helps to keep the average illumination of each leaf well below the mid-day peak, enabling the plant to achieve a result closer to the laboratory results expected under limited illumination.
Only when the light intensity rises above a plant-specific value, called the compensation point , does the plant assimilate more carbon and release more oxygen by photosynthesis than it consumes by cellular respiration for its own current energy demand. Photosynthesis measurement systems are not designed to directly measure the amount of light absorbed by the leaf, but the light response curves they produce nevertheless allow comparisons of photosynthetic efficiency between plants.
From a 2010 study by the University of Maryland , photosynthesizing cyanobacteria have been shown to be a significant contributor to the global carbon cycle , accounting for 20–30% of Earth's photosynthetic productivity and converting solar energy into biomass-stored chemical energy at a rate of ~450 TW. [ 7 ] Some pigments, such as B-phycoerythrin , which are mostly found in red algae and cyanobacteria, have much higher light-harvesting efficiency than those of other plants. Such organisms are potential candidates for biomimicry technology to improve solar panel design. [ 8 ]
Popular choices for plant biofuels include: oil palm , soybean , castor oil , sunflower oil, safflower oil, corn ethanol , and sugar cane ethanol.
A 2008 projection for oil production in Hawaii stated: "algae could yield from 5,000-10,000 gallons of oil per acre yearly, compared to 250-350 gallons for jatropha and 600-800 gallons for palm oil ". That comes to 26 kW per acre, or 7 W/m 2 . [ 9 ] Typical insolation in Hawaii is around 230 W/m 2 , [ 10 ] so this corresponds to converting about 3% of the incident solar energy to chemical fuel. Total photosynthetic efficiency would include more than just the biodiesel oil, so this 3% number is a lower bound.
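The conversion from an annual oil yield per acre to a continuous areal power is straightforward unit arithmetic. A sketch of that conversion (the ~35 MJ/L energy density is an assumed typical value for plant oil; the source's own 26 kW/acre figure may rest on different assumptions):

```python
# Unit-conversion sketch for the yields quoted above. The oil energy
# density is an assumption (roughly 35 MJ/L, typical for plant oil).
GAL_TO_L = 3.785
ACRE_TO_M2 = 4047.0
SECONDS_PER_YEAR = 3.156e7
MJ_PER_L = 35.0               # assumed energy density of the oil

def areal_power(gal_per_acre_year):
    """Convert an oil yield (gal/acre/yr) to continuous power (W/m^2)."""
    joules = gal_per_acre_year * GAL_TO_L * MJ_PER_L * 1e6
    return joules / SECONDS_PER_YEAR / ACRE_TO_M2

for crop, yield_gal in [("palm oil", 700), ("algae (high estimate)", 10000)]:
    p = areal_power(yield_gal)
    print(f"{crop}: {p:.2f} W/m^2, {100 * p / 230:.1f}% of 230 W/m^2 insolation")
```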
Contrast this with a typical photovoltaic installation, [ 11 ] which would produce an average of roughly 22 W/m 2 (roughly 10% of the average insolation), throughout the year. Furthermore, the photovoltaic panels would produce electricity, which is a high-quality form of energy , whereas converting the biodiesel into mechanical energy entails the loss of a large portion of the energy. On the other hand, a liquid fuel is much more convenient for a vehicle than electricity, which has to be stored in heavy, expensive batteries.
For ethanol fuel in Brazil , one calculation gives: "Per hectare per year, the biomass produced corresponds to 0.27 TJ. This is equivalent to 0.86 W/m 2 . Assuming an average insolation of 225 W/m 2 , the photosynthetic efficiency of sugarcane is 0.38%." Sucrose accounts for little more than 30% of the chemical energy stored in the mature plant; 35% is in the leaves and stem tips, which are left in the fields during harvest, and 35% is in the fibrous material ( bagasse ) left over from pressing. [ 12 ] [ 13 ]
C3 plants use the Calvin cycle to fix carbon. C4 plants use a modified Calvin cycle in which they separate Ribulose-1,5-bisphosphate carboxylase oxygenase (RuBisCO) from atmospheric oxygen, fixing carbon in their mesophyll cells and using oxaloacetate and malate to ferry the fixed carbon to RuBisCO and the rest of the Calvin cycle enzymes isolated in the bundle-sheath cells. The intermediate compounds both contain four carbon atoms, which gives the name C4 . In Crassulacean acid metabolism (CAM), functioning RuBisCO (and the other Calvin cycle enzymes) is isolated in time from the high oxygen concentrations produced by photosynthesis: O 2 is evolved during the day and allowed to dissipate then, while at night atmospheric CO 2 is taken up and stored as malic or other acids. During the day, CAM plants close their stomata and use the stored acids as carbon sources for the production of sugars and other compounds.
The C3 pathway requires 18 ATP and 12 NADPH for the synthesis of one molecule of glucose (3 ATP + 2 NADPH per CO 2 fixed), while the C4 pathway requires 30 ATP and 12 NADPH (the C3 requirement plus 2 ATP per CO 2 fixed). Taking into account that each NADPH is equivalent to 3 ATP, both pathways require an additional 36 ATP equivalents [ 14 ] [ better source needed ] . Despite this reduced ATP efficiency, C4 is an evolutionary advancement, adapted to areas of high light levels, where the reduced ATP efficiency is more than offset by the use of the increased light. The ability to thrive despite restricted water availability maximizes the ability to use available light. The simpler C3 cycle, which operates in most plants, is adapted to wetter, darker environments, such as many northern latitudes. [ citation needed ] Maize , sugar cane , and sorghum are C4 plants. These plants are economically important in part because of their relatively high photosynthetic efficiencies compared to many other crops. Pineapple is a CAM plant.
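The ATP bookkeeping above can be made explicit with the stated 3-ATP-per-NADPH equivalence:

```python
# ATP cost per glucose for the C3 and C4 pathways, counting each NADPH
# as equivalent to 3 ATP, as stated above.
ATP_PER_NADPH = 3

def total_atp_equivalents(atp, nadph):
    return atp + nadph * ATP_PER_NADPH

c3 = total_atp_equivalents(18, 12)   # 18 + 36 = 54
c4 = total_atp_equivalents(30, 12)   # 30 + 36 = 66
print(f"C3: {c3} ATP equivalents per glucose; C4: {c4}")
```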
One efficiency-focused research topic is improving the efficiency of photorespiration . Around 25% of the time RuBisCO incorrectly collects oxygen molecules instead of CO 2 , creating CO 2 and ammonia that disrupt the photosynthesis process. Plants remove these byproducts via photorespiration, requiring energy and nutrients that would otherwise increase photosynthetic output. In C3 plants photorespiration can consume 20-50% of photosynthetic energy. [ 15 ]
The research shortened photorespiratory pathways in tobacco. The engineered crops grew taller and faster, yielding up to 40% more biomass. The study employed synthetic biology to construct new metabolic pathways and assessed their efficiency with and without transporter RNAi . The most efficient pathway increased light-use efficiency by 17%. [ 15 ]
Far-red
In efforts to increase photosynthetic efficiency, researchers have proposed extending the spectrum of light that is available for photosynthesis. One approach involves incorporating pigments like chlorophyll d and f , which are capable of absorbing far-red light, into the photosynthetic machinery of higher plants. [ 16 ] Naturally present in certain cyanobacteria, these chlorophylls enable photosynthesis with far-red light that standard chlorophylls a and b cannot utilize. By adapting these pigments for use in higher plants, it is hoped that plants can be engineered to utilize a wider range of the light spectrum, potentially leading to increased growth rates and biomass production. [ 17 ]
Green
Green light is considered the least efficient wavelength in the visible spectrum for photosynthesis and presents an opportunity for increased utilization. [ 18 ] Chlorophyll c is a pigment found in marine algae with blue-green absorption and could be used to expand absorption in the green wavelengths in plants. Expression of the dinoflagellate CHLOROPHYLL C SYNTHASE gene in the plant Nicotiana benthamiana resulted in the heterologous production of chlorophyll c . [ 19 ] This was the first successful introduction of a foreign chlorophyll molecule into a higher plant and is the first step towards bioengineering plants for improved photosynthetic performance across a variety of lighting conditions. [ 20 ]
Research is being done into RCB and NCP, two non-catalytic thioredoxin-like proteins that activate chloroplast transcription. [ 21 ] Knowing the exact mechanism could be useful for increasing photosynthesis (e.g., through genetic modification). [ 22 ]
Photosynthesis is the only process that converts atmospheric carbon (CO 2 ) into organic (solid) carbon, and this process plays an essential role in climate models. This has led researchers to study sun-induced chlorophyll fluorescence (i.e., chlorophyll fluorescence that uses the Sun as the illumination source; the glow of a plant) as an indicator of the photosynthetic efficiency of a region. This is of interest to scientists because it shows them things like the CO 2 absorption of forests or the productivity of an agricultural region. The FLEX satellite is an upcoming satellite program by the European Space Agency dedicated to this type of measurement. | https://en.wikipedia.org/wiki/Photosynthetic_efficiency |
Photosynthetic picoplankton or picophytoplankton is the fraction of the photosynthetic phytoplankton of cell sizes between 0.2 and 2 μm (i.e. picoplankton ). It is especially important in the central oligotrophic regions of the world oceans that have very low concentration of nutrients .
Because of its very small size, picoplankton is difficult to study by classic methods such as optical microscopy. More sophisticated methods are needed.
Three major groups of organisms constitute photosynthetic picoplankton: the cyanobacteria Synechococcus and Prochlorococcus , and the photosynthetic picoeukaryotes .
Molecular approaches, implemented since the 1990s for bacteria, were applied to photosynthetic picoeukaryotes only about ten years later, around 2000. They revealed a very wide diversity [ 10 ] [ 11 ] and brought to light the importance of several additional groups in the picoplankton.
In temperate coastal environments, the genus Micromonas (Prasinophyceae) seems dominant. [ 14 ] However, in numerous oceanic environments, the dominant species of eukaryotic picoplankton remain unknown. [ 20 ]
Each picoplanktonic population occupies a specific ecological niche in the oceanic environment.
Thirty years ago, it was hypothesized that the speed of division of micro-organisms in central oceanic ecosystems was very slow, of the order of one week or one month per generation. This hypothesis was supported by the fact that the biomass (estimated, for example, from the chlorophyll content) was very stable over time. However, with the discovery of the picoplankton, it was found that the system was much more dynamic than previously thought. In particular, small predators a few micrometres in size, which ingest picoplanktonic algae as quickly as they are produced, were found to be ubiquitous. This extremely sophisticated predator-prey system is nearly always at equilibrium and results in a quasi-constant picoplankton biomass. This close equivalence between production and consumption makes it extremely difficult to measure precisely the speed at which the system turns over.
In 1988, two American researchers, Carpenter and Chang, suggested estimating the speed of cell division of phytoplankton by following the course of DNA replication by microscopy. By replacing the microscope with a flow cytometer , it is possible to follow the DNA content of picoplankton cells over time. This allowed researchers to establish that picoplankton cells are highly synchronized: they replicate their DNA and then divide all at the same time at the end of the day. This synchronization could be due to the presence of an internal biological clock .
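A minimal sketch of this kind of estimator, in a simplified, mitotic-index style with illustrative numbers (the published formulations, which account for the diel phasing of division, differ in detail):

```python
import math

def division_rate(fraction_replicating, phase_duration_h):
    """Estimate a specific growth rate (per day) from the observed fraction
    of cells in the DNA-replication (S) phase and that phase's duration,
    using the simplified estimator mu = ln(1 + f) / t_S.
    """
    mu_per_hour = math.log(1 + fraction_replicating) / phase_duration_h
    return mu_per_hour * 24

# If 20% of cells are replicating DNA and S phase lasts ~4 hours:
mu = division_rate(0.20, 4.0)
print(f"mu = {mu:.2f} per day, i.e. {mu / math.log(2):.2f} doublings per day")
```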
In the 2000s, genomics made a further step possible. Genomics consists of determining the complete genome sequence of an organism and listing every gene present. It is then possible to get an idea of the metabolic capacities of the targeted organism and to understand how it adapts to its environment. To date, the genomes of several types of Prochlorococcus [ 21 ] [ 22 ] and Synechococcus , [ 23 ] and of a strain of Ostreococcus [ 24 ] have been determined. The complete genomes of two different Micromonas strains revealed that they were quite different (different species) and had similarities with land plants. [ 15 ] Several other cyanobacteria and small eukaryotes ( Bathycoccus , Pelagomonas ) are being sequenced. In parallel, genome analyses are beginning to be done directly on oceanic samples (ecogenomics or metagenomics), [ 25 ] giving access to large gene sets from uncultivated organisms. | https://en.wikipedia.org/wiki/Photosynthetic_picoplankton |
A photosynthetic reaction center is a complex of several proteins , biological pigments , and other co-factors that together execute the primary energy conversion reactions of photosynthesis . Molecular excitations, either originating directly from sunlight or transferred as excitation energy via light-harvesting antenna systems , give rise to electron transfer reactions along the path of a series of protein-bound co-factors. These co-factors are light-absorbing molecules (also named chromophores or pigments) such as chlorophyll and pheophytin , as well as quinones . The energy of the photon is used to excite an electron of a pigment. The free energy created is then used, via a chain of nearby electron acceptors , for a transfer of hydrogen atoms (as protons and electrons) from H 2 O or hydrogen sulfide towards carbon dioxide, eventually producing glucose . These electron transfer steps ultimately result in the conversion of the energy of photons to chemical energy.
Reaction centers are present in all green plants , algae , and many bacteria . A variety of light-harvesting complexes exists across photosynthetic species. Green plants and algae have two different types of reaction centers that are part of larger supercomplexes, known as P700 in Photosystem I and P680 in Photosystem II . The structures of these supercomplexes are large, involving multiple light-harvesting complexes . The reaction center found in Rhodopseudomonas bacteria is currently the best understood, since it was the first reaction center of known structure and has fewer polypeptide chains than the examples in green plants. [ 1 ]
A reaction center is laid out in such a way that it captures the energy of a photon using pigment molecules and turns it into a usable form. Once the light energy has been absorbed directly by the pigment molecules, or passed to them by resonance transfer from a surrounding light-harvesting complex , they release electrons into an electron transport chain and pass energy to a hydrogen donor such as H 2 O to extract electrons and protons from it. In green plants, the electron transport chain has many electron acceptors including pheophytin , quinone , plastoquinone , cytochrome bf , and ferredoxin , which result finally in the reduced molecule NADPH , while the energy used to split water results in the release of oxygen . The passage of the electron through the electron transport chain also results in the pumping of protons (hydrogen ions) from the chloroplast 's stroma and into the lumen , resulting in a proton gradient across the thylakoid membrane that can be used to synthesize ATP using the ATP synthase molecule. Both the ATP and NADPH are used in the Calvin cycle to fix carbon dioxide into triose sugars.
Two classes of reaction centres are recognized. Type I, found in green-sulfur bacteria , Heliobacteria , and plant/cyanobacterial PS-I, use iron sulfur clusters as electron acceptors. Type II, found in chloroflexus , purple bacteria , and plant/cyanobacterial PS-II, use quinones. Not only do all members inside each class share common ancestry, but the two classes also, by means of common structure, appear related. [ 2 ] [ 3 ]
Cyanobacteria, the precursor to chloroplasts found in green plants, have both photosystems with both types of reaction centers. Combining the two systems allows for producing oxygen. [ 3 ]
This section deals with the type II system found in purple bacteria. [ 3 ]
The bacterial photosynthetic reaction center has been an important model to understand the structure and chemistry of the biological process of capturing light energy. In the 1960s, Roderick Clayton was the first to purify the reaction center complex from purple bacteria. However, the first crystal structure (upper image at right) was determined in 1984 by Hartmut Michel , Johann Deisenhofer and Robert Huber [ 4 ] for which they shared the Nobel Prize in 1988. [ 5 ] This was also significant for being the first 3D crystal structure of any membrane protein complex.
Four different subunits were found to be important for the function of the photosynthetic reaction center. The L and M subunits , shown in blue and purple in the image of the structure, both span the lipid bilayer of the plasma membrane. They are structurally similar to one another, both having 5 transmembrane alpha helices . [ 6 ] Four bacteriochlorophyll b (BChl-b) molecules, two bacteriopheophytin b molecules (BPh) molecules, two quinones (Q A and Q B ), and a ferrous ion are associated with the L and M subunits. The H subunit, shown in gold, lies on the cytoplasmic side of the plasma membrane. A cytochrome subunit, not shown here, contains four c-type hemes and is located on the periplasmic surface (outer) of the membrane. The latter sub-unit is not a general structural motif in photosynthetic bacteria. The L and M subunits bind the functional and light-interacting cofactors, shown here in green.
Reaction centers from different bacterial species may contain slightly altered bacterio-chlorophyll and bacterio-pheophytin chromophores as functional co-factors. These alterations cause shifts in the colour of light that can be absorbed. The reaction center contains two pigments that serve to collect and transfer the energy from photon absorption: BChl and Bph. BChl roughly resembles the chlorophyll molecule found in green plants, but, due to minor structural differences, its peak absorption wavelength is shifted into the infrared , with wavelengths as long as 1000 nm. Bph has the same structure as BChl, but the central magnesium ion is replaced by two protons. This alteration causes both an absorbance maximum shift and a lowered redox potential.
The process starts when light is absorbed by two BChl molecules that lie near the periplasmic side of the membrane. This pair of chlorophyll molecules, often called the "special pair", absorbs photons at 870 nm or 960 nm, depending on the species, and is thus called P870 (for Rhodobacter sphaeroides ) or P960 (for Blastochloris viridis ), with P standing for "pigment". Once P absorbs a photon, it ejects an electron, which is transferred through another molecule of BChl to the BPh in the L subunit. This initial charge separation yields a positive charge on P and a negative charge on the BPh. This process takes place in about 10 picoseconds (10 −11 seconds). [ 1 ]
The charges on the P + and the BPh − could undergo charge recombination in this state, which would waste the energy and convert it into heat . Several factors of the reaction center structure serve to prevent this. First, the transfer of an electron from BPh − to P960 + is relatively slow compared to two other redox reactions in the reaction center. The faster reactions involve the transfer of an electron from BPh − (BPh − is oxidized to BPh) to the electron acceptor quinone (Q A ), and the transfer of an electron to P960 + (P960 + is reduced to P960) from a heme in the cytochrome subunit above the reaction center.
The high-energy electron that resides on the tightly bound quinone molecule Q A is transferred to an exchangeable quinone molecule Q B . This molecule is loosely associated with the protein and is fairly easy to detach. Two electrons are required to fully reduce Q B to QH 2 , taking up two protons from the cytoplasm in the process. The reduced quinone QH 2 diffuses through the membrane to another protein complex ( cytochrome bc 1 -complex ) where it is oxidized. In the process the reducing power of the QH 2 is used to pump protons across the membrane to the periplasmic space. The electrons from the cytochrome bc 1 -complex are then transferred through a soluble cytochrome c intermediate, called cytochrome c 2 , in the periplasm to the cytochrome subunit.
In 1772, the chemist Joseph Priestley carried out a series of experiments relating to the gases involved in respiration and combustion. In his first experiment, he lit a candle and placed it under an upturned jar. After a short period of time, the candle burned out. He carried out a similar experiment with a mouse in the confined space of the burning candle. He found that the mouse died a short time after the candle had been extinguished. However, he could revivify the foul air by placing green plants in the area and exposing them to light. Priestley's observations were some of the first experiments that demonstrated the activity of a photosynthetic reaction center.
In 1779, Jan Ingenhousz carried out more than 500 experiments spread out over 4 months in an attempt to understand what was really going on. He wrote up his discoveries in a book entitled Experiments upon Vegetables . Ingenhousz took green plants and immersed them in water inside a transparent tank. He observed many bubbles rising from the surface of the leaves whenever the plants were exposed to light. Ingenhousz collected the gas that was given off by the plants and performed several different tests in an attempt to determine what the gas was. The test that finally revealed the identity of the gas was placing a smouldering taper into the gas sample and having it relight. This test proved it was oxygen, or, as Joseph Priestley had called it, 'de- phlogisticated air'.
In 1932, Robert Emerson and his student, William Arnold, used a repetitive flash technique to precisely measure small quantities of oxygen evolved by chlorophyll in the algae Chlorella . Their experiment proved the existence of a photosynthetic unit. Gaffron and Wohl later interpreted the experiment and realized that the light absorbed by the photosynthetic unit was transferred. [ 7 ] This reaction occurs at the reaction center of Photosystem II and takes place in cyanobacteria, algae and green plants. [ 8 ]
Photosystem II is the photosystem that generates the two electrons that will eventually reduce NADP + in ferredoxin-NADP-reductase. Photosystem II is present on the thylakoid membranes inside chloroplasts, the site of photosynthesis in green plants. [ 9 ] The structure of Photosystem II is remarkably similar to the bacterial reaction center, and it is theorized that they share a common ancestor.
The core of Photosystem II consists of two subunits referred to as D1 and D2 . These two subunits are similar to the L and M subunits present in the bacterial reaction center. Photosystem II differs from the bacterial reaction center in that it has many additional subunits that bind additional chlorophylls to increase efficiency. The overall reaction catalyzed by Photosystem II is: 2 Q + 2 H 2 O + light → O 2 + 2 QH 2 .
Q represents the oxidized form of plastoquinone while QH 2 represents its reduced form. This process of reducing quinone is comparable to that which takes place in the bacterial reaction center. Photosystem II obtains electrons by oxidizing water in a process called photolysis . Molecular oxygen is a byproduct of this process, and it is this reaction that supplies the atmosphere with oxygen. The fact that the oxygen from green plants originated from water was first deduced by the Canadian-born American biochemist Martin David Kamen . He used a stable isotope of oxygen, 18 O, to trace the path of the oxygen from water to gaseous molecular oxygen. This reaction is catalyzed by a reactive center in Photosystem II containing four manganese ions .
The reaction begins with the excitation of a pair of chlorophyll molecules similar to those in the bacterial reaction center. Due to the presence of chlorophyll a , as opposed to bacteriochlorophyll , Photosystem II absorbs light at a shorter wavelength. The pair of chlorophyll molecules at the reaction center are often referred to as P680 . [ 1 ] When the photon has been absorbed, the resulting high-energy electron is transferred to a nearby pheophytin molecule. This is above and to the right of the pair on the diagram and is coloured grey. The electron travels from the pheophytin molecule through two plastoquinone molecules, the first tightly bound, the second loosely bound. The tightly bound molecule is shown above the pheophytin molecule and is colored red. The loosely bound molecule is to the left of this and is also colored red. This flow of electrons is similar to that of the bacterial reaction center. Two electrons are required to fully reduce the loosely bound plastoquinone molecule to QH 2 as well as the uptake of two protons.
The difference between Photosystem II and the bacterial reaction center is the source of the electron that neutralizes the pair of chlorophyll a molecules. In the bacterial reaction center, the electron is obtained from a reduced haem group in a cytochrome subunit or from a water-soluble cytochrome-c protein.
Every time the P680 absorbs a photon, it gives off an electron to pheophytin, gaining a positive charge. After this photoinduced charge separation , P680 + is a very strong oxidant of high energy. It passes its energy to water molecules that are bound at the manganese center directly below the pair and extracts an electron from them. This center, below and to the left of the pair in the diagram, contains four manganese ions, a calcium ion, a chloride ion, and a tyrosine residue. Manganese is adept at these reactions because it is capable of existing in four oxidation states: Mn 2+ , Mn 3+ , Mn 4+ and Mn 5+ . Manganese also forms strong bonds with oxygen-containing molecules such as water. The process of oxidizing two molecules of water to form an oxygen molecule requires four electrons. The water molecules that are oxidized in the manganese center are the source of the electrons that reduce the two molecules of Q to QH 2 . To date, this water splitting catalytic center has not been reproduced by any man-made catalyst.
After the electron has left Photosystem II it is transferred to a cytochrome b6f complex and then to plastocyanin , a blue copper protein and electron carrier. The plastocyanin complex carries the electron that will neutralize the pair in the next reaction center, Photosystem I .
As with Photosystem II and the bacterial reaction center, a pair of chlorophyll a molecules initiates photoinduced charge separation. This pair is referred to as P700 , where 700 is a reference to the wavelength at which the chlorophyll molecules absorb light maximally. The P700 lies in the center of the protein. Once photoinduced charge separation has been initiated, the electron travels down a pathway through a chlorophyll α molecule situated directly above the P700, through a quinone molecule situated directly above that, through three 4Fe-4S clusters, and finally to an interchangeable ferredoxin complex. [ 10 ] Ferredoxin is a soluble protein containing a 2Fe-2S cluster coordinated by four cysteine residues. The positive charge on the high-energy P700 + is neutralized by the transfer of an electron from plastocyanin , which receives energy eventually used to convert QH 2 back to Q. Thus the overall reaction catalyzed by Photosystem I is: Pc(Cu + ) + Fd ox + light → Pc(Cu 2+ ) + Fd red .
The cooperation between Photosystems I and II creates an electron and proton flow from H 2 O to NADP + , producing NADPH needed for glucose synthesis. This pathway is called the ' Z-scheme ' because the redox diagram from H 2 O to NADP + via P680 and P700 resembles the letter Z. [ 11 ] | https://en.wikipedia.org/wiki/Photosynthetic_reaction_centre |
In photosynthesis, state transitions are rearrangements of the photosynthetic apparatus which occur on short time-scales (seconds to minutes). The effect is prominent in cyanobacteria, whereby the phycobilisome light-harvesting antenna complexes alter their preference for transfer of excitation energy between the two reaction centers , PS I and PS II . [ 1 ] This shift helps to minimize photodamage caused by reactive oxygen species (ROS) under stressful conditions such as high light, but may also be used to offset imbalances between the rates of generating reductant and ATP .
The phenomenon was first discovered in unicellular green algae , [ 2 ] and may also occur in plants. [ 3 ] However, in these organisms it occurs by a different mechanism, which is not as well understood. The plant/algal mechanism is considered functionally analogous to the cyanobacterial mechanism but involves completely different components. The foremost difference is the presence of fundamentally different types of light-harvesting antenna complexes: plants and green algae use an intrinsically-bound membrane complex of chlorophyll a/b binding proteins for their antenna, instead of the soluble phycobilisome complexes used by cyanobacteria (and certain algae).
| https://en.wikipedia.org/wiki/Photosynthetic_state_transition |
Photosynthetically active radiation ( PAR ) designates the spectral range (wave band) of solar radiation from 400 to 700 nanometers that photosynthetic organisms are able to use in the process of photosynthesis . This spectral region corresponds more or less with the range of light visible to the human eye. Photons at shorter wavelengths tend to be so energetic that they can be damaging to cells and tissues, but are mostly filtered out by the ozone layer in the stratosphere . Photons at longer wavelengths do not carry enough energy to allow photosynthesis to take place.
Other living organisms, such as cyanobacteria , purple bacteria , and heliobacteria , can exploit solar light in slightly extended spectral regions, such as the near-infrared . These bacteria live in environments such as the bottom of stagnant ponds, sediment and ocean depths. Because of their pigments , they form colorful mats of green, red and purple.
Chlorophyll , the most abundant plant pigment, is most efficient in capturing red and blue light. Accessory pigments such as carotenes and xanthophylls harvest some green light and pass it on to the photosynthetic process, but enough of the green wavelengths are reflected to give leaves their characteristic color. An exception to the predominance of chlorophyll is autumn, when chlorophyll is degraded (because it contains N and Mg ) but the accessory pigments are not (because they only contain C , H and O ) and remain in the leaf producing red, yellow and orange leaves.
In land plants, leaves absorb mostly red and blue light in the first layer of photosynthetic cells because of chlorophyll absorbance. Green light, however, penetrates deeper into the leaf interior and can drive photosynthesis more efficiently than red light. [ 1 ] [ 2 ] Because green and yellow wavelengths can transmit through chlorophyll and the entire leaf itself, they play a crucial role in growth beneath the plant canopy. [ 3 ]
PAR measurement is used in agriculture, forestry and oceanography. One of the requirements for productive farmland is adequate PAR, so PAR is used to evaluate agricultural investment potential. PAR sensors stationed at various levels of the forest canopy measure the pattern of PAR availability and utilization. Photosynthetic rate and related parameters can be measured non-destructively using a photosynthesis system , and these instruments measure PAR and sometimes control PAR at set intensities. PAR measurements are also used to calculate the euphotic depth in the ocean.
In these contexts, the reason PAR is preferred over other lighting metrics such as luminous flux and illuminance is that these measures are based on human perception of brightness , which is strongly green biased and does not accurately describe the quantity of light usable for photosynthesis.
When measuring the irradiance of PAR, values are expressed using units of energy (W/m 2 ), which is relevant in energy-balance considerations for photosynthetic organisms . [ 4 ]
However, photosynthesis is a quantum process, and its chemical reactions depend more on the number of photons than on the energy contained in the photons. Therefore, plant biologists often quantify PAR using the number of photons in the 400-700 nm range received by a surface for a specified amount of time, the Photosynthetic Photon Flux Density (PPFD). [ 4 ] Values of PPFD are normally expressed in units of μmol⋅m −2 ⋅s −1 . In relation to plant growth and morphology, it is better to characterise the light availability for plants by means of the Daily Light Integral (DLI), which is the daily flux of photons per ground area, and includes both diurnal variation and variation in day length. [ 5 ]
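As a quick illustration of the PPFD-to-DLI conversion (the numbers are illustrative, not recommendations):

```python
# Convert a mean PPFD (umol m^-2 s^-1) and photoperiod into a
# Daily Light Integral (mol m^-2 d^-1).
def dli(ppfd_umol_m2_s, photoperiod_hours):
    return ppfd_umol_m2_s * photoperiod_hours * 3600 / 1e6

# A crop under a steady 400 umol m^-2 s^-1 for 14 h per day receives:
print(dli(400, 14))   # ~20.2 mol m^-2 d^-1
```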
PPFD was formerly sometimes expressed using einstein units, i.e., μE⋅m −2 ⋅s −1 , [ 6 ] although this usage is nonstandard and no longer used. [ 7 ]
There are two common measures of photosynthetically active radiation: photosynthetic photon flux (PPF) and yield photon flux (YPF). PPF values all photons from 400 to 700 nm equally, while YPF weights photons in the range from 360 to 760 nm based on a plant's photosynthetic response. [ 8 ]
PAR as described with PPF does not distinguish between different wavelengths between 400 and 700 nm, and assumes that wavelengths outside this range have zero photosynthetic action. If the exact spectrum of the light is known, the photosynthetic photon flux density (PPFD) values (in μmol⋅s −1 ⋅m −2 ) can be modified by applying different weighting factors to different wavelengths. This results in a quantity called the yield photon flux (YPF). [ 8 ] The red curve in the graph shows that photons around 610 nm (orange-red) have the highest amount of photosynthesis per photon. However, because short-wavelength photons carry more energy per photon, the maximum amount of photosynthesis per incident unit of energy is at a longer wavelength, around 650 nm (deep red).
It has been noted that there is considerable misunderstanding over the effect of light quality on plant growth. Many manufacturers claim significantly increased plant growth due to light quality (high YPF). The YPF curve indicates that orange and red photons between 600 and 630 nm can result in 20 to 30% more photosynthesis than blue or cyan photons between 400 and 540 nm. [ 9 ] [ 10 ] But the YPF curve was developed from short-term measurements made on single leaves in low light. More recent longer-term studies with whole plants in higher light indicate that light quality may have a smaller effect on plant growth rate than light quantity. Blue light, while not delivering as many photons per joule, encourages leaf growth and affects other outcomes. [ 9 ] [ 11 ]
The conversion between energy-based PAR and photon-based PAR depends on the spectrum of the light source (see Photosynthetic efficiency ). The following table shows the conversion factors from watts for black-body spectra that are truncated to the range 400–700 nm. It also shows the luminous efficacy for these light sources and the fraction of a real black-body radiator that is emitted as PAR.
For example, a light source of 1000 lm at a color temperature of 5800 K would emit approximately 1000/265 = 3.8 W of PAR, which is equivalent to 3.8 × 4.56 = 17.3 μmol/s. For a black-body light source at 5800 K, which approximates the sun, a fraction of 0.368 of its total emitted radiation is emitted as PAR. For artificial light sources, which usually do not have a black-body spectrum, these conversion factors are only approximate.
The quantities in the table are calculated as follows. The photon content of PAR (in μmol/J) is

$$\frac{10^{6}}{N_{\text{A}}} \cdot \frac{\int_{\lambda_1}^{\lambda_2} B(\lambda,T)\,\frac{\lambda}{hc}\,d\lambda}{\int_{\lambda_1}^{\lambda_2} B(\lambda,T)\,d\lambda},$$

the luminous efficacy (in lm/W) is

$$683 \cdot \frac{\int_{\lambda_1}^{\lambda_2} B(\lambda,T)\,y(\lambda)\,d\lambda}{\int_{\lambda_1}^{\lambda_2} B(\lambda,T)\,d\lambda},$$

and the fraction of black-body radiation emitted as PAR is

$$\frac{\int_{\lambda_1}^{\lambda_2} B(\lambda,T)\,d\lambda}{\int_{0}^{\infty} B(\lambda,T)\,d\lambda},$$

where $B(\lambda,T)$ is the black-body spectrum according to Planck's law , $y(\lambda)$ is the standard luminosity function , $\lambda_1, \lambda_2$ represent the wavelength range (400–700 nm) of PAR, and $N_{\text{A}}$ is the Avogadro constant .
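These integrals can be evaluated numerically. Below is a minimal sketch, assuming NumPy and SciPy are available; the luminous-efficacy factor is omitted because it additionally requires the tabulated luminosity function $y(\lambda)$:

```python
# Numerical sketch of the table's conversion factors: integrate Planck's
# law over the PAR band (400-700 nm).
import numpy as np
from scipy.integrate import quad

h = 6.626e-34     # Planck constant, J s
c = 2.998e8       # speed of light, m/s
kB = 1.381e-23    # Boltzmann constant, J/K
NA = 6.022e23     # Avogadro constant, mol^-1
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck(lam, T):
    """Black-body spectral radiance (per unit wavelength), W m^-3 sr^-1."""
    return (2 * h * c**2 / lam**5) / (np.exp(h * c / (lam * kB * T)) - 1)

def photon_content(T, lam1=400e-9, lam2=700e-9):
    """Micromoles of photons per joule within the truncated PAR spectrum."""
    energy = quad(lambda l: planck(l, T), lam1, lam2)[0]
    photons = quad(lambda l: planck(l, T) * l / (h * c), lam1, lam2)[0]
    return photons / energy / NA * 1e6

def par_fraction(T, lam1=400e-9, lam2=700e-9):
    """Fraction of total black-body emission that falls in the PAR band."""
    par = quad(lambda l: planck(l, T), lam1, lam2)[0]
    return par / (SIGMA * T**4 / np.pi)  # denominator: full Planck integral

print(round(photon_content(5800), 2))  # ~4.56 umol/J, the table's solar value
print(round(par_fraction(5800), 3))    # ~0.368, as quoted above
```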
Besides the amount of radiation reaching a plant in the PAR region of the spectrum, it is also important to consider the quality of such radiation. Radiation reaching a plant contains entropy as well as energy, and combining those two concepts the exergy can be determined. This sort of analysis is known as exergy analysis or second law analysis, and the exergy represents a measure of the useful work, i.e., the useful part of radiation which can be transformed into other forms of energy.
The spectral distribution of the exergy of radiation has been derived in the literature. [ 12 ]
One of the advantages of working with the exergy is that it depends not only on the temperature of the emitter (the Sun), $T$, but also on the temperature of the receiving body (the plant), $T_0$; i.e., it includes the fact that the plant is also emitting radiation. Writing $x = \frac{hc}{\lambda k T}$ and $y = \frac{hc}{\lambda k T_0}$, the exergy emissive power of radiation in a wavelength band can be expressed in closed form in terms of the polylogarithm , a special function denoted $\mathrm{Li}_s(z)$.
By definition, the exergy obtained by the receiving body is always lower than the energy radiated by the emitting blackbody, as a consequence of the entropy content in radiation.
Thus, as a consequence of the entropy content, not all the radiation reaching the Earth's surface is "useful" to produce work. Therefore, the efficiency of a process involving radiation should be measured against its exergy, not its energy.
Using the expression above, the optimal efficiency or second law efficiency for the conversion of radiation to work in the PAR region [ 13 ] (from λ 1 = {\displaystyle \lambda _{1}=} 400 nm to λ 2 = {\displaystyle \lambda _{2}=} 700 nm), for a blackbody at T {\displaystyle T} = 5800 K and an organism at T 0 {\displaystyle T_{0}} = 300 K is determined as:
The result is about 8.3% lower than the energy-based value considered until now, a direct consequence of the fact that the organisms using solar radiation also emit radiation as a consequence of their own temperature. The conversion factor of an organism will therefore differ depending on its temperature, and the exergy concept is more suitable than the energy one.
Researchers at Utah State University compared measurements for PPF and YPF using different types of equipment. They measured the PPF and YPF of seven common radiation sources with a spectroradiometer, then compared with measurements from six quantum sensors designed to measure PPF, and three quantum sensors designed to measure YPF.
They found that the PPF and YPF sensors were the least accurate for narrow-band sources (narrow spectrum of light) and most accurate for broad-band sources (fuller spectra of light). They found that PPF sensors were significantly more accurate under metal halide, low-pressure sodium and high-pressure sodium lamps than YPF sensors (>9% difference). Both YPF and PPF sensors were very inaccurate (>18% error) when used to measure light from red-light-emitting diodes. [ 8 ]
Photobiologically Active Radiation (PBAR) is a range of light energy beyond and including PAR. Photobiological Photon Flux (PBF) is the metric used to measure PBAR. | https://en.wikipedia.org/wiki/Photosynthetically_active_radiation |
Photosystems are functional and structural units of protein complexes involved in photosynthesis . Together they carry out the primary photochemistry of photosynthesis : the absorption of light and the transfer of energy and electrons . Photosystems are found in the thylakoid membranes of plants, algae, and cyanobacteria. These membranes are located inside the chloroplasts of plants and algae, and in the cytoplasmic membrane of photosynthetic bacteria. There are two kinds of photosystems: PSI and PSII.
PSII absorbs red light, and PSI absorbs far-red light. Although photosynthetic activity is detected when the photosystems are exposed to either red or far-red light, it is greatest when plants are exposed to both wavelengths. Studies have demonstrated that the two wavelengths together have a synergistic effect on the photosynthetic activity, rather than an additive one. [ 1 ]
Each photosystem has two parts: a reaction center, where the photochemistry occurs, and an antenna complex , which surrounds the reaction center. The antenna complex contains hundreds of chlorophyll molecules which funnel the excitation energy to the center of the photosystem. At the reaction center, the energy will be trapped and transferred to produce a high energy molecule. [ 2 ]
The main function of PSII is to efficiently split water into oxygen molecules and protons. PSII provides a steady stream of electrons to PSI, which boosts them in energy and transfers them to NADP + and H + to make NADPH . The hydrogen from this NADPH can then be used in a number of different processes within the plant. [ 2 ]
Reaction centers are multi-protein complexes found within the thylakoid membrane.
At the heart of a photosystem lies the reaction center , which is an enzyme that uses light to reduce and oxidize molecules (give off and take up electrons). This reaction center is surrounded by light-harvesting complexes that enhance the absorption of light.
In addition, surrounding the reaction center are pigments which absorb light. The pigments that absorb light at the highest energy level are found furthest from the reaction center, while the pigments with the lowest energy level are more closely associated with it. Energy is efficiently transferred from the outer part of the antenna complex to the inner part. This funneling of energy is performed via resonance transfer, which occurs when energy from an excited molecule is transferred to a molecule in the ground state. This ground-state molecule is excited in turn, and the process continues between molecules all the way to the reaction center. At the reaction center, the electrons on the special chlorophyll molecule are excited and ultimately transferred away by electron carriers. (If the electrons were not transferred away after excitation to a high energy state, they would lose energy by fluorescence back to the ground state, which would not allow plants to drive photosynthesis.) The reaction center drives photosynthesis by taking light and turning it into chemical energy [ 3 ] that can then be used by the chloroplast. [ 2 ]
Two families of reaction centers in photosystems can be distinguished: type I reaction centers (such as photosystem I ( P700 ) in chloroplasts and in green-sulfur bacteria) and type II reaction centers (such as photosystem II ( P680 ) in chloroplasts and in non-sulfur purple bacteria). The two photosystems originated from a common ancestor, but have since diversified. [ 4 ] [ 5 ]
Each photosystem can be identified by the wavelength of light to which it is most reactive (700 nanometers for PSI and 680 nanometers for PSII in chloroplasts), the amount and type of light-harvesting complex present, and the type of terminal electron acceptor used.
Type I photosystems use ferredoxin -like iron-sulfur cluster proteins as terminal electron acceptors, while type II photosystems ultimately shuttle electrons to a quinone terminal electron acceptor. Both reaction center types are present in chloroplasts and cyanobacteria, and work together to form a unique photosynthetic chain able to extract electrons from water, creating oxygen as a byproduct.
A reaction center comprises several (about 25-30) [ 6 ] protein subunits, which provide a scaffold for a series of cofactors. The cofactors can be pigments (like chlorophyll , pheophytin , carotenoids ), quinones, or iron-sulfur clusters . [ 7 ]
Each photosystem has two main subunits: an antenna complex (a light harvesting complex or LHC) and a reaction center. The antenna complex is where light is captured, while the reaction center is where this light energy is transformed into chemical energy. At the reaction center, there are many polypeptides that are surrounded by pigment proteins. At the center of the reaction center is a special pair of chlorophyll molecules.
Each PSII has about 8 LHCII. These contain about 14 chlorophyll a and chlorophyll b molecules, as well as about four carotenoids . In the reaction center of PSII of plants and cyanobacteria, the light energy is used to split water into oxygen, protons, and electrons. The protons will be used in proton pumping to fuel the ATP synthase at the end of an electron transport chain . A majority of the reactions occur at the D1 and D2 subunits of PSII.
Both photosystem I and II are required for oxygenic photosynthesis. Oxygenic photosynthesis can be performed by plants and cyanobacteria; cyanobacteria are believed to be the progenitors of the photosystem-containing chloroplasts of eukaryotes . Photosynthetic bacteria that cannot produce oxygen have only one photosystem, which is similar to either PSI or PSII .
At the core of photosystem II is P680, a special chlorophyll to which incoming excitation energy from the antenna complex is funneled. One of the electrons of excited P680* is transferred to a non-fluorescent molecule, which ionizes the chlorophyll and boosts its energy further, enough that it can split water in the oxygen evolving complex (OEC) of PSII and recover its electron. [ citation needed ] At the heart of the OEC are 4 Mn atoms, each of which can trap one electron. The electrons harvested from the splitting of two water molecules fill the OEC in its highest-energy state, which holds 4 excess electrons. [ 2 ]
Electrons travel through the cytochrome b6f complex to photosystem I via an electron transport chain within the thylakoid membrane . The energy of the electrons energized by PSII drives this process [ citation needed ] and is harnessed (the whole process is termed chemiosmosis ) to pump protons across the membrane, into the thylakoid lumen space from the chloroplast stroma. This provides a potential energy difference between lumen and stroma, which amounts to a proton-motive force that can be utilized by the proton-driven ATP synthase to generate ATP. If electrons only pass through once, the process is termed noncyclic photophosphorylation, but if they pass through PSI and the proton pump multiple times it is called cyclic photophosphorylation.
When the electron reaches photosystem I, it fills the electron deficit of light-excited reaction-center chlorophyll P700 + of PSI. The electron may either continue to go through cyclic electron transport around PSI or pass, via ferredoxin, to the enzyme NADP + reductase. Electrons and protons are added to NADP + to form NADPH.
This reducing (hydrogenation) agent is transported to the Calvin cycle to react with glycerate 3-phosphate , along with ATP to form glyceraldehyde 3-phosphate , the basic building block from which plants can make a variety of substances.
In intense light, plants use various mechanisms to prevent damage to their photosystems. They are able to release some light energy as heat, but the excess light can also produce reactive oxygen species . While some of these can be detoxified by antioxidants , the remaining oxygen species will be detrimental to the photosystems of the plant. More specifically, the D1 subunit in the reaction center of PSII can be damaged. Studies have found that deg1 proteins are involved in the degradation of these damaged D1 subunits. New D1 subunits can then replace these damaged D1 subunits in order to allow PSII to function properly again. [ 8 ] | https://en.wikipedia.org/wiki/Photosystem |
Photosystem I ( PSI , or plastocyanin–ferredoxin oxidoreductase ) is one of two photosystems in the photosynthetic light reactions of algae , plants , and cyanobacteria . Photosystem I [ 1 ] is an integral membrane protein complex that uses light energy to catalyze the transfer of electrons across the thylakoid membrane from plastocyanin to ferredoxin . Ultimately, the electrons that are transferred by Photosystem I are used to produce the moderate-energy hydrogen carrier NADPH . [ 2 ] The photon energy absorbed by Photosystem I also produces a proton-motive force that is used to generate ATP . PSI is composed of more than 110 cofactors , significantly more than Photosystem II . [ 3 ]
This photosystem is known as PSI because it was discovered before Photosystem II, although future experiments showed that Photosystem II is actually the first enzyme of the photosynthetic electron transport chain. Aspects of PSI were discovered in the 1950s, but the significance of these discoveries was not yet recognized at the time. [ 4 ] Louis Duysens first proposed the concepts of Photosystems I and II in 1960, and, in the same year, a proposal by Fay Bendall and Robert Hill assembled earlier discoveries into a coherent theory of serial photosynthetic reactions. [ 4 ] Hill and Bendall's hypothesis was later confirmed in experiments conducted in 1961 by the Duysens and Witt groups. [ 4 ]
Two main subunits of PSI, PsaA and PsaB, are closely related proteins involved in the binding of the vital electron transfer cofactors P 700 , Acc, A 0 , A 1 , and F x . PsaA and PsaB are both integral membrane proteins of 730 to 750 amino acids that contain 11 transmembrane segments. A [4Fe-4S] iron-sulfur cluster called F x is coordinated by four cysteines , two provided by PsaA and two by PsaB. The two cysteines in each subunit are proximal, located in a loop between the ninth and tenth transmembrane segments. A leucine zipper motif seems to be present [ 5 ] downstream of the cysteines and could contribute to dimerisation of PsaA/PsaB. The terminal electron acceptors F A and F B , also [4Fe-4S] iron-sulfur clusters, are located in a 9-kDa protein called PsaC that binds to the PsaA/PsaB core near F X . [ 6 ] [ 7 ]
Photoexcitation of the pigment molecules in the antenna complex induces electron and energy transfer. [ 10 ]
The antenna complex is composed of molecules of chlorophyll and carotenoids mounted on two proteins. [ 11 ] These pigment molecules transmit the resonance energy from photons when they become photoexcited. Antenna molecules can absorb all wavelengths of light within the visible spectrum . [ 12 ] The number of these pigment molecules varies from organism to organism. For instance, the cyanobacterium Synechococcus elongatus ( Thermosynechococcus elongatus ) has about 100 chlorophylls and 20 carotenoids, whereas spinach chloroplasts have around 200 chlorophylls and 50 carotenoids. [ 12 ] [ 3 ] Located within the antenna complex of PSI are molecules of chlorophyll called P700 reaction centers. The energy passed around by antenna molecules is directed to the reaction center. There may be as many as 120 or as few as 25 chlorophyll molecules per P700. [ 13 ]
The P700 reaction center is composed of modified chlorophyll a that best absorbs light at a wavelength of 700 nm . [ 14 ] P700 receives energy from antenna molecules and uses the energy from each photon to raise an electron to a higher energy level (P700*). These electrons are moved in pairs in an oxidation/reduction process from P700* to electron acceptors, leaving behind P700 + . The P700*/P700 + redox pair has an electric potential of about −1.2 volts . The reaction center is made of two chlorophyll molecules and is therefore referred to as a dimer , [ 11 ] thought to be composed of one chlorophyll a molecule and one chlorophyll a ′ molecule. However, if P700 forms a complex with other antenna molecules, it can no longer be a dimer. [ 13 ]
The two modified chlorophyll molecules are early electron acceptors in PSI. They are present one per PsaA/PsaB side, forming two branches that electrons can take to reach F x . A 0 accepts an electron from P700* and passes it to A 1 on the same side, which then passes the electron to the quinone on that side. Different species seem to have different preferences for the A or B branch. [ 15 ]
A phylloquinone , sometimes called vitamin K 1 , [ 16 ] is the next early electron acceptor in PSI. It oxidizes A 1 in order to receive the electron and in turn is re-oxidized by F x , from which the electron is passed to F b and F a . [ 16 ] [ 17 ] The reduction of F x appears to be the rate-limiting step. [ 15 ]
Three proteinaceous iron–sulfur reaction centers are found in PSI. Labeled F x , F a , and F b , they serve as electron relays. [ 18 ] F a and F b are bound to protein subunits of the PSI complex and F x is tied to the PSI complex. [ 18 ] Various experiments have shown some disparity between theories of iron–sulfur cofactor orientation and operation order. [ 18 ] In one model, F x passes an electron to F a , which passes it on to F b to reach the ferredoxin. [ 15 ]
Ferredoxin (Fd) is a soluble protein that facilitates reduction of NADP + to NADPH. [ 19 ] Fd moves to carry an electron either to a lone thylakoid or to an enzyme that reduces NADP + . [ 19 ] Thylakoid membranes have one binding site for each function of Fd. [ 19 ] The main function of Fd is to carry an electron from the iron-sulfur complex to the enzyme ferredoxin– NADP + reductase . [ 19 ]
This enzyme transfers the electron from reduced ferredoxin to NADP + to complete the reduction to NADPH. [ 20 ] FNR may also accept an electron from NADPH by binding to it. [ 20 ]
Plastocyanin is an electron carrier that transfers the electron from cytochrome b6f to the P700 cofactor of PSI in its ionized state P700 + . [ 10 ] [ 21 ]
The Ycf4 protein domain found on the thylakoid membrane is vital to photosystem I. This thylakoid transmembrane protein helps assemble the components of photosystem I. Without it, photosynthesis would be inefficient. [ 22 ]
Molecular data show that PSI likely evolved from the photosystems of green sulfur bacteria . The photosystems of green sulfur bacteria and those of cyanobacteria , algae , and higher plants are not the same, but there are many analogous functions and similar structures. Three main features are similar between the different photosystems. [ 23 ] First, the redox potential is negative enough to reduce ferredoxin. [ 23 ] Next, the electron-accepting reaction centers include iron–sulfur proteins. [ 23 ] Last, redox centres in complexes of both photosystems are constructed upon a protein subunit dimer. [ 23 ] The photosystem of green sulfur bacteria even contains all of the same cofactors of the electron transport chain in PSI. [ 23 ] The number and degree of similarities between the two photosystems strongly indicate that PSI and the analogous photosystem of green sulfur bacteria evolved from a common ancestral photosystem. | https://en.wikipedia.org/wiki/Photosystem_I |
Photosystem II (or water-plastoquinone oxidoreductase ) is the first protein complex in the light-dependent reactions of oxygenic photosynthesis . It is located in the thylakoid membrane of plants , algae , and cyanobacteria . Within the photosystem, enzymes capture photons of light to energize electrons that are then transferred through a variety of coenzymes and cofactors to reduce plastoquinone to plastoquinol. The energized electrons are replaced by oxidizing water to form hydrogen ions and molecular oxygen.
By replenishing lost electrons with electrons from the splitting of water , photosystem II provides the electrons for all of photosynthesis to occur. The hydrogen ions (protons) generated by the oxidation of water help to create a proton gradient that is used by ATP synthase to generate ATP . The energized electrons transferred to plastoquinone are ultimately used to reduce NADP + to NADPH or are used in cyclic electron flow . [ 1 ] DCMU is a chemical often used in laboratory settings to inhibit photosynthesis. When present, DCMU inhibits electron flow from photosystem II to plastoquinone.
The core of PSII consists of a pseudo-symmetric heterodimer of two homologous proteins D1 and D2. [ 2 ] Unlike the reaction centers of all other photosystems in which the positive charge sitting on the chlorophyll dimer that undergoes the initial photoinduced charge separation is equally shared by the two monomers, in intact PSII the charge is mostly localized on one chlorophyll center (70−80%). [ 3 ] Because of this, P680 + is highly oxidizing and can take part in the splitting of water. [ 2 ]
Photosystem II (of cyanobacteria and green plants) is composed of around 20 subunits (depending on the organism) as well as other accessory, light-harvesting proteins. Each photosystem II contains at least 99 cofactors: 35 chlorophyll a, 12 beta-carotene , two pheophytin , two plastoquinone , two heme , one bicarbonate, 20 lipids, the Mn 4 CaO 5 cluster (including two chloride ions), one non heme Fe 2+ and two putative Ca 2+ ions per monomer. [ 4 ] There are several crystal structures of photosystem II. [ 5 ] The PDB accession codes for this protein are 3WU2 , 3BZ1 , 3BZ2 (3BZ1 and 3BZ2 are monomeric structures of the Photosystem II dimer), [ 4 ] 2AXT , 1S5L , 1W5C , 1ILX , 1FE1 , 1IZL .
The oxygen-evolving complex is the site of water oxidation. It is a metallo-oxo cluster comprising four manganese ions (in oxidation states ranging from +3 to +4) [ 6 ] and one divalent calcium ion. When it oxidizes water, producing oxygen gas and protons, it sequentially delivers the four electrons from water to a tyrosine (D1-Y161) sidechain and then to P680 itself. It is composed of three protein subunits, OEE1 (PsbO), OEE2 (PsbP) and OEE3 (PsbQ); a fourth PsbR peptide is associated nearby.
The first structural model of the oxygen-evolving complex was solved using X-ray crystallography from frozen protein crystals with a resolution of 3.8 Å in 2001. [ 7 ] Over the next years the resolution of the model was gradually increased to 2.9 Å. [ 8 ] [ 9 ] [ 10 ] While obtaining these structures was in itself a great feat, they did not show the oxygen-evolving complex in full detail. In 2011 the OEC of PSII was resolved to a level of 1.9 Å, revealing five oxygen atoms serving as oxo bridges linking the five metal atoms and four water molecules bound to the Mn 4 CaO 5 cluster; more than 1,300 water molecules were found in each photosystem II monomer, some forming extensive hydrogen-bonding networks that may serve as channels for protons, water or oxygen molecules. [ 11 ] It has since been suggested that the structures obtained by X-ray crystallography are biased, since there is evidence that the manganese atoms are reduced by the high-intensity X-rays used, altering the observed OEC structure. This incentivized researchers to take their crystals to a different kind of X-ray facility, X-ray free-electron lasers such as SLAC in the USA, and in 2014 the structure observed in 2011 was confirmed. [ 12 ] Knowing the structure of photosystem II did not suffice to reveal exactly how it works, so a race has begun to solve its structure at different stages of the mechanistic cycle (discussed below). Structures of the S 1 and S 3 states have been published almost simultaneously by two different groups, showing the addition of an oxygen molecule designated O6 between Mn1 and Mn4, [ 13 ] [ 14 ] suggesting that this may be the site on the oxygen-evolving complex where oxygen is produced.
Photosynthetic water splitting (or oxygen evolution ) is one of the most important reactions on the planet, since it is the source of nearly all the atmosphere's oxygen. Moreover, artificial photosynthetic water-splitting may contribute to the effective use of sunlight as an alternative energy source.
The mechanism of water oxidation is understood in substantial detail. [ 15 ] [ 16 ] [ 17 ] The oxidation of water to molecular oxygen requires extraction of four electrons and four protons from two molecules of water. The experimental evidence that oxygen is released through cyclic reaction of the oxygen evolving complex (OEC) within one PSII was provided by Pierre Joliot et al. [ 18 ] They showed that, if dark-adapted photosynthetic material (higher plants, algae, and cyanobacteria) is exposed to a series of single-turnover flashes, oxygen evolution is detected with a typical period-four damped oscillation, with maxima on the third and the seventh flash and with minima on the first and the fifth flash (for review, see [ 19 ] ). Based on this experiment, Bessel Kok and co-workers [ 20 ] introduced a cycle of five flash-induced transitions of the so-called S-states , describing the four redox states of the OEC: when four oxidizing equivalents have been stored (at the S 4 -state), the OEC returns to its basic S 0 -state. In the absence of light, the OEC will "relax" to the S 1 state; the S 1 state is often described as being "dark-stable". The S 1 state is largely considered to consist of manganese ions with oxidation states of Mn 3+ , Mn 3+ , Mn 4+ , Mn 4+ . [ 21 ] Finally, intermediate S-states [ 22 ] were proposed by Jablonsky and Lazar as a regulatory mechanism and link between the S-states and tyrosine Z.
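Kok's S-state clock reproduces the period-four oscillation in a few lines of simulation. A hedged sketch (Python; the miss and double-hit probabilities are illustrative placeholders, not fitted values from the literature):

```python
# Toy Kok-cycle simulation: each flash advances every center (starting
# dark-adapted in S1) by one S-state, with small "miss" and "double-hit"
# probabilities; O2 is released on the transient S4 -> S0 transition.
import random

def flash_train(n_flashes=8, miss=0.1, double_hit=0.05, n_centers=10000):
    states = [1] * n_centers              # dark-adapted OEC relaxes to S1
    yields = []
    for _ in range(n_flashes):
        o2 = 0
        for i in range(n_centers):
            r = random.random()
            steps = 0 if r < miss else (2 if r < miss + double_hit else 1)
            for _ in range(steps):
                states[i] += 1
                if states[i] == 4:        # S4 decays to S0, releasing O2
                    states[i] = 0
                    o2 += 1
        yields.append(o2 / n_centers)
    return yields

print([round(y, 2) for y in flash_train()])
# Maxima on flashes 3 and 7: a damped period-four oscillation.
```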
In 2012, Renger expressed the idea of internal changes of water molecules into typical oxides in different S-states during water splitting. [ 23 ]
Inhibitors of PSII are used as herbicides. There are two main chemical families: the triazines, derived from cyanuric chloride , [ 24 ] of which atrazine and simazine are the most commonly used; and the aryl ureas, which include chlortoluron and diuron (DCMU). [ 25 ] [ 26 ] | https://en.wikipedia.org/wiki/Photosystem_II |
Photosystem II light-harvesting proteins are the intrinsic transmembrane proteins CP43 (PsbC) and CP47 (PsbB) occurring in the reaction centre of photosystem II (PSII). These polypeptides bind chlorophyll a and β-carotene and pass the excitation energy on to the reaction centre. [ 1 ] [ 2 ]
This family also includes the iron -stress induced chlorophyll-binding protein CP43', encoded by the IsiA gene, which evolved in cyanobacteria from a PSII protein to cope with light limitations and stress conditions. Under iron-deficient growth conditions, CP43' associates with photosystem I (PSI) to form a complex that consists of a ring of 18 or more CP43' molecules around a PSI trimer, which significantly increases the light-harvesting system of PSI. The IsiA protein can also provide photoprotection for PSII. [ 3 ]
Plants , algae and some bacteria use two photosystems , PSI with P700 and PSII with P680 . Using light energy, PSII acts first to channel an electron through a series of acceptors that drive a proton pump to generate adenosine triphosphate (ATP), before passing the electron on to PSI. Once the electron reaches PSI, it has used most of its energy in producing ATP, but a second photon of light captured by P700 provides the required energy to channel the electron to ferredoxin , generating reducing power in the form of NADPH . The ATP and NADPH produced by PSII and PSI, respectively, are used in the light-independent reactions for the formation of organic compounds. This process is non-cyclic, because the electron from PSII is lost and is only replenished through the oxidation of water . Hence, there is a constant flow of electrons and associated hydrogen atoms from water for the formation of organic compounds. It is this stripping of hydrogens from water that produces the oxygen we breathe. [ 4 ]
IsiA has an inverse relationship with the iron stress repressed RNA ( IsrR ). IsrR is an antisense RNA that acts as a reversible switch that responds to changes in environmental conditions to modulate the expression of IsiA . | https://en.wikipedia.org/wiki/Photosystem_II_light-harvesting_protein |
Phototaxis is a kind of taxis , or locomotory movement, that occurs when a whole organism moves towards or away from a stimulus of light . [ 2 ] This is advantageous for phototrophic organisms as they can orient themselves most efficiently to receive light for photosynthesis . Phototaxis is called positive if the movement is in the direction of increasing light intensity and negative if the direction is opposite. [ 3 ]
Phototaxis has been described in microorganisms and algae, insects and other invertebrates, and vertebrates. Typically, nocturnal insects can show positive phototaxis, while nocturnal mammals often show negative phototaxis.
Phototaxis can be advantageous for phototrophic bacteria, as they can orient themselves most efficiently to receive light for photosynthesis .
Two types of positive phototaxis are observed in prokaryotes (bacteria and archaea ). The first is called "scotophobotaxis" (from the word " scotophobia "), which is observed only under a microscope. This occurs when a bacterium swims by chance out of the area illuminated by the microscope. Entering darkness signals the cell to reverse its flagellar rotation direction and reenter the light. The second type of phototaxis is true phototaxis, which is a directed movement up a gradient to an increasing amount of light. This is analogous to positive chemotaxis except that the attractant is light rather than a chemical.
Phototactic responses are observed in a number of bacteria and archaea, such as Serratia marcescens . Photoreceptor proteins are light-sensitive proteins involved in the sensing of and response to light in a variety of organisms. Some examples are bacteriorhodopsin and bacteriophytochromes in some bacteria. See also: phytochrome and phototropism .
Most prokaryotes (bacteria and archaea) are unable to sense the direction of light, because at such a small scale it is very difficult to make a detector that can distinguish a single light direction. Still, prokaryotes can measure light intensity and move in a light-intensity gradient. Some gliding filamentous prokaryotes can even sense light direction and make directed turns, but their phototactic movement is very slow. Some bacteria and archaea are phototactic. [ 4 ] [ 5 ] [ 1 ]
In most cases the mechanism of phototaxis is a biased random walk, analogous to bacterial chemotaxis. Halophilic archaea, such as Halobacterium salinarum , use sensory rhodopsins (SRs) for phototaxis. [ 6 ] [ 7 ] Rhodopsins are 7 transmembrane proteins that bind retinal as a chromophore . Light triggers the isomerization of retinal, [ 8 ] which leads to phototransductory signalling via a two-component phosphotransfer relay system. Halobacterium salinarum has two SRs, SRI and SRII, which signal via the transducer proteins Htr1 and Htr2 (halobacterial transducers for SRs I and II), respectively. [ 9 ] [ 10 ] The downstream signalling in phototactic archaebacteria involves CheA, a histidine kinase , which phosphorylates the response regulator, CheY. [ 11 ] Phosphorylated CheY induces swimming reversals. The two SRs in Halobacterium have different functions. SRI acts as an attractant receptor for orange light and, through a two-photon reaction, a repellent receptor for near-UV light, while SRII is a repellent receptor for blue light. Depending on which receptor is expressed, if a cell swims up or down a steep light gradient, the probability of flagellar switch will be low. If light intensity is constant or changes in the wrong direction, a switch in the direction of flagellar rotation will reorient the cell in a new, random direction. [ 12 ] As the length of the tracks is longer when the cell follows a light gradient, cells will eventually get closer to or further away from the light source. This strategy does not allow orientation along the light vector and only works if a steep light gradient is present (i.e. not in open water). [ 1 ]
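The biased-random-walk strategy can be illustrated with a toy one-dimensional model (Python; the reversal probabilities, step size and linear gradient are invented for illustration and do not correspond to measured values):

```python
# A cell moving along a light gradient reverses less often while the
# perceived intensity is increasing, producing net drift toward the light.
import random

def biased_walk(steps=2000, p_rev_improving=0.05, p_rev_worsening=0.4):
    x, direction = 0.0, 1
    prev_intensity = x                    # linear light gradient I(x) = x
    for _ in range(steps):
        x += 0.1 * direction
        if random.random() < (p_rev_improving if x > prev_intensity
                              else p_rev_worsening):
            direction *= -1               # flagellar switch: reversal
        prev_intensity = x
    return x

print(sum(biased_walk() for _ in range(100)) / 100)  # positive: toward light
```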
Some cyanobacteria (e.g. Anabaena , Synechocystis ) can slowly orient along a light vector. This orientation occurs in filaments or colonies, but only on surfaces and not in suspension. [ 13 ] [ 14 ] The cyanobacterium Synechocystis is capable of both positive and negative two-dimensional phototactic orientation. The positive response is probably mediated by a bacteriophytochrome photoreceptor, TaxD1. This protein has two chromophore-binding GAF domains, which bind biliverdin chromophore, [ 15 ] and a C-terminal domain typical for bacterial taxis receptors ( MCP signal domain). TaxD1 also has two N-terminal transmembrane segments that anchor the protein to the membrane. [ 16 ] [ 17 ] [ 18 ] The photoreceptor and signalling domains are cytoplasmic and signal via a CheA/CheY-type signal transduction system to regulate motility by type IV pili. [ 19 ] TaxD1 is localized at the poles of the rod-shaped cells of Synechococcus elongatus , similarly to MCP containing chemosensory receptors in bacteria and archaea. [ 20 ] How the steering of the filaments is achieved is not known. The slow steering of these cyanobacterial filaments is the only light-direction sensing behaviour prokaryotes could evolve owing to the difficulty in detecting light direction at this small scale. [ 1 ]
The ability to link light perception to control of motility is found in a very wide variety of prokaryotes, indicating that this ability must confer a range of physiological advantages. [ 22 ] [ 23 ] Most directly, the light environment is crucial to phototrophs as their energy source. Phototrophic prokaryotes are extraordinarily diverse, with a likely role for horizontal gene transfer in spreading phototrophy across multiple phyla. [ 24 ] Thus, different groups of phototrophic prokaryotes may have little in common apart from their exploitation of light as an energy source, but it should be advantageous for any phototroph to be able to relocate in search of better light environments for photosynthesis. To do this efficiently requires the ability to control motility in response to integrated information on the intensity of light, the spectral quality of light and the physiological status of the cell. A second major reason for light-controlled motility is to avoid light at damaging intensities or wavelengths: this factor is not confined to photosynthetic bacteria since light (especially in the UV region) can be dangerous to all prokaryotes, primarily because of DNA and protein damage [ 25 ] and inhibition of the translation machinery by light-generated reactive oxygen species. [ 26 ] [ 21 ]
Finally, light signals potentially contain rich and complex information about the environment, and the possibility should not be excluded that bacteria make sophisticated use of this information to optimize their location and behavior. For example, plant or animal pathogens could use light information to control their location and interaction with their hosts, and in fact light signals are known to regulate development and virulence in several non-phototrophic prokaryotes. [ 27 ] [ 28 ] Phototrophs could also benefit from sophisticated information processing, since their optimal environment is defined by a complex combination of factors including light intensity, light quality, day and night cycles, the availability of raw materials and alternative energy sources, other beneficial or harmful physical and chemical factors and sometimes the presence of symbiotic partners. Light quality strongly influences specialized developmental pathways in certain filamentous cyanobacteria , including the development of motile hormogonia and nitrogen-fixing heterocysts . [ 29 ] Since hormogonia are important for establishing symbiotic partnerships between cyanobacteria and plants, and heterocysts are essential for nitrogen fixation in those partnerships, it is tempting to speculate that the cyanobacteria may be using light signals as one way to detect the proximity of a plant symbiotic partner. Within a complex and heterogeneous environment such as a phototrophic biofilm, many factors crucial for growth could vary dramatically even within the limited region that a single motile cell could explore. [ 30 ] [ 31 ] We should therefore expect that prokaryotes living in such environments might control their motility in response to a complex signal transduction network linking a range of environmental cues. [ 21 ]
The photophobic response is a change in the direction of motility in response to a relatively sudden increase in illumination: classically, the response is to a temporal change in light intensity, which the bacterium may experience as it moves into a brightly illuminated region. The directional switch may consist of a random selection of a new direction (‘tumbling’) or it may be a simple reversal in the direction of motility. Either has the effect of repelling cells from a patch of unfavorable light. Photophobic responses have been observed in prokaryotes as diverse as Escherichia coli , purple photosynthetic bacteria and haloarchaea . [ 32 ] [ 23 ] [ 21 ]
The scotophobic (fear of darkness) response is the converse of the photophobic response described above: a change in direction (tumbling or reversal) is induced when the cell experiences a relatively sudden drop in light intensity. Photophobic and scotophobic responses both cause cells to accumulate in regions of specific (presumably favorable) light intensity and spectral quality. Scotophobic responses have been well documented in purple photosynthetic bacteria, starting with the classic observations of Engelmann in 1883, [ 33 ] and in cyanobacteria. [ 22 ] Scotophobic/photophobic responses in flagellated bacteria closely resemble the classic ‘biased random walk’ mode of bacterial chemotaxis, which links perception of temporal changes in the concentration of a chemical attractant or repellent to the frequency of tumbling. [ 34 ] The only significant distinction is that the scotophobic/photophobic responses involve perception of temporal changes in light intensity rather than the concentration of a chemical. [ 21 ]
Photokinesis is a light-induced change in the speed (but not direction) of movement. Photokinesis may be negative (light-induced reduction of motility) or positive (light-induced stimulation of motility). Photokinesis can cause cells to accumulate in regions of favorable illumination: they linger in such regions or accelerate out of regions of unfavorable illumination. Photokinesis has been documented in cyanobacteria and purple photosynthetic bacteria. [ 22 ] [ 21 ]
True phototaxis consists of directional movement which may be either towards a light source (positive phototaxis) or away from a light source (negative phototaxis). In contrast to the photophobic/scotophobic responses, true phototaxis is not a response to a temporal change in light intensity. Generally, it seems to involve direct sensing of the direction of illumination rather than a spatial gradient of light intensity. True phototaxis in prokaryotes is sometimes combined with social motility, which involves the concerted movement of an entire colony of cells towards or away from the light source. This phenomenon could also be described as community phototaxis. True phototaxis is widespread in eukaryotic green algae , [ 35 ] but among the prokaryotes it has been documented only in cyanobacteria, [ 22 ] [ 17 ] and in social motility of colonies of the purple photosynthetic bacterium Rhodocista centenaria . [ 36 ] [ 21 ]
Some protists (unicellular eukaryotes) can also move toward or away from light, by coupling their locomotion strategy with a light-sensing organ. [ 38 ] Eukaryotes were the first in the history of life to evolve the ability to follow light direction in three dimensions in open water. The strategy of eukaryotic sensory integration, sensory processing and the speed and mechanics of tactic responses is fundamentally different from that found in prokaryotes. [ 39 ] [ 1 ]
Both single-celled and multi-cellular eukaryotic phototactic organisms have a fixed shape, are polarized, swim in a spiral and use cilia for swimming and phototactic steering. Signalling can happen via direct light-triggered ion currents , adenylyl cyclases or trimeric G-proteins . The photoreceptors used can also be very different (see below). However, signalling in all cases eventually modifies the beating activity of cilia. [ 1 ] The mechanics of phototactic orientation is analogous in all eukaryotes. A photosensor with a restricted view angle rotates to scan the space and signals periodically to the cilia to alter their beating, which will change the direction of the helical swimming trajectory. Three-dimensional phototaxis can be found in five out of the six eukaryotic major groups ( opisthokonts , Amoebozoa , plants , chromalveolates , excavates , rhizaria ). [ 1 ]
Pelagic phototaxis is present in green algae – it is not present in glaucophyte algae or red algae . [ 1 ] Green algae have a "stigma" located in the outermost portion of the chloroplast , directly underneath the two chloroplast membranes . The stigma is made of tens to several hundreds of lipid globules, which often form hexagonal arrays and can be arranged in one or more rows. The lipid globules contain a complex mixture of carotenoid pigments, which provide the screening function and the orange-red colour, [ 40 ] as well as proteins that stabilize the globules. [ 41 ] The stigma is located laterally, in a fixed plane relative to the cilia, but not directly adjacent to the basal bodies. [ 42 ] [ 43 ] The fixed position is ensured by the attachment of the chloroplast to one of the ciliary roots. [ 44 ] The pigmented stigma is not to be confused with the photoreceptor. The stigma only provides directional shading for the adjacent membrane-inserted photoreceptors (the term "eyespot" is therefore misleading). Stigmata can also reflect and focus light like a concave mirror, thereby enhancing sensitivity. [ 1 ]
In the best-studied green alga, Chlamydomonas reinhardtii , phototaxis is mediated by a rhodopsin pigment, as first demonstrated by the restoration of normal photobehaviour in a blind mutant by analogues of the retinal chromophore . [ 45 ] Two archaebacterial-type rhodopsins, channelrhodopsin -1 and -2 (CSRA and CSRB), [ 46 ] [ 47 ] were identified as phototaxis receptors in Chlamydomonas . [ 48 ] Both proteins have an N-terminal 7-transmembrane portion, similar to archaebacterial rhodopsins, followed by an approximately 400-residue C-terminal membrane-associated portion. CSRA and CSRB act as light-gated cation channels and trigger depolarizing photocurrents. [ 48 ] [ 49 ] CSRA was shown to localize to the stigma region using immunofluorescence analysis (Suzuki et al. 2003). Individual RNAi depletion of CSRA and CSRB modified the light-induced currents and revealed that CSRA mediates a fast, high-saturating current while CSRB mediates a slow, low-saturating one. Both currents are able to trigger photophobic responses and can have a role in phototaxis, [ 50 ] [ 49 ] although the exact contribution of the two receptors is not yet clear. [ 1 ]
As in all bikonts (plants, chromalveolates, excavates, rhizaria), green algae have two cilia, which are not identical. The anterior cilium is always younger than the posterior one. [ 51 ] [ 52 ] In every cell cycle, one daughter cell receives the anterior cilium and transforms it into a posterior one. The other daughter inherits the posterior, mature cilium. Both daughters then grow a new anterior cilium. [ 1 ]
As all other ciliary swimmers, green algae always swim in a spiral. The handedness of the spiral is robust and is guaranteed by the chirality of the cilia. The two cilia of green algae have different beat patterns and functions. In Chlamydomonas, the phototransduction cascade alters the stroke pattern and beating speed of the two cilia differentially in a complex pattern. [ 53 ] [ 54 ] This results in the reorientation of the helical swimming trajectory as long as the helical swimming axis is not aligned with the light vector. [ 1 ]
Positive and negative phototaxis can be found in several species of jellyfish such as those from the genus Polyorchis . Jellyfish use ocelli to detect the presence and absence of light, which is then translated into anti-predatory behaviour in the case of a shadow being cast over the ocelli, or feeding behaviour in the case of the presence of light. [ 55 ] Many tropical jellyfish have a symbiotic relationship with photosynthetic zooxanthellae that they harbor within their cells. [ 56 ] The zooxanthellae nourish the jellyfish, while the jellyfish protects them, and moves them toward light sources such as the sun to maximize their light-exposure for efficient photosynthesis. In a shadow, the jellyfish can either remain still, or quickly move away in bursts to avoid predation and also re-adjust toward a new light source. [ 57 ]
This motor response to light and its absence is facilitated by a chemical signal from the ocelli, which triggers the movement that carries the organism toward a light source. [ 57 ]
Phototaxis has been well studied in the marine ragworm Platynereis dumerilii . Both the Platynereis dumerilii trochophore and its metatrochophore larvae are positively phototactic. Phototaxis is mediated by simple eyespots that consist of a pigment cell and a photoreceptor cell . The photoreceptor cell synapses directly onto the ciliated cells that are used for swimming. The eyespots do not give spatial resolution, so the larvae rotate to scan their environment for the direction the light is coming from. [ 58 ]
Platynereis dumerilii larvae ( nectochaete ) can switch between positive and negative phototaxis. Phototaxis there is mediated by two pairs of more complex pigment-cup eyes. These eyes contain more photoreceptor cells, which are shaded by pigment cells forming a cup. The photoreceptor cells do not synapse directly onto ciliated cells or muscle cells, but onto inter-neurons of a processing center. This way the information of all four eye cups can be compared, and a low-resolution image of four pixels can be created, telling the larvae where the light is coming from. The larva thus does not need to scan its environment by rotating. [ 59 ] This is an adaptation to living on the sea floor, the lifestyle of the nectochaete larva, while scanning rotation is better suited to living in the open water column, the lifestyle of the trochophore larva. Phototaxis in the Platynereis dumerilii larva has a broad spectral range, which is covered by at least three opsins expressed by the cup eyes: [ 60 ] two rhabdomeric opsins [ 61 ] and a Go-opsin. [ 60 ]
However, not every behavior that looks like phototaxis is phototaxis: Platynereis dumerilii nectochaete and metatrochophore larvae swim up first when they are stimulated with UV light from above. But after a while, they change direction and avoid the UV light by swimming down. This looks like a change from positive to negative phototaxis, but the larvae also swim down if UV light comes non-directionally from the side. They do not swim toward or away from the light, but downward, [ 62 ] i.e., toward the center of gravity. Thus this is a UV-induced positive gravitaxis . Positive phototaxis (swimming toward the light from the surface) and positive gravitaxis (swimming toward the center of gravity) are induced by different ranges of wavelengths and cancel each other out at a certain ratio of wavelengths. [ 62 ] Since the wavelength composition of light changes with depth in water (short (UV, violet) and long (red) wavelengths are lost first), [ 60 ] phototaxis and gravitaxis form a ratio-chromatic depth gauge , which allows the larvae to determine their depth by the color of the surrounding water. This has the advantage over a brightness-based depth gauge that the color stays almost constant independent of the time of the day or whether it is cloudy. [ 63 ] [ 64 ]
Positive phototaxis can be found in many flying insects such as moths , grasshoppers , and flies . Drosophila melanogaster has been studied extensively for its innate positive phototactic response to light sources, using controlled experiments to understand how its airborne locomotion is directed toward light. [ 65 ] This innate response is common among insects that fly primarily during the night, which use transverse orientation relative to the light of the moon for orientation. [ 66 ] Artificial lighting in cities and populated areas results in a more pronounced positive response compared to that with the distant light of the moon, resulting in the organism repeatedly responding to this new supernormal stimulus and innately flying toward it.
Evidence for the innate nature of positive phototaxis in Drosophila melanogaster was obtained by altering the wings of several individual specimens, both physically (via removal) and genetically (via mutation). In both cases there was a noticeable lack of positive phototaxis, demonstrating that flying toward light sources is an innate response triggered when the organisms' photoreceptors detect light. [ 65 ]
Negative phototaxis can be observed in larval Drosophila melanogaster within the first three developmental instar stages, despite adult insects displaying positive phototaxis. [ 67 ] This behaviour is common among other species of insects that possess flightless larval and flying adult stages in their life cycles, only switching to positive phototaxis when searching for pupation sites. Tenebrio molitor , by comparison, is one species that carries its negative phototaxis into adulthood. [ 67 ]
Under experimental conditions, organisms that use positive phototaxis have also shown a correlation between light and magnetic-field responses. Under homogeneous light conditions with a shifting magnetic field, Drosophila melanogaster larvae reorient themselves toward the directions of predicted greater or lesser light intensity, as expected from a rotating magnetic field. In complete darkness, the larvae orient randomly without any notable preference. [ 67 ] This suggests the larvae combine perception of the magnetic field with their perception of light.
A depth can also be selected based on light levels: The brightness decreases with depth, but depends on the weather (e.g. whether it is sunny or cloudy) and the time of the day. Also the color depends on the water depth and dissolved and suspended matter. [ 63 ] [ 64 ] The only consistent factor is that at a given place, deeper water is darker.
In water, light attenuates differently for each wavelength . The UV and violet (< 420 nm) and red (> 500 nm) wavelengths disappear before blue light (470 nm), which penetrates clear water the deepest. [ 68 ] [ 60 ] The wavelength composition is constant for each depth and is almost independent of the time of day and the weather . To gauge depth, an animal would need two photopigments sensitive to different wavelengths to compare different ranges of the spectrum. [ 63 ] [ 64 ] Such pigments may be expressed in different structures.
Such different structures are found in the polychaete Torrea candida . Its eyes have a main and two accessory retinae . The accessory retinae sense UV-light ( λ max = 400 nm) and the main retina senses blue-green light ( λ max = 560 nm). If the light sensed from all retinae is compared, the depth can be estimated, and so for Torrea candida such a ratio-chromatic depth gauge has been proposed. [ 69 ]
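The principle behind such a ratio-chromatic depth gauge can be sketched with simple Beer–Lambert attenuation (Python; the attenuation coefficients are placeholders, not values measured for any real water body or species):

```python
# The ratio of a short-wavelength band to a longer one changes with depth
# but is insensitive to overall brightness (clouds, time of day).
import math

k_uv, k_cyan = 0.15, 0.03                 # attenuation coefficients, 1/m

def band_ratio(depth_m, brightness=1.0):
    uv = brightness * math.exp(-k_uv * depth_m)
    cyan = brightness * math.exp(-k_cyan * depth_m)
    return uv / cyan                      # brightness cancels out

for z in (0, 5, 10, 20):
    print(f"{z:2d} m -> UV/cyan ratio {band_ratio(z):.3f}")
```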
A ratio-chromatic depth gauge has also been found in larvae of the polychaete Platynereis dumerilii . [ 62 ] The larvae have two types of photoreceptor structures: the rhabdomeric photoreceptor cells of the eyes [ 70 ] and ciliary photoreceptor cells in the deep brain. The ciliary photoreceptor cells express a ciliary opsin , [ 71 ] a photopigment maximally sensitive to UV light ( λ max = 383 nm). [ 72 ] Thus, the ciliary photoreceptor cells respond to UV light and make the larvae swim down gravitactically. Gravitaxis here is countered by phototaxis , which makes the larvae swim up toward the light coming from the surface. [ 60 ] Phototaxis is mediated by the rhabdomeric eyes. [ 73 ] [ 62 ] The eyes express at least three opsins (at least in the older larvae), [ 61 ] one of which is maximally sensitive to cyan light ( λ max = 483 nm), so that the eyes cover a broad wavelength range with phototaxis. [ 60 ] When phototaxis and gravitaxis have leveled out, the larvae have found their preferred depth. [ 62 ] | https://en.wikipedia.org/wiki/Phototaxis |
Phototendering is the process by which organic fibres and textiles lose strength and flexibility due to exposure to sunlight. The ultraviolet component of the sun's spectrum affects fibres, causing chain degradation and, hence, loss of strength. Colour fade is a common problem in phototendering.
The rate of deterioration is also affected by pigments and dyes present in the textiles. The pigments themselves can also be affected, generally fading after UVA and UVB radiation exposure. Great care is needed to protect museum artefacts, such as ancient textiles, from the harmful effects of UV light, which can also be emitted by fluorescent lamps . Paintings such as watercolours need protection from sunlight to preserve their original colours.
Many synthetic polymers are also degraded by UV light, and polypropylene is especially susceptible. As a result, UV stabilisers are added to many thermoplastics . Carbon black is also effective in protecting products against UV degradation . | https://en.wikipedia.org/wiki/Phototendering |
The photothermal effect is a phenomenon associated with electromagnetic radiation . It is produced by the photoexcitation of material, resulting in the production of thermal energy (heat).
It is sometimes used during treatment of blood vessel lesions , laser resurfacing , laser hair removal and laser surgery .
| https://en.wikipedia.org/wiki/Photothermal_effect |
Photothermal microspectroscopy ( PTMS ), alternatively known as photothermal temperature fluctuation ( PTTF ), [ 1 ] [ 2 ] is derived from two parent instrumental techniques: infrared spectroscopy and atomic force microscopy (AFM). In one particular type of AFM, known as scanning thermal microscopy (SThM), the imaging probe is a sub-miniature temperature sensor, which may be a thermocouple or a resistance thermometer. [ 3 ] This same type of detector is employed in a PTMS instrument, enabling it to provide AFM/SThM images: However, the chief additional use of PTMS is to yield infrared spectra from sample regions below a micrometer, as outlined below.
The AFM is interfaced with an infrared spectrometer. For work using Fourier transform infrared spectroscopy (FTIR), the spectrometer is equipped with a conventional black-body infrared source. A particular region of the sample may first be chosen on the basis of the image obtained using the AFM imaging mode of operation. Then, when material at this location absorbs the electromagnetic radiation, heat is generated, which diffuses, giving rise to a decaying temperature profile. The thermal probe then detects the photothermal response of this region of the sample. The resultant measured temperature fluctuations provide an interferogram that replaces the interferogram obtained by a conventional FTIR setup, e.g., by direct detection of the radiation transmitted by a sample. The temperature profile can be sharpened by modulating the excitation beam. This results in the generation of thermal waves whose diffusion length is inversely proportional to the square root of the modulation frequency. An important advantage of the thermal approach is that it permits depth-sensitive subsurface information to be obtained from surface measurements, thanks to the dependence of the thermal diffusion length on modulation frequency.
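That inverse-square-root dependence is easy to quantify. A minimal sketch (Python), assuming the standard thermal-wave diffusion length μ = √(D/πf) and a diffusivity that is only a representative order of magnitude for a polymer:

```python
# Thermal diffusion length versus modulation frequency: raising the
# frequency by a factor of 100 shrinks the probed depth tenfold.
import math

def diffusion_length_m(diffusivity_m2_s, frequency_hz):
    return math.sqrt(diffusivity_m2_s / (math.pi * frequency_hz))

D = 1e-7  # m^2/s, illustrative polymer thermal diffusivity
for f in (1e2, 1e4, 1e6):
    mu_um = diffusion_length_m(D, f) * 1e6
    print(f"{f:>9.0f} Hz -> {mu_um:7.3f} um")
```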
The two particular features of PTMS that have determined its applications so far are 1) spectroscopic mapping may be performed at a spatial resolution well below the diffraction limit of IR radiation, ultimately at a scale of 20-30 nm. In principle, this opens the way to sub-wavelength IR microscopy (see scanning probe microscopy ) where the image contrast is to be determined by the thermal response of individual sample regions to particular spectral wavelengths and 2) in general, no special preparation technique is required when solid samples are to be studied. For most standard FTIR methods, this is not the case.
This spectroscopic technique complements another recently developed method of chemical characterisation or fingerprinting, namely micro-thermal analysis (micro-TA). [ 4 ] [ 5 ] This also uses an “active” SThM probe, which acts as a heater as well as a thermometer, so as to inject evanescent temperature waves into a sample and to allow sub-surface imaging of polymers and other materials. The sub-surface detail detected corresponds to variations in heat capacity or thermal conductivity . Ramping the temperature of the probe, and thus the temperature of the small sample region in contact with it, allows localized thermal analysis and/or thermomechanometry to be performed. | https://en.wikipedia.org/wiki/Photothermal_microspectroscopy |
Photothermal optical microscopy / "photothermal single particle microscopy" is a technique that is based on detection of non- fluorescent labels. It relies on absorption properties of labels ( gold nanoparticles , semiconductor nanocrystals , etc.), and can be realized on a conventional microscope using a resonant modulated heating beam, non-resonant probe beam and lock-in detection of photothermal signals from a single nanoparticle . It is the extension of the macroscopic photothermal spectroscopy to the nanoscopic domain. The high sensitivity and selectivity of photothermal microscopy allows even the detection of single molecules by their absorption. Similar to Fluorescence Correlation Spectroscopy (FCS), the photothermal signal may be recorded with respect to time to study the diffusion and advection characteristics of absorbing nanoparticles in a solution. This technique is called photothermal correlation spectroscopy (PhoCS).
In this detection scheme a conventional scanning-sample or laser-scanning transmission microscope is employed. Both the heating and the probing laser beam are coaxially aligned and superimposed using a dichroic mirror . Both beams are focused onto a sample, typically via a high-NA illumination microscope objective, and recollected using a detection microscope objective. The thereby collimated transmitted beam is then imaged onto a photodiode after filtering out the heating beam. The photothermal signal is then the change Δ P d {\displaystyle \Delta P_{d}} in the transmitted probe beam power P d {\displaystyle P_{d}} due to the heating laser. To increase the signal-to-noise ratio a lock-in technique may be used. To this end, the heating laser beam is modulated at a high frequency of the order of MHz and the detected probe beam power is then demodulated at the same frequency. For quantitative measurements, the photothermal signal may be normalized to the background detected power P d , 0 {\displaystyle P_{d,0}} (which is typically much larger than the change Δ P d {\displaystyle \Delta P_{d}} ), thereby defining the relative photothermal signal Φ {\displaystyle \Phi }
Φ = ΔP d / P d,0 {\displaystyle \Phi ={\frac {\Delta P_{d}}{P_{d,0}}}={\frac {P_{d}({\text{heating beam on}})-P_{d}({\text{heating beam off}})}{P_{d}({\text{background, no particle}})}}}
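A minimal numerical sketch of this lock-in recovery of Φ is given below. All quantities (background power, modulation frequency, modulation depth, noise level, sampling rate) are assumed illustrative values, not parameters from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 50e6          # sampling rate (Hz), assumed
f_mod = 1e6        # heating-beam modulation frequency, "of the order of MHz"
t = np.arange(200_000) / fs  # exactly 4000 modulation periods

P_bg = 1.0e-3      # detected background probe power P_d,0 (W), assumed
phi_true = 1e-4    # relative photothermal signal to recover, assumed

# Detected power: background + small modulated photothermal change + noise.
P_d = P_bg * (1.0 + phi_true * np.sin(2 * np.pi * f_mod * t))
P_d += 5e-7 * rng.standard_normal(t.size)

# Lock-in demodulation: multiply by the reference, low-pass by averaging.
X = 2 * np.mean(P_d * np.sin(2 * np.pi * f_mod * t))  # in-phase amplitude
Y = 2 * np.mean(P_d * np.cos(2 * np.pi * f_mod * t))  # quadrature amplitude
delta_P = np.hypot(X, Y)  # recovered modulated power change Delta P_d

print(f"recovered Phi = {delta_P / P_bg:.2e} (true {phi_true:.1e})")
```

Averaging over many modulation periods suppresses the broadband noise, which is why a relative change of 10⁻⁴ on a much larger background remains measurable.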
The physical basis for the photothermal signal in the transmission detection scheme is the lensing action of the refractive index profile that is created when the nanoparticle absorbs the heating laser power. The signal is homodyne in the sense that a steady-state difference signal accounts for the mechanism: the self-interference of the forward-scattered field with the transmitted beam corresponds to an energy redistribution, as expected for a simple lens. The lens is a gradient refractive index (GRIN) particle determined by the 1/r refractive index profile established by the point-source temperature profile around the nanoparticle. For a nanoparticle of radius R {\displaystyle R} embedded in a homogeneous medium of refractive index n 0 {\displaystyle n_{0}} with a thermorefractive coefficient d n / d T {\displaystyle \mathrm {d} n/\mathrm {d} T} , the refractive index profile reads:
n ( r ) = n 0 + (dn/dT) ΔT ( r ) = n 0 + Δn R / r {\displaystyle n\left(\mathbf {r} \right)=n_{0}+{\frac {\mathrm {d} n}{\mathrm {d} T}}\Delta T\left(\mathbf {r} \right)=n_{0}+\Delta n{\frac {R}{r}}}
in which the contrast of the thermal lens is determined by the nanoparticle absorption cross-section σ a b s {\displaystyle \sigma _{\rm {abs}}} at the heating beam wavelength, the heating beam intensity I h {\displaystyle I_{h}} at the position of the particle, and the embedding medium's thermal conductivity κ {\displaystyle \kappa } , via Δ n = ( d n / d T ) σ a b s I h / 4 π κ R {\displaystyle \Delta n=\left(\mathrm {d} n/\mathrm {d} T\right)\sigma _{\rm {abs}}I_{h}/4\pi \kappa R} .
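Plugging representative numbers into this expression shows the typical magnitude of the lens contrast. All input values below are assumed for illustration (a gold-like particle in water); none are taken from the article.

```python
import math

# Illustrative, order-of-magnitude inputs -- assumed values only.
R = 30e-9            # nanoparticle radius (m)
sigma_abs = 1e-14    # absorption cross-section at heating wavelength (m^2)
kappa = 0.6          # thermal conductivity of water (W m^-1 K^-1)
dn_dT = -1e-4        # thermorefractive coefficient of water (K^-1)

# Heating intensity: 100 uW focused to a ~0.3 um radius spot (assumed).
P_heat = 100e-6
I_h = P_heat / (math.pi * (0.3e-6) ** 2)

dT_surface = sigma_abs * I_h / (4 * math.pi * kappa * R)  # Delta T at r = R
dn = dn_dT * dT_surface                                   # lens contrast Delta n

print(f"I_h     = {I_h:.2e} W/m^2")
print(f"dT(r=R) = {dT_surface:.1f} K")
print(f"Delta n = {dn:.2e}")
print(f"index change at r = 10 R (decays as R/r): {dn * 0.1:.2e}")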
Although the signal can be well explained in a scattering framework, the most intuitive description is an analogy to the Coulomb scattering of wave packets in particle physics.
In this detection scheme, too, a conventional scanning-sample or laser-scanning microscope is employed. The heating and probing laser beams are coaxially aligned and superimposed using a dichroic mirror , and both are focused onto the sample, typically via a high-NA illumination microscope objective. Alternatively, the probe beam may be laterally displaced with respect to the heating beam. The retroreflected probe-beam power is then imaged onto a photodiode, and its change as induced by the heating beam provides the photothermal signal.
The detection is heterodyne in the sense that the probe-beam field scattered by the thermal lens interferes in the backwards direction with a well-defined retroreflected part of the incident probing beam. | https://en.wikipedia.org/wiki/Photothermal_optical_microscopy |
The photothermal ratio (PTR), also named photothermal quotient , is a variable that characterizes the amount of light available to plants relative to the temperature level. It is used in plant biology to characterize the growth environment of plants. [ 1 ]
Both light and temperature are important environmental variables that determine the growth and development of plants. Light is especially important in driving photosynthesis and producing sugars. Temperature is a strong driver of cell division, where available sugars are converted to produce new leaf, stem, root or reproductive biomass. As such, both are important factors – along with nutrient and water availability – in determining the source:sink balance of a plant, the amount of sugar available to the plant in relation to its growth potential. The photothermal ratio is a quantitative descriptor that can be used to approximate this balance. [ citation needed ]
The photothermal ratio is calculated by dividing the Daily Light Integral ( photosynthetic photon flux density integrated over a day; DLI) to which plants are exposed by the mean daily temperature above a baseline temperature (T b ): PTR = DLI / (T − T b ). Units are therefore mol quanta m −2 day −1 °C −1 . Alternatively, the number of degree days has been used rather than daily temperature per se, with units of the form mol degree-day −1 . [ 2 ] The PTR concept has been introduced in detailed studies of growth and productivity of a particular species. For these species, a baseline temperature T b is chosen below which no leaf elongation is known to take place, which for many temperate species will be a temperature around 5 °C. [ citation needed ]
In characterizing the growth environment of a broad range of plants without reference to any specific species, T b has been taken to be zero °C. [ 3 ]
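A minimal sketch of this calculation, reading the denominator as degree-days per day; the example day's numbers are assumed, chosen to reproduce the lowland-tropics value quoted in the next paragraph.

```python
def photothermal_ratio(dli_mol_m2_day, mean_daily_temp_c, t_base_c=0.0):
    """PTR = DLI / degree-days per day, i.e. DLI / (T_mean - T_base).

    Units: mol m^-2 day^-1 degC^-1. Raises if the day accumulates no
    thermal time (mean temperature at or below the baseline).
    """
    degree_days = mean_daily_temp_c - t_base_c
    if degree_days <= 0:
        raise ValueError("no thermal time accumulated on this day")
    return dli_mol_m2_day / degree_days

# Illustrative lowland-tropics day: DLI ~ 35 mol m^-2 day^-1, mean 27 degC.
print(photothermal_ratio(35, 27))               # ~1.3 with T_b = 0 degC
# Same day evaluated against a temperate 5 degC baseline instead:
print(photothermal_ratio(35, 27, t_base_c=5))   # ~1.6
```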
The photothermal ratio is relatively constant over the year in the tropics, with lowland values around 1.3 mol m −2 day −1 °C −1 . At higher latitudes PTR changes with the seasons, being high in spring and low in autumn. Averaged over the growing season, PTR values are around 3 in boreal zones and around 2 in temperate zones. Plants growing in glasshouses often grow at a PTR of ~1, while experiments with Arabidopsis are often carried out at a PTR around 0.2. [ 3 ]
Many effects that have been ascribed to light are actually dependent on temperature as well. For example, strong stem elongation at low light will only take place when temperatures are high, but not when temperatures are close to 0 °C. In wheat , PTR in the month before anthesis strongly determines the number of kernels. [ 4 ] In horticulture, plants grown at a high PTR generally have thicker stems, shorter internodes and more flowers, and therefore have higher marketable yield. [ 2 ] | https://en.wikipedia.org/wiki/Photothermal_ratio |
Photothermal spectroscopy is a group of high sensitivity spectroscopy techniques used to measure optical absorption and thermal characteristics of a sample. The basis of photothermal spectroscopy is the change in thermal state of the sample resulting from the absorption of radiation. Light absorbed and not lost by emission results in heating. The heat raises the temperature, thereby influencing the thermodynamic properties of the sample or of a suitable material adjacent to it. Measurement of the temperature, pressure, or density changes that occur due to optical absorption is ultimately the basis for photothermal spectroscopic measurements.
As with photoacoustic spectroscopy , photothermal spectroscopy is an indirect method for measuring optical absorption , because it is not based on a direct measure of the light involved in the absorption. In another sense, however, photothermal (and photoacoustic) methods measure the absorption directly, rather than calculating it from the transmission as the more usual (transmission) spectroscopic techniques do. This fact gives the technique its high sensitivity: in transmission techniques the absorbance is calculated as the difference between the total light impinging on the sample and the transmitted (plus reflected , plus scattered ) light, with the usual accuracy problems of small differences between large numbers when the absorption is small. In photothermal spectroscopies, instead, the signal is essentially proportional to the absorption, and is zero when there is zero true absorption, even in the presence of reflection or scattering.
There are several methods and techniques used in photothermal spectroscopy. Each of these has a name indicating the specific physical effect measured.
Photothermal deflection spectroscopy is a kind of spectroscopy that measures the change in refractive index due to the heating of a medium by light. It works via a sort of " mirage effect" [ 1 ] where a refractive index gradient exists adjacent to the test sample surface. A probe laser beam is refracted or bent in a manner proportional to the temperature gradient of the transparent medium near the surface. From this deflection, a measure of the absorbed excitation radiation can be determined. The technique is useful when studying optically thin samples, because it can sensitively detect whether absorption is occurring. It is of value in situations where "pass-through" or transmission spectroscopy cannot be used. [ citation needed ]
There are two main forms of PDS: collinear and transverse. Collinear PDS was introduced in a 1980 paper by A.C. Boccara, D. Fournier, et al. [ 2 ] In the collinear form, the two beams pass through and intersect in a medium: the pump beam heats the material and the probe beam is deflected. This technique only works for transparent media. In the transverse form, the pump beam comes in normal to the surface and heats it, while the probe beam passes parallel to the surface. In a variation on this, the probe beam may reflect off the surface and measure buckling due to heating. Transverse PDS can be done in nitrogen, but better performance is gained in a liquid cell: usually an inert, non-absorbing material such as a perfluorocarbon is used. [ citation needed ]
In both collinear and transverse PDS, the surface is heated using a periodically modulated light source, such as an optical beam passing through a mechanical chopper or regulated with a function generator. A lock-in amplifier is then used to measure deflections at the modulation frequency. Another scheme uses a pulsed laser as the excitation source; in that case, a boxcar averager can be used to measure the temporal deflection of the probe beam in response to the excitation radiation. The signal falls off exponentially as a function of frequency, so frequencies around 1-10 hertz are commonly used. A full theoretical analysis of the PDS system was published by Jackson, Amer, et al. in 1981. [ 3 ] The same paper also discussed the use of PDS as a form of microscopy, called "Photothermal Deflection Microscopy", which can yield information about impurities and the surface topology of materials. [ 3 ]
PDS analysis of thin films can also be performed using a patterned substrate that supports optical resonances, such as guided-mode resonances and whispering-gallery modes. The probe beam is coupled into a resonant mode, and the coupling efficiency is highly sensitive to the incidence angle. Due to the photoheating effect, the coupling efficiency changes, and this change is characterized to indicate the thin-film absorption. [ 4 ] | https://en.wikipedia.org/wiki/Photothermal_spectroscopy |
Photothermal therapy (PTT) refers to efforts to use electromagnetic radiation (most often in infrared wavelengths) for the treatment of various medical conditions, including cancer . This therapy is an extension of photodynamic therapy , in which a photosensitizer is excited with light of a specific band. This activation brings the sensitizer to an excited state where it then releases vibrational energy ( heat ), which is what kills the targeted cells.
Unlike photodynamic therapy, photothermal therapy does not require oxygen to interact with the target cells or tissues. Current studies also show that photothermal therapy is able to use longer wavelength light, which is less energetic and therefore less harmful to other cells and tissues.
Most materials of interest currently being investigated for photothermal therapy are on the nanoscale . One of the key reasons behind this is the enhanced permeability and retention effect observed with particles in a certain size range (typically 20 - 300 nm). [ 1 ] Particles in this range have been observed to preferentially accumulate in tumor tissue. When a tumor forms, it requires new blood vessels in order to fuel its growth; these new blood vessels in/near tumors have different properties as compared to regular blood vessels, such as poor lymphatic drainage and a disorganized, leaky vasculature. These factors lead to a significantly higher concentration of certain particles in a tumor as compared to the rest of the body. [ citation needed ]
Huang et al. investigated the feasibility of using gold nanorods for both cancer cell imaging and photothermal therapy. [ 2 ] The authors conjugated antibodies (anti-EGFR monoclonal antibodies) to the surface of gold nanorods, allowing the gold nanorods to bind specifically to certain malignant cancer cells (HSC and HOC malignant cells). After incubating the cells with the gold nanorods, an 800 nm Ti:sapphire laser was used to irradiate the cells at varying powers. The authors reported successful destruction of the malignant cancer cells, while nonmalignant cells were unharmed. [ citation needed ]
When AuNRs are exposed to NIR light, the oscillating electromagnetic field of the light causes the free electrons of the AuNR to oscillate collectively and coherently. [ 3 ] Changing the size and shape of AuNRs changes the wavelength that is absorbed. A desired wavelength would be between 700 and 1000 nm because biological tissue is optically transparent at these wavelengths. [ 4 ] While the properties of all AuNPs are sensitive to changes in their shape and size, the properties of Au nanorods are extremely sensitive to any change in their length and width, i.e., their aspect ratio. When light is shone on a metal NP, the NP forms a dipole oscillation along the direction of the electric field. The frequency at which this oscillation reaches its maximum is called the surface plasmon resonance (SPR). [ 3 ] AuNRs have two SPR spectrum bands: one in the NIR region, caused by the longitudinal oscillation, which tends to be stronger and at a longer wavelength, and one in the visible region, caused by the transverse electronic oscillation, which tends to be weaker and at a shorter wavelength. [ 5 ] The SPR characteristics account for the increase in light absorption by the particle. [ 3 ] As the AuNR aspect ratio increases, the absorption wavelength is redshifted [ 5 ] and the light scattering efficiency is increased. [ 3 ] The electrons excited by the NIR light lose energy quickly after absorption via electron-electron collisions, and as these electrons relax back down, the energy is released as phonons that heat the environment of the AuNP, which in cancer treatments would be the cancerous cells. This process is observed when a continuous-wave laser illuminates the AuNP; pulsed laser beams generally result in melting or ablation of the particle. [ 3 ] Continuous-wave lasers take minutes rather than the single pulse time of a pulsed laser, but they are able to heat larger areas at once. [ 3 ]
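The strong dependence of the longitudinal band on aspect ratio can be illustrated with an empirical linear fit often quoted in the plasmonics literature for gold nanorods in water (roughly λ max ≈ 95·AR + 420 nm). This fit is an external approximation valid only for moderate aspect ratios; it is not a relation stated in this article.

```python
def aunr_longitudinal_lspr_nm(aspect_ratio):
    """Approximate longitudinal LSPR peak of a gold nanorod in water.

    Empirical linear fit (~lambda = 95 * AR + 420 nm) often quoted in
    the plasmonics literature; an approximation for moderate AR only.
    """
    return 95.0 * aspect_ratio + 420.0

for ar in (2.0, 3.0, 4.0, 5.0):
    lam = aunr_longitudinal_lspr_nm(ar)
    in_window = 700.0 <= lam <= 1000.0  # "biological window" from the text
    print(f"AR = {ar:.1f} -> ~{lam:.0f} nm (in 700-1000 nm window: {in_window})")
```

With these numbers, aspect ratios of roughly 3-5 place the longitudinal band inside the 700-1000 nm window mentioned above.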
Gold nanoshells , silica nanoparticles coated with a thin layer of gold, [ 6 ] have been conjugated to antibodies (anti-HER2 or anti-IgG) via PEG linkers. After incubation of SKBr3 cancer cells with the gold nanoshells, an 820 nm laser was used to irradiate the cells. Only the cells incubated with the gold nanoshells conjugated with the specific antibody (anti-HER2) were damaged by the laser. Another category of gold nanoshells is gold layers on liposomes as soft templates. In this case, a drug can also be encapsulated inside and/or in the bilayer, and the release can be triggered by laser light. [ 7 ]
The failure of clinical translation of nanoparticle-mediated PTT is mainly ascribed to concerns about their persistence in the body. [ 8 ] Indeed, the optical response of anisotropic nanomaterials can be tuned into the NIR region by increasing their size up to 150 nm. [ 9 ] On the other hand, body excretion of non-biodegradable noble-metal nanomaterials above 10 nm occurs through the hepatobiliary route in a slow and inefficient manner. [ 10 ] A common approach to avoid metal persistence is to reduce the nanoparticle size below the threshold for renal clearance, i.e. to ultrasmall nanoparticles (USNPs), while the maximum light-to-heat transduction is achieved for nanoparticles < 5 nm. [ 11 ] On the other hand, the surface plasmon of excretable gold USNPs is in the UV/visible region (far from the first biological window), severely limiting their potential application in PTT.
Excretion of metals has been combined with NIR-triggered PTT by employing ultrasmall-in-nano architectures composed of metal USNPs embedded in biodegradable silica nanocapsules. [ 12 ] These tNAs are the first reported NIR-absorbing plasmonic ultrasmall-in-nano platforms that jointly combine: i) photothermal conversion efficacy suitable for hyperthermia, ii) multiple photothermal sequences, and iii) renal excretion of the building blocks after the therapeutic action. [ 12 ] [ 13 ] [ 14 ] The therapeutic effect of tNAs has now been assessed on valuable 3D models of human pancreatic adenocarcinoma. [ 12 ]
Graphene is viable for photothermal therapy. [ 15 ] An 808 nm laser at a power density of 2 W/cm 2 was used to irradiate the tumor sites on mice for 5 minutes. As noted by the authors, the power densities of lasers used to heat gold nanorods range from 2 to 4 W/cm 2 . Thus, these nanoscale graphene sheets require a laser power on the lower end of the range used with gold nanoparticles to photothermally ablate tumors. [ citation needed ]
In 2012, Yang et al. incorporated the promising results regarding nanoscale reduced graphene oxide reported by Robinson et al. into another in vivo mouse study. [ 16 ] [ 17 ] The therapeutic treatment used in this study involved the use of nanoscale reduced graphene oxide sheets, nearly identical to the ones used by Robinson et al. (but without any active targeting sequences attached). Nanoscale reduced graphene oxide sheets were successfully irradiated in order to completely destroy the targeted tumors. Most notably, the required power density of the 808 nm laser was reduced to 0.15 W/cm 2 , an order of magnitude lower than previously required power densities. This study demonstrates the higher efficacy of nanoscale reduced graphene oxide sheets as compared to both nanoscale graphene sheets and gold nanorods. [ citation needed ]
PTT utilizes photothermal transduction agents (PTAs) which can transform light energy to heat through the photothermal effect to raise the temperature of the tumor area and thus cause the ablation of tumor cells. [ 18 ] [ 19 ] Specifically, ideal PTAs should have high photothermal conversion efficiency (PCE), excellent optical stability and biocompatibility , and strong light absorption in the near-infrared (NIR) region (650-1350 nm) due to the deep-tissue penetration and minimal absorption of NIR light in biological tissues. [ 18 ] [ 19 ] PTAs mainly include inorganic materials and organic materials. [ 19 ] Inorganic PTAs, such as noble metal materials, carbon-based nanomaterials, and other 2D materials , have high PCE and excellent photostability , but they are not biodegradable and thus have potential long-term toxicity in vivo. [ 19 ] [ 20 ] Organic PTAs, including small molecule dyes and conjugated polymers (CPs), have good biocompatibility and biodegradability, but poor photostability. [ 19 ] Among them, small molecule dyes, such as cyanine , porphyrin , and phthalocyanine , are limited in the field of cancer treatment because of their susceptibility to photobleaching and poor tumor enrichment ability. [ 19 ] Conjugated polymers with a large π−π conjugated skeleton and a highly electron-delocalized structure show potential for PTT due to their strong NIR absorption, excellent photostability , low cytotoxicity , outstanding PCE, good dispersibility in aqueous media, increased accumulation at the tumor site, and long blood circulation time. [ 18 ] [ 19 ] [ 20 ] [ 21 ] Moreover, conjugated polymers can be easily combined with other imaging agents and drugs to construct multifunctional nanomaterials for selective and synergistic cancer therapy. [ 18 ]
The CPs used for tumor PTT mainly include polyaniline (PANI), polypyrrole (PPy), polythiophene (PTh), polydopamine (PDA), donor−acceptor (D-A) conjugated polymers, and poly(3,4-ethylenedioxythiophene):poly(4-styrenesulfonate) ( PEDOT:PSS ). [ 18 ] [ 19 ]
The nonradiative process for heat generation of organic PTAs is different from that of inorganic PTAs such as metals and semiconductors, which is related to surface plasmon resonance . [ 22 ] As shown in the figure, conjugated polymers are first activated to the excited state (S1) under light irradiation, and the excited state (S1) then decays back to the ground state (S0) via three processes: (I) emitting a photon ( fluorescence ), (II) intersystem crossing , and (III) nonradiative relaxation (heat generation). [ 22 ] Because these three decay pathways from S1 to S0 are usually competitive in photosensitive materials, light emission and intersystem crossing must be efficiently reduced in order to increase the heat generation and improve the photothermal conversion efficiency. [ 18 ] [ 22 ] For conjugated polymers, on the one hand, their unique structures lead to close stacking of the molecular sensitizers with highly frequent intermolecular collisions, which can efficiently quench the fluorescence and intersystem crossing and thus enhance the yield of nonradiative relaxation. [ 22 ] On the other hand, compared with monomeric phototherapeutic molecules, conjugated polymers possess higher stability in vivo against disassembly and photobleaching , longer blood circulation time, and more accumulation at the tumor site due to the enhanced permeability and retention (EPR) effect . [ 22 ] Therefore, conjugated polymers have high photothermal conversion efficiency and a large amount of heat generation. One of the most widely used equations to calculate the photothermal conversion efficiency (η) of organic PTAs is as follows:
η = ( h A ΔT max − Q s ) / ( I ( 1 − 10 −A λ ) ) {\displaystyle \eta ={\frac {hA\Delta T_{\max }-Q_{s}}{I\left(1-10^{-A_{\lambda }}\right)}}}
where h is the heat transfer coefficient, A is the container surface area, ΔT max is the maximum temperature change of the solution, A λ is the light absorbance at the laser wavelength, I is the incident laser power, and Q s is the heat associated with light absorbance by the solvent alone. [ 23 ]
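A worked numerical example of this expression is sketched below. Every "measured" quantity is an assumed illustrative value, and the product hA is treated as a single lumped parameter (as is commonly extracted from the cooling curve of the irradiated solution); none of these numbers come from the article.

```python
def photothermal_conversion_efficiency(hA_w_per_k, dT_max_k, Q_s_w,
                                       laser_power_w, A_lambda):
    """eta = (h*A*dT_max - Q_s) / (I * (1 - 10**(-A_lambda)))."""
    absorbed = laser_power_w * (1.0 - 10.0 ** (-A_lambda))
    return (hA_w_per_k * dT_max_k - Q_s_w) / absorbed

# Assumed, illustrative measurement values:
eta = photothermal_conversion_efficiency(
    hA_w_per_k=1.5e-3,   # lumped heat-transfer parameter h*A (W/K)
    dT_max_k=25.0,       # maximum temperature rise of the solution (K)
    Q_s_w=5e-3,          # heat input from solvent absorption alone (W)
    laser_power_w=0.1,   # incident laser power I (W)
    A_lambda=1.0,        # absorbance at the laser wavelength
)
print(f"eta = {eta:.0%}")  # ~36% for these assumed inputs
```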
Furthermore, various efficient methods, especially the donor-acceptor (D-A) strategy, have been designed to enhance the photothermal conversion efficiency and heat generation of conjugated polymers. [ 18 ] The D-A assembly system in the conjugated polymers contributes to strong intermolecular electron transfer from the donor to the acceptor, thus bringing efficient quenching of fluorescence and intersystem crossing, and improved heat generation. [ 22 ] In addition, the HOMO-LUMO gap of the D−A conjugated polymers can be easily tuned through the selection of electron donor (ED) and electron acceptor (EA) moieties, and thus D−A structured polymers with extremely low band gaps can be developed to improve the NIR absorption and photothermal conversion efficiency of CPs. [ 19 ] [ 21 ]
Polyaniline (PANI) is one of the earliest types of conjugated polymers reported for tumor PTT. [ 19 ] [ 24 ] [ 20 ] [ 21 ] [ 25 ] [ 26 ]
Polypyrrole (PPy) is suited for PTT applications because of its strong NIR absorbance, large PCE, stability, and biocompatibility. [ 21 ] In vivo experiments show that tumors treated with PPy NPs could be effectively eliminated under the irradiation of an 808 nm laser (1 W cm −2 , 5 min). [ 27 ] PPy nanosheets exhibit promising photothermal ablation ability toward cancer cells in the NIR II window for deep-tissue PTT. [ 28 ]
PPy nanoparticles and their derivative nanomaterials can also be combined with imaging contrast agents and diverse drugs to construct multifunctional theranostic platforms for imaging-guided PTT and synergistic treatment, including fluorescent imaging, magnetic resonance imaging (MRI), photoacoustic imaging (PA), computed tomography (CT), photodynamic therapy (PDT), chemotherapy, etc. [ 19 ] For example, PPy has been used to encapsulate ultrasmall iron oxide nanoparticles (IONPs) to develop IONP@PPy NPs for in vivo MR and PA imaging-guided PTT. [ 29 ] Polypyrrole (I-PPy) nanocomposites have been investigated for CT imaging-guided tumor PTT. [ 30 ]
Polythiophene (PTh) and its derivatives-based polymers are also one kind of conjugated polymers for PTT. Polythiophene-based polymers usually exhibit excellent photostability , large light-harvesting ability, easy synthesis, and facile functionalization with different substituents. [ 21 ]
A conjugated copolymer (C3) with promising photothermal properties can be prepared by linking 2-N,N′-bis(2-(ethyl)hexyl)-perylene-3,4,9,10-tetra-carboxylic acid bis-imide to a thienylvinylene oligomer. C3 was coprecipitated with PEG-PCL and indocyanine green (ICG) to obtain PEG-PCL-C3-ICG nanoparticles for fluorescence-guided photothermal/photodynamic therapy against oral squamous cell carcinoma (OSCC). [ 31 ] A biodegradable PLGA-PEGylated DPPV (poly{2,2′-[(2,5-bis(2-hexyldecyl)-3,6-dioxo-2,3,5,6-tetrahydropyrrolo[3,4-c]-pyrrole-1,4-diyl)-dithiophene]-5,5′-diyl-alt-vinylene}) conjugated polymer has been developed for PA-guided PTT with a PCE of 71% (at 808 nm, 0.3 W cm −2 ). The vinylene bonds in the main chain improve the biodegradability, biocompatibility, and photothermal conversion efficiency of the CPs. [ 32 ]
Dopamine is a neurotransmitter in the body that helps cells send impulses. Polydopamine (PDA) is obtained through the self-aggregation of dopamine to form a melanin -like substance under mild alkaline conditions. [ 33 ] PDA has strong NIR absorption, good photothermal stability, excellent biocompatibility and biodegradability , and high photothermal conversion efficiency. [ 34 ] Furthermore, with its π conjugated structure and different active groups, PDA can be easily combined with various materials to achieve multifunctionality, such as fluorescence imaging , MRI , CT , PA, targeted therapy, etc. [ 19 ] In view of this, PDA and its composite nanomaterials have broad application prospects in the biomedical field. [ citation needed ]
Dopamine-melanin colloidal nanospheres are efficient near-infrared photothermal therapeutic agents for in vivo cancer therapy. [ 23 ] PDA can also be coated onto the surface of other PTAs, such as gold nanorods and carbon-based materials, to enhance their photothermal stability and efficiency in vivo. [ 19 ] For example, PDA-modified spiky gold nanoparticles (SGNP@PDAs) have been investigated for chemo-photothermal therapy. [ 35 ]
Donor−acceptor (D−A) conjugated polymers have been investigated for medicinal purposes. Nano-PCPDTBT CPs have two moieties: 2-ethylhexyl cyclopentadithiophene and 2,1,3-benzothiadiazole. When a PCPDTBT nanoparticle solution (0.115 mg/mL) was exposed to an 808 nm NIR laser (0.6 W/cm 2 ), the temperature could be increased by more than 30 °C. [ 36 ] Wang et al. designed four NIR-absorbing D-A structured conjugated polymer dots (Pdots) containing diketopyrrolo-pyrrole (DPP) and thiophene units as effective photothermal materials, with PCEs of up to 65%, for in vivo cancer therapy. [ 37 ] Zhang et al. constructed PBIBDF-BT D-A CPs by using an isoindigo derivative (BIBDF) and bithiophene (BT) as EA and ED, respectively. PBIBDF-BT was further modified with poly(ethylene glycol)-block-poly(hexyl ethylene phosphate) (mPEG-b-PHEP) to obtain PBIBDF-BT@NP PPE with a PCE of 46.7% and high stability in physiological environments. [ 38 ] Yang's group designed PBTPBF-BT CPs, in which the bis(5-oxothieno[3,2-b]pyrrole-6-ylidene)-benzodifurandione (BTPBF) and 3,3′-didodecyl-2,2′-bithiophene (BT) units act as EA and ED, respectively. These D-A CPs have a maximum absorption peak at 1107 nm and a relatively high photothermal conversion efficiency (66.4%). [ 39 ] Pu et al. synthesized PC70BM-PCPDTBT D-A CPs via nanoprecipitation of the EA (6,6)-phenyl-C71-butyric acid methyl ester (PC70BM) and the ED PCPDTBT (SPs) for PA-guided PTT. [ 40 ] Wang et al. developed D-A CPs TBDOPV-DT containing thiophene-fused benzodifurandione-based oligo(p-phenylenevinylene) (TBDOPV) as the EA unit and 2,2′-bithiophene (DT) as the ED unit. TBDOPV-DT CPs have a strong absorption at 1093 nm and achieve highly efficient NIR-II photothermal conversion. [ 41 ]
Poly(3,4-ethylenedioxythiophene):poly(4-styrenesulfonate) (PEDOT:PSS) is often used in organic electronics and has strong NIR absorption. In 2012, Liu's group first reported PEGylated PEDOT:PSS polymeric nanoparticles (PEDOT:PSS-PEG) for near-infrared photothermal therapy of cancer. PEDOT:PSS-PEG nanoparticles have high stability in vivo and a long blood circulation half-life of 21.4 ± 3.1 h. The PTT in animals showed no appreciable side effects at the tested dose and an excellent therapeutic efficacy under 808 nm laser irradiation. [ 42 ] Kang et al. synthesized magneto-conjugated polymer core−shell MNP@PEDOT:PSS nanoparticles for multimodal imaging-guided PTT. [ 43 ] Furthermore, PEDOT:PSS NPs can serve not only as PTAs but also as drug carriers to load various types of drugs, such as SN38, the chemotherapy drug DOX, and the photodynamic agent chlorin e6 (Ce6), thus achieving synergistic cancer therapy. [ 44 ] | https://en.wikipedia.org/wiki/Photothermal_therapy |
Photothermal time ( PTT ) is the product of growing degree-days (GDD) and day length (hours) for each day: PTT = GDD × DL. [ 1 ] It can be used to quantify the environment , [ 2 ] as well as the timing of developmental stages of plants . [ 3 ]
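A minimal sketch of the calculation, assuming the simple averaging method for GDD and an arbitrary base temperature of 10 °C; both choices are illustrative assumptions, not prescriptions from this article.

```python
def growing_degree_days(t_max_c, t_min_c, t_base_c=10.0):
    """Daily GDD from the simple averaging method; base temp assumed."""
    return max(0.0, (t_max_c + t_min_c) / 2.0 - t_base_c)

def photothermal_time(t_max_c, t_min_c, day_length_h, t_base_c=10.0):
    """PTT = GDD x day length, in degree-day-hours for one day."""
    return growing_degree_days(t_max_c, t_min_c, t_base_c) * day_length_h

# Illustrative day: 28/16 degC with 14 h of daylight.
print(photothermal_time(28.0, 16.0, 14.0))  # (22 - 10) * 14 = 168.0
```

Summing this daily product over a growing period gives the accumulated photothermal time used to compare environments or predict developmental stages.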
| https://en.wikipedia.org/wiki/Photothermal_time |
Phototoxicity , also called photoirritation , is a chemically induced skin irritation, requiring light, that does not involve the immune system . [ 1 ] It is a type of photosensitivity . [ 1 ] [ 2 ]
The skin response resembles an exaggerated sunburn . The involved chemical may enter into the skin by topical administration , or it may reach the skin via systemic circulation following ingestion or parenteral administration. The chemical needs to be "photoactive," which means that when it absorbs light, the absorbed energy produces molecular changes that cause toxicity. Many synthetic compounds, including drug substances like tetracyclines or fluoroquinolones , are known to cause these effects. Surface contact with some such chemicals causes photodermatitis , and many plants cause phytophotodermatitis . Light-induced toxicity is a common phenomenon in humans ; however, it also occurs in other animals.
A phototoxic substance is a chemical compound which becomes toxic when exposed to light.
Phototoxicity is a quantum chemical phenomenon. Phototoxins are molecules with a conjugated system , often an aromatic system . They have a low-lying excited state that can be reached by excitation with visible light photons. This state can undergo intersystem crossing and transfer energy to neighboring molecules in tissue, converting them to toxic free radicals . These rapidly attack nearby molecules, killing cells. A typical reactive species is singlet oxygen , produced from ordinary triplet oxygen . Because these species are highly reactive, the damage is limited to the illuminated body part.
3T3 Neutral Red Phototoxicity Test – an in vitro toxicological assessment test used to determine the cytotoxicity and photo(cyto)toxicity of a test article on murine fibroblasts in the presence or absence of UVA light .
"The 3T3 Neutral Red Uptake Phototoxicity Assay (3T3 NRU PT) can be utilized to identify the phototoxic effect of a test substance induced by the combination of test substance and light. The test compares the cytotoxic effect of a test substance when tested after the exposure, then tested in the absence of exposure to a non-cytotoxic dose of UVA/vis light. Cytotoxicity is expressed as a concentration-dependent reduction of the uptake of the vital dye - Neutral Red .
Substances that are phototoxic in vivo after systemic application and distribution to the skin, as well as compounds that could act as phototoxicants after topical application to the skin can be identified by the test. The reliability and relevance of the 3T3 NRU PT have been evaluated, and the test has been shown to be predictive when compared with acute phototoxicity effects in vivo in animals and humans." Taken with permission from [1]
Several health authorities have issued related guidance documents, which need to be considered for drug development .
When performing microscopy on live samples, one needs to be aware that too high a light dose can damage or kill the specimens and lead to experimental artefacts. This is particularly important in confocal and super-resolution microscopy. [ 7 ] [ 8 ] | https://en.wikipedia.org/wiki/Phototoxicity |
Phototrophs (from Ancient Greek φῶς , φωτός ( phôs, phōtós ) ' light ' and τροφή ( trophḗ ) ' nourishment ' ) are organisms that carry out photon capture to produce complex organic compounds (e.g. carbohydrates ) and acquire energy. They use the energy from light to carry out various cellular metabolic processes. It is a common misconception that phototrophs are obligatorily photosynthetic . Many, but not all, phototrophs photosynthesize: they anabolically convert carbon dioxide into organic material to be utilized structurally, functionally, or as a source for later catabolic processes (e.g. in the form of starches, sugars and fats). All phototrophs either use electron transport chains or direct proton pumping to establish an electrochemical gradient which is utilized by ATP synthase to provide the molecular energy currency for the cell. Phototrophs can be either autotrophs or heterotrophs . If their electron and hydrogen donors are inorganic compounds (e.g., Na 2 S 2 O 3 , as in some purple sulfur bacteria , or H 2 S , as in some green sulfur bacteria ), they can also be called lithotrophs , and so some photoautotrophs are also called photolithoautotrophs. Examples of phototroph organisms are Rhodobacter capsulatus , Chromatium , and Chlorobium .
Originally used with a different meaning, the term took its current definition after Lwoff and collaborators (1946). [ 1 ] [ 2 ]
Most of the well-recognized phototrophs are autotrophic , also known as photoautotrophs , and can fix carbon . They can be contrasted with chemotrophs that obtain their energy by the oxidation of electron donors in their environments. Photoautotrophs are capable of synthesizing their own food from inorganic substances using light as an energy source. Green plants and photosynthetic bacteria are photoautotrophs. Photoautotrophic organisms are sometimes referred to as holophytic . [ 3 ]
Oxygenic photosynthetic organisms use chlorophyll for light-energy capture and oxidize water, "splitting" it into molecular oxygen.
In an ecological context, phototrophs are often the food source for neighboring heterotrophic life. In terrestrial environments, plants are the predominant variety, while aquatic environments include a range of phototrophic organisms such as algae (e.g., kelp ), other protists (such as euglena ), phytoplankton , and bacteria (such as cyanobacteria ).
Cyanobacteria, which are prokaryotic organisms that carry out oxygenic photosynthesis, occupy many environmental conditions, including fresh water, seas, soil , and lichen . Cyanobacteria carry out plant-like photosynthesis because the organelle in plants that carries out photosynthesis is derived from an endosymbiotic cyanobacterium. [ 4 ] [ 5 ] This bacterium can use water as a source of electrons in order to perform CO 2 reduction reactions.
A photolithoautotroph is an autotrophic organism that uses light energy, and an inorganic electron donor (e.g., H 2 O, H 2 , H 2 S), and CO 2 as its carbon source.
In contrast to photoautotrophs, photoheterotrophs are organisms that depend solely on light for their energy and principally on organic compounds for their carbon. Photoheterotrophs produce ATP through photophosphorylation but use environmentally obtained organic compounds to build structures and other bio-molecules. [ 6 ]
Most phototrophs use chlorophyll or the related bacteriochlorophyll to capture light and are known as chlorophototrophs . Others, however, use retinal and are retinalophototrophs . [ 7 ] | https://en.wikipedia.org/wiki/Phototroph |
Phototrophic biofilms are microbial communities generally comprising both phototrophic microorganisms, which use light as their energy source, and chemoheterotrophs. [ 1 ] Thick laminated multilayered phototrophic biofilms are usually referred to as microbial mats or phototrophic mats (see also biofilm ). [ 2 ] These organisms, which can be prokaryotic or eukaryotic organisms like bacteria , cyanobacteria , fungi , and microalgae , make up diverse microbial communities that are affixed in a mucous matrix, or film. These biofilms occur on contact surfaces in a range of terrestrial and aquatic environments. The formation of biofilms is a complex process and is dependent upon the availability of light as well as the relationships between the microorganisms. Biofilms serve a variety of roles in aquatic, terrestrial, and extreme environments; these roles include functions which are both beneficial and detrimental to the environment. In addition to these natural roles, phototrophic biofilms have also been adapted for applications such as crop production and protection, bioremediation , and wastewater treatment . [ 1 ] [ 2 ]
Biofilm formation is a complicated process which occurs in four general steps: attachment of cells, formation of the colony, maturation, and cell dispersal. These films can grow in sizes ranging from microns to centimeters in thickness. Most are green and/or brown, but can be more colorful. [ 1 ]
Biofilm development is dependent on the generation of extracellular polymeric substances (EPS) by microorganisms. The EPS, which is akin to a gel, is a matrix which provides structure for the biofilm and is essential for growth and functionality. It consists of organic compounds such as polysaccharides, proteins, and glycolipids and may also include inorganic substances like silt and silica. EPS join cells together in the biofilm and transmits light to organisms in the lower zone. Additionally, EPS serves as an adhesive for surface attachment and facilitates digestion of nutrients by extracellular enzymes. [ 1 ]
Microbial functions and interactions are also important for maintaining the well-being of the community. In general, phototrophic organisms in the biofilm provide a foundation for the growth of the community as a whole by mediating biofilm processes and conversions. The chemoheterotrophs use the photosynthetic waste products from the phototrophs as their carbon and nitrogen sources, and in turn perform nutrient regeneration for the community. [ 1 ] [ 2 ] Various groups of organisms are located in distinct layers based on availability of light, the presence of oxygen, and redox gradients produced by the species. [ 2 ] Light exposure early in biofilm development has an immense impact on growth and microbial diversity; greater light availability promotes more growth. Phototrophs such as cyanobacteria and green algae occupy the exposed layer of the biofilm while lower layers consist of anaerobic phototrophs and heterotrophs like bacteria, protozoa, and fungi. [ 1 ] Eukaryotic algae and cyanobacteria in the outer portion use light energy to reduce carbon dioxide , providing organic substrates and oxygen . This photosynthetic activity fuels processes and conversions in the total biofilm community, including the heterotrophic fraction. It also produces an oxygen gradient in the mat which inhibits most anaerobic phototrophs and chemotrophs from growing in the upper regions. [ 2 ]
Communication between the microorganisms is facilitated by quorum sensing or signal transduction pathways, which are accomplished through the secretion of molecules which diffuse through the biofilm. The identity of these substances varies depending on the type of microorganism from which it was secreted. [ 1 ]
While some of the organisms contributing to the formation of the biofilms can be identified, exact composition of the biofilms is difficult to determine because many of the organisms cannot be grown using pure culture methods. Though pure culture methods cannot be used to identify unculturable microorganisms and do not support the study of the complex interactions between photoautotrophs and heterotrophs, the use of metagenomics , proteomics , and transcriptomics has helped characterize these unculturable organisms and has provided some insight into molecular mechanisms, microbial organization, and interactions in biofilms. [ 1 ]
Phototrophic biofilms can be found on terrestrial and aquatic surfaces and can withstand environmental fluctuations and extreme environments. In aquatic systems, biofilms are prevalent on surfaces of rocks and plants, and in terrestrial environments they can be located in the soil, on rocks, and on buildings. [ 1 ] Phototrophic biofilms and microbial mats have been described in extreme environments like thermal springs, [ 3 ] hyper saline ponds, [ 4 ] desert soil crusts, and in lake ice covers in Antarctica. The 3.4-billion-year fossil record of benthic phototrophic communities, such as microbial mats and stromatolites , indicates that these associations represent the Earth's oldest known ecosystems. It is thought that these early ecosystems played a key role in the build-up of oxygen in the Earth's atmosphere . [ 5 ]
A diverse array of roles is played by these microorganisms across the range of environments in which they can be found. In aquatic environments, these microbes are primary producers, a critical part of the food chain. They perform a key function in exchanging a substantial amount of nutrients and gases between the atmospheric and oceanic reservoirs. Biofilms in terrestrial systems can contribute to improving soil, reducing erosion, promoting growth of vegetation, and revitalizing desert-like land, but they can also accelerate the degradation of solid structures like buildings and monuments. [ 1 ]
There is a growing interest in the application of phototrophic biofilms, for instance in wastewater treatment in constructed wetlands , bioremediation , agriculture , and biohydrogen production. [ 2 ] A few are outlined below.
Agrochemicals such as pesticides , fertilizers , and food hormones are widely used to produce greater quality and quantity of food as well as provide crop protection. However, biofertilizers have been developed as a more environmentally cognizant method of assisting in plant development and protection by promoting the growth of microorganisms such as cyanobacteria. Cyanobacteria can augment plant growth by colonizing on plant roots to supply carbon and nitrogen, which they can provide to plants through the natural metabolic processes of carbon dioxide and nitrogen fixation . They can also produce substances which induce plant defense against harmful fungi, bacteria, and viruses. Other organisms can also produce secondary metabolites such as phytohormones which increase plants' resistance to pests and disease. [ 1 ] Promoting growth of phototrophic biofilms in agricultural settings improves the quality of the soil and water retention, reduces salinity, and protects against erosion . [ 2 ]
Organisms in mats such as cyanobacteria, sulfate reducers, and aerobic heterotrophs can aid in the bioremediation of water systems through the biodegradation of oils. [ 2 ] This is achieved by freeing oxygen, organic compounds, and nitrogen from hydrocarbon pollutants. Biofilm growth can also degrade other pollutants by oxidizing oils, pesticides, and herbicides and reducing heavy metals like copper, lead, and zinc. Aerobic processes to degrade pollutants can be carried out during the day, and anaerobic processes are performed at night by biofilms. [ 1 ] Additionally, because the response of biofilms to pollutants during initial exposure is indicative of acute toxicity, biofilms can be used as sensors for pollution. [ 2 ]
Biofilms are used in wastewater treatment facilities and constructed wetlands for processes such as cleaning pesticide and fertilizer-laden water because it is simple to form flocs, or aggregates, using biofilms as compared to other floc materials. [ 1 ] [ 2 ] There are also many other benefits to using phototrophic biofilms in treating wastewater, particularly in nutrient removal. The organisms can sequester nutrients from the wastewater and use these along with carbon dioxide to build biomass. The biomass can capture nitrogen, which can be extracted and used in fertilizer production. [ 2 ] Due to their quick growth, phototrophic biofilms have greater nutrient uptake than other methods of nutrient removal utilizing algal biomass, and they are easier to harvest because they naturally grow on wastewater pond surfaces. [ 6 ]
Phototrophic activity of these films can precipitate dissolved phosphates due to an increase in pH; these phosphates are then removed by assimilation. Increase in pH of the wastewater also minimizes the presence of coliform bacteria. [ 2 ]
Heavy metal detoxification in wastewater treatment can also be achieved with these microbes primarily through passive mechanisms such as ion exchange , chelation , adsorption , and diffusion , which constitute biosorption . The active mode is known as bioaccumulation . Biosorption-mediated metal detoxification is influenced by factors including light intensity, pH, density of the biofilm, and organism tolerance of heavy metals. Though biosorption is an efficient process and inexpensive, methods to retrieve heavy metals from the biomass after biosorption still need further development. [ 2 ]
Using phototrophic biofilms for wastewater treatment is more energy efficient and economical and has the capability of producing byproducts which can be further processed into biofuels. [ 1 ] Specifically cyanobacteria are capable of producing biohydrogen, which is an alternative to fossil fuels and may become a viable source of renewable energy. [ 2 ] | https://en.wikipedia.org/wiki/Phototrophic_biofilm |
Phototropins are blue light photoreceptor proteins (more specifically, flavoproteins ) that mediate phototropism responses across many species of algae, [ 1 ] fungi and higher plants . [ 2 ] Phototropins can be found throughout the leaves of a plant. Along with cryptochromes and phytochromes they allow plants to respond and alter their growth in response to the light environment. When phototropins are hit with blue light, they induce a signal transduction pathway that alters the plant cells' functions in different ways.
Phototropins are part of the phototropic sensory system in plants that causes various environmental responses in plants. Phototropins specifically will cause stems to bend towards light [ 3 ] and stomata to open. [ 4 ] In addition phototropins mediate the first changes in stem elongation in blue light prior to cryptochrome activation. [ 5 ] Phototropins are also required for blue light mediated transcript destabilization of specific mRNAs in the cell. [ 6 ]
Phototropins also regulate the movement of chloroplasts within the cell, [ 7 ] [ 8 ] notably chloroplast avoidance. It was thought that this avoidance serves a protective function to avoid damage from intense light, [ 9 ] however an alternate study argues that the avoidance response is primarily to increase light penetration into deeper mesophyll layers in high light conditions. [ 10 ] Phototropins may also be important for the opening of stomata . [ 11 ]
Phototropins have two distinct light–oxygen–voltage (LOV) domains (LOV1, LOV2), each of which binds flavin mononucleotide (FMN). [ 13 ] The FMN is noncovalently bound to a LOV domain in the dark, but becomes covalently linked upon exposure to suitable light. [ 13 ] The formation of the bond is reversible once light is no longer present. [ 13 ] The forward, light-driven reaction is not dependent on temperature, though low temperatures give increased stability to the covalent linkage, leading to a slower reversal reaction. [ 13 ]
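The light-driven adduct formation and its thermal reversal can be caricatured as a two-state, first-order kinetic model. The rate constants below are arbitrary illustrative values, not measured phototropin kinetics; lowering k_rev mimics the slower reversal at low temperature noted above.

```python
import math

def adduct_fraction(t_s, light_on, k_fwd=1.0, k_rev=0.02, f0=0.0):
    """Fraction of LOV domains in the covalent FMN-adduct state.

    Two-state first-order model: light drives adduct formation at k_fwd
    (only while illuminated); thermal dark reversion proceeds at k_rev.
    Rate constants (s^-1) are arbitrary illustrative values.
    """
    if light_on:
        f_ss = k_fwd / (k_fwd + k_rev)  # steady state under illumination
        return f_ss + (f0 - f_ss) * math.exp(-(k_fwd + k_rev) * t_s)
    return f0 * math.exp(-k_rev * t_s)  # pure dark reversion

f_light = adduct_fraction(10.0, light_on=True)               # ~0.98
f_dark = adduct_fraction(60.0, light_on=False, f0=f_light)   # ~0.30
print(f"after 10 s light: {f_light:.2f}, after 60 s dark: {f_dark:.2f}")
```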
Light excitation will lead to a conformational change within the protein, which allows for kinase activity. [ 14 ] There is also evidence to suggest that phototropins undergo autophosphorylation at various sites across the enzyme. [ 13 ] Phototropins trigger signaling responses within the cell, but it is unknown which proteins are phosphorylated by phototropins, or exactly how the autophosphorylation events play a role in signaling. [ 13 ]
Phototropins are typically found on the plasma membrane , but some phototropins have been found in substantial quantities on chloroplast membranes. [ 15 ] One study found that phototropins on the plasma membrane play a role in phototropism, leaf flattening, stomatal opening, and chloroplast movements, while phototropins on the chloroplasts only partially affected stomatal opening and chloroplast movement, [ 16 ] suggesting that the location of the protein in the cell may also play a role in its signaling function.
| https://en.wikipedia.org/wiki/Phototropin |
In biology , phototropism is the growth of an organism in response to a light stimulus . Phototropism is most often observed in plants , but can also occur in other organisms such as fungi . The cells of the plant that are farthest from the light contain a hormone called auxin that reacts when phototropism occurs, causing the plant to have elongated cells on the side furthest from the light. Phototropism is one of the many plant tropisms , or movements, which respond to external stimuli. Growth towards a light source is called positive phototropism , while growth away from light is called negative phototropism . Negative phototropism is not to be confused with skototropism, which is defined as growth towards darkness, whereas negative phototropism can refer to either growth away from a light source or towards darkness. [ 1 ] Most plant shoots exhibit positive phototropism, and rearrange their chloroplasts in the leaves to maximize photosynthetic energy and promote growth. [ 2 ] [ 3 ] Some vine shoot tips exhibit negative phototropism, which allows them to grow towards dark, solid objects and climb them. The combination of phototropism and gravitropism allows plants to grow in the correct direction. [ 4 ]
There are several signaling molecules that help the plant determine where the light source is coming from, and these activate several genes, which change the hormone gradients allowing the plant to grow towards the light. The very tip of the plant is known as the coleoptile , which is necessary in light sensing. [ 2 ] The middle portion of the coleoptile is the area where the shoot curvature occurs. The Cholodny–Went hypothesis , developed in the early 20th century, predicts that in the presence of asymmetric light, auxin will move towards the shaded side and promote elongation of the cells on that side to cause the plant to curve towards the light source. [ 5 ] Auxins activate proton pumps, decreasing the pH in the cells on the dark side of the plant. This acidification of the cell wall region activates enzymes known as expansins which disrupt hydrogen bonds in the cell wall structure, making the cell walls less rigid. In addition, increased proton pump activity leads to more solutes entering the plant cells on the dark side of the plant, which increases the osmotic gradient between the symplast and apoplast of these plant cells. [ 6 ] Water then enters the cells along its osmotic gradient, leading to an increase in turgor pressure. The decrease in cell wall strength and increased turgor pressure above a yield threshold [ 7 ] causes cells to swell, exerting the mechanical pressure that drives phototropic movement.
Proteins encoded by a second group of genes, PIN genes, have been found to play a major role in phototropism. They are auxin transporters, and it is thought that they are responsible for the polarization of auxin location. Specifically PIN3 has been identified as the primary auxin carrier. [ 8 ] It is possible that phototropins receive light and inhibit the activity of PINOID kinase (PID), which then promotes the activity of PIN3 . This activation of PIN3 leads to asymmetric distribution of auxin, which then leads to asymmetric elongation of cells in the stem. pin3 mutants had shorter hypocotyls and roots than the wild-type, and the same phenotype was seen in plants grown with auxin efflux inhibitors. [ 9 ] Using anti-PIN3 immunogold labeling, movement of the PIN3 protein was observed. PIN3 is normally localized to the surface of hypocotyl and stem, but is also internalized in the presence of Brefeldin A (BFA), an exocytosis inhibitor. This mechanism allows PIN3 to be repositioned in response to an environmental stimulus. PIN3 and PIN7 proteins were thought to play a role in pulse-induced phototropism. The curvature responses in the "pin3" mutant were reduced significantly, but only slightly reduced in "pin7" mutants. There is some redundancy among "PIN1", "PIN3", and "PIN7", but it is thought that PIN3 plays a greater role in pulse-induced phototropism. [ 10 ]
Phototropins are highly expressed in the upper region of coleoptiles. There are two main phototropins, phot1 and phot2. phot2 single mutants have phototropic responses like those of the wild type, but phot1 phot2 double mutants do not show any phototropic responses. [ 4 ] The amounts of PHOT1 and PHOT2 present differ depending on the age of the plant and the intensity of the light. There is a high amount of PHOT2 present in mature Arabidopsis leaves, and this was also seen in rice orthologs. The expression of PHOT1 and PHOT2 changes depending on the presence of blue or red light: PHOT1 mRNA is downregulated in the presence of light, while the PHOT2 transcript is upregulated. The levels of mRNA and protein present in the plant depend on the age of the plant, suggesting that phototropin expression levels change with the maturation of the leaves. [ 11 ] Mature leaves contain chloroplasts that are essential for photosynthesis. Chloroplast rearrangement occurs in different light environments to maximize photosynthesis. There are several genes involved in plant phototropism, including the NPH1 and NPL1 genes. Both are involved in chloroplast rearrangement. [ 3 ] The nph1 and npl1 double mutants were found to have reduced phototropic responses; in fact, the two genes are redundant in determining the curvature of the stem.
Recent studies reveal that multiple AGC kinases other than PHOT1 and PHOT2 are involved in plant phototropism. First, PINOID, which exhibits a light-inducible expression pattern, determines the subcellular relocation of PIN3 during phototropic responses via direct phosphorylation. Second, D6PK and its D6PKL homologs modulate the auxin transport activity of PIN3, likely through phosphorylation as well. Third, upstream of D6PK/D6PKLs, PDK1.1 and PDK1.2 act as essential activators for these AGC kinases. Interestingly, different AGC kinases might participate in different steps during the progression of a phototropic response; D6PK/D6PKLs exhibit an ability to phosphorylate more phosphosites than PINOID.
In 2012, Sakai and Haga [ 12 ] outlined how different auxin concentrations could arise on the shaded and lighted sides of the stem, giving rise to the phototropic response. Five models of stem phototropism have been proposed, using Arabidopsis thaliana as the study plant.
In the first model incoming light deactivates auxin on the light side of the plant allowing the shaded part to continue growing and eventually bend the plant over towards the light. [ 12 ]
In the second model light inhibits auxin biosynthesis on the light side of the plant, thus decreasing the concentration of auxin relative to the unaffected side. [ 12 ]
In the third model there is a horizontal flow of auxin from both the light and dark side of the plant. Incoming light causes more auxin to flow from the exposed side to the shaded side, increasing the concentration of auxin on the shaded side and thus more growth occurring. [ 12 ]
In the fourth model, light inhibits basipetal auxin flow down the exposed side, causing the auxin to flow down only the shaded side. [ 12 ]
Model five encompasses elements of both models 3 and 4. The main auxin flow in this model comes from the top of the plant vertically down towards its base, with some of the auxin travelling horizontally from the main auxin flow to both sides of the plant. Receiving light inhibits the horizontal auxin flow from the main vertical flow to the irradiated, exposed side. According to the study by Sakai and Haga, the observed asymmetric auxin distribution and subsequent phototropic response in hypocotyls seems most consistent with this fifth scenario. [ 12 ]
Phototropism in plants such as Arabidopsis thaliana is directed by blue light receptors called phototropins . [ 13 ] Other photosensitive receptors in plants include phytochromes that sense red light [ 14 ] and cryptochromes that sense blue light. [ 15 ] Different organs of the plant may exhibit different phototropic reactions to different wavelengths of light. Stem tips exhibit positive phototropic reactions to blue light, while root tips exhibit negative phototropic reactions to blue light. Both root tips and most stem tips exhibit positive phototropism to red light. [ citation needed ] Cryptochromes are photoreceptors that absorb blue/ UV-A light, and they help control the circadian rhythm in plants and timing of flowering. Phytochromes are photoreceptors that sense red/far-red light, but they also absorb blue light; they can control flowering in adult plants and the germination of seeds, among other things. The combination of responses from phytochromes and cryptochromes allow the plant to respond to various kinds of light. [ 16 ] Together phytochromes and cryptochromes inhibit gravitropism in hypocotyls and contribute to phototropism. [ 2 ] | https://en.wikipedia.org/wiki/Phototropism |
The photovoltaic effect is the generation of voltage and electric current in a material upon exposure to light . It is a physical phenomenon. [ 1 ]
The photovoltaic effect is closely related to the photoelectric effect . For both phenomena, light is absorbed, causing excitation of an electron or other charge carrier to a higher-energy state. The main distinction is that the term photoelectric effect is now usually used when the electron is ejected out of the material (usually into a vacuum), while photovoltaic effect is used when the excited charge carrier is still contained within the material. In either case, an electric potential (or voltage) is produced by the separation of charges, and the light must have sufficient energy to overcome the potential barrier for excitation. The physical essence of the difference is usually that photoelectric emission separates the charges by ballistic conduction while photovoltaic emission separates them by diffusion, but some "hot carrier" photovoltaic device concepts blur this distinction.
The first demonstration of the photovoltaic effect, by Edmond Becquerel in 1839, used an electrochemical cell. He explained his discovery in Comptes rendus de l'Académie des sciences , "the production of an electric current when two plates of platinum or gold immersed in an acid, neutral, or alkaline solution are exposed in an uneven way to solar radiation." [ 2 ]
The first solar cell, consisting of a layer of selenium covered with a thin film of gold, was built by Charles Fritts in 1884, but it had very low efficiency. [ 3 ] However, the most familiar form of the photovoltaic effect uses solid-state devices, mainly photodiodes . When sunlight or other sufficiently energetic light is incident upon the photodiode, the electrons present in the valence band absorb energy and, being excited, jump to the conduction band and become free. These excited electrons diffuse, and some reach the rectifying junction (usually a diode p–n junction ), where they are accelerated into the n-type semiconductor material by the built-in potential ( Galvani potential ). This generates an electromotive force and an electric current, and thus some of the light energy is converted into electric energy. The photovoltaic effect can also occur when two photons are absorbed simultaneously in a process called the two-photon photovoltaic effect .
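A widely used textbook approximation for the current–voltage behaviour of such an illuminated junction is the single-diode model, in which the photogenerated current is opposed by the diode's forward current. The following is a minimal sketch of that model; the photocurrent, saturation current and ideality factor are hypothetical placeholders, not values from this article:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant (J/K)
Q = 1.602176634e-19  # elementary charge (C)

def cell_current(v, i_ph=5.0, i_0=1e-10, n=1.2, t=298.15):
    """Net terminal current (A) of an illuminated junction at voltage v (V):
    photocurrent minus the diode's forward current."""
    v_t = K_B * t / Q  # thermal voltage, ~25.7 mV at 25 degrees C
    return i_ph - i_0 * (math.exp(v / (n * v_t)) - 1.0)

def open_circuit_voltage(i_ph=5.0, i_0=1e-10, n=1.2, t=298.15):
    """Voltage (V) at which the diode current exactly cancels the photocurrent."""
    v_t = K_B * t / Q
    return n * v_t * math.log(i_ph / i_0 + 1.0)

print(cell_current(0.0))       # short-circuit current equals the photocurrent: 5.0 A
print(open_circuit_voltage())  # ~0.76 V for these made-up parameters
```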
In addition to the direct photovoltaic excitation of free electrons, an electric current can also arise through the Seebeck effect . When a conductive or semiconductive material is heated by absorption of electromagnetic radiation, the heating can lead to increased temperature gradients in the semiconductor material or differentials between materials. These thermal differences may in turn generate a voltage, because the electron energy levels are shifted differently in different areas, creating a potential difference between those areas which in turn creates an electric current. The relative contributions of the photovoltaic effect versus the Seebeck effect depend on many characteristics of the constituent materials. [ citation needed ]
All of the above effects generate direct current. The first demonstration of the alternating current photovoltaic effect (AC PV) was made by Dr. Haiyang Zou and Prof. Zhong Lin Wang at the Georgia Institute of Technology in 2017. The AC PV effect is the generation of alternating current (AC) in nonequilibrium states when light periodically shines at the junction or interface of a material. [ 5 ] The AC PV effect is based on a capacitive model in which the current strongly depends on the frequency of the chopper. The AC PV effect is suggested to be a result of the relative shift and realignment between the quasi-Fermi levels of the semiconductors adjacent to the junction/interface under nonequilibrium conditions. The electrons flow in the external circuit back and forth to balance the potential difference between the two electrodes. Organic solar cells, whose materials have no initial carrier concentration, do not exhibit the AC PV effect.
The performance of a photovoltaic module depends on the environmental conditions, mainly on the global incident irradiance G on the module plane. However, the temperature T of the p–n junction also influences the main electrical parameters: the short-circuit current ISC, the open-circuit voltage VOC, and the maximum power Pmax. The first studies of the behavior of PV cells under varying conditions of G and T date back several decades. In general, it is known that VOC shows a significant inverse correlation with T, whereas for ISC that correlation is direct but weaker, so that this increment does not compensate for the decrease of VOC. As a consequence, Pmax decreases as T increases. This correlation between the output power of a solar cell and its junction working temperature depends on the semiconductor material, and it is due to the influence of T on the concentration, lifetime, and mobility of the intrinsic carriers, that is, electrons and holes, inside the PV cell.
The temperature sensitivity is usually described by some temperature coefficients, each one expressing the derivative of the parameter it refers to with respect to the junction temperature. The values of these parameters can be found in any PV module data sheet; they are the following:
– β Coefficient of variation of VOC with respect to T, given by ∂VOC/∂T.
– α Coefficient of variation of ISC with respect to T, given by ∂ISC/∂T.
– δ Coefficient of variation of Pmax with respect to T, given by ∂Pmax/∂T.
Techniques for estimating these coefficients from experimental data can be found in the literature. [ 6 ] Few studies analyse the variation of the series resistance with respect to the cell or module temperature. This dependency is studied by suitably processing the current–voltage curve. The temperature coefficient of the series resistance is estimated by using the single diode model or the double diode one. [ 7 ]
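As an illustration of how these data-sheet coefficients are used, the STC ratings can be corrected linearly to an arbitrary junction temperature. This is only a first-order sketch, and the coefficient values below are hypothetical placeholders of a typical order of magnitude rather than figures from any particular module:

```python
def params_at_temperature(t_junction,
                          v_oc_stc=40.0,    # open-circuit voltage at 25 C (V)
                          i_sc_stc=10.0,    # short-circuit current at 25 C (A)
                          p_max_stc=320.0,  # maximum power at 25 C (W)
                          beta=-0.12,       # dVOC/dT in V/C (negative: VOC falls with T)
                          alpha=0.005,      # dISC/dT in A/C (small and positive)
                          delta=-1.3):      # dPmax/dT in W/C (negative overall)
    """First-order estimate of module parameters at junction temperature t_junction (C)."""
    dt = t_junction - 25.0
    return (v_oc_stc + beta * dt,
            i_sc_stc + alpha * dt,
            p_max_stc + delta * dt)

# A module running at 60 C loses power relative to its 25 C rating:
v_oc, i_sc, p_max = params_at_temperature(60.0)
print(f"VOC = {v_oc:.1f} V, ISC = {i_sc:.2f} A, Pmax = {p_max:.0f} W")
```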
In most photovoltaic applications, the radiation source is sunlight, and the devices are called solar cells . In the case of a semiconductor p–n (diode) junction solar cell, illuminating the material creates an electric current because excited electrons and the remaining holes are swept in different directions by the built-in electric field of the depletion region. [ 8 ]
The AC PV effect operates under non-equilibrium conditions. The first study was based on a p-Si/TiO 2 nanofilm . It was found that, in addition to the DC output generated by the conventional PV effect based on a p–n junction, an AC current is also produced when flashing light illuminates the interface. The AC PV effect does not follow Ohm's law, being based on a capacitive model in which the current strongly depends on the frequency of the chopper, while the voltage is independent of the frequency. The peak AC current at high switching frequency can be much higher than the DC current. The magnitude of the output is also associated with the light absorption of the materials. | https://en.wikipedia.org/wiki/Photovoltaic_effect |
Photovoltaics ( PV ) is the conversion of light into electricity using semiconducting materials that exhibit the photovoltaic effect , a phenomenon studied in physics , photochemistry , and electrochemistry . The photovoltaic effect is commercially used for electricity generation and as photosensors .
A photovoltaic system employs solar modules , each comprising a number of solar cells , which generate electrical power. PV installations may be ground-mounted, rooftop-mounted, wall-mounted or floating. The mount may be fixed or use a solar tracker to follow the sun across the sky.
Photovoltaic technology helps to mitigate climate change because it emits much less carbon dioxide than fossil fuels . Solar PV has specific advantages as an energy source: once installed, its operation does not generate any pollution or any greenhouse gas emissions ; it is scalable with respect to power needs, and silicon has large availability in the Earth's crust, although other materials required in PV system manufacture, such as silver, may constrain further growth in the technology. Other major constraints identified include competition for land use. [ 1 ] The use of PV as a main source requires energy storage systems or global distribution by high-voltage direct current power lines, causing additional costs, and it also has a number of other specific disadvantages, such as variable power generation, which have to be balanced. Production and installation do cause some pollution and greenhouse gas emissions , though only a fraction of the emissions caused by fossil fuels . [ 2 ]
Photovoltaic systems have long been used in specialized applications as stand-alone installations, and grid-connected PV systems have been in use since the 1990s. [ 3 ] Photovoltaic modules were first mass-produced in 2000, when the German government funded a hundred-thousand-roof program. [ 4 ] Decreasing costs have allowed PV to grow as an energy source. This has been partially driven by massive Chinese government investment in developing solar production capacity since 2000 and the resulting economies of scale . Improvements in manufacturing technology and efficiency have also led to decreasing costs. [ 5 ] [ 6 ] Net metering and financial incentives, such as preferential feed-in tariffs for solar-generated electricity, have supported solar PV installations in many countries. [ 7 ] Panel prices dropped by a factor of 4 between 2004 and 2011. Module prices dropped by about 90% over the 2010s.
In 2022, worldwide installed PV capacity increased to more than 1 terawatt (TW), covering nearly two percent of global electricity demand . [ 8 ] After hydro and wind power , PV is the third renewable energy source in terms of global capacity. In 2022, the International Energy Agency expected growth of over 1 TW from 2022 to 2027. [ 9 ] In some instances, PV has offered the cheapest source of electrical power in regions with a high solar potential, with a bid for pricing as low as 0.015 US$/ kWh in Qatar in 2023. [ 10 ] In 2023, the International Energy Agency stated in its World Energy Outlook that '[f]or projects with low cost financing that tap high quality resources, solar PV is now the cheapest source of electricity in history'. [ 11 ]
The term "photovoltaic" comes from the Greek φῶς ( phōs ) meaning "light", and from "volt", the unit of electromotive force, the volt , which in turn comes from the last name of the Italian physicist Alessandro Volta , inventor of the battery ( electrochemical cell ). The term "photovoltaic" has been in use in English since 1849. [ 12 ]
In 1989, the German Research Ministry initiated the first ever program to finance PV roofs (2,200 roofs); the program was led by Walter Sandtner in Bonn, Germany. [ 13 ]
In 1994, Japan followed in their footsteps and conducted a similar program with 539 residential PV systems installed. [ 14 ] Since then, many countries have continued to produce and finance PV systems at an exponential pace.
Photovoltaics are best known as a method for generating electric power by using solar cells to convert energy from the sun into a flow of electrons by the photovoltaic effect . [ 15 ] [ 16 ]
Solar cells produce direct current electricity from sunlight which can be used to power equipment or to recharge batteries . The first practical application of photovoltaics was to power orbiting satellites and other spacecraft , but today the majority of photovoltaic modules are used for grid-connected systems for power generation. In this case an inverter is required to convert the DC to AC . There is also a smaller market for stand alone systems for remote dwellings, boats , recreational vehicles , electric cars , roadside emergency telephones, remote sensing , and cathodic protection of pipelines .
Photovoltaic power generation employs solar modules composed of a number of solar cells containing a semiconductor material. [ 17 ] Copper solar cables connect modules (module cable), arrays (array cable), and sub-fields. Because of the growing demand for renewable energy sources, the manufacturing of solar cells and photovoltaic arrays has advanced considerably in recent years. [ 18 ] [ 19 ] [ 20 ]
Cells require protection from the environment and are usually packaged tightly in solar modules.
Photovoltaic module power is measured under standard test conditions (STC) in "W p " ( watts peak ). [ 21 ] The actual power output at a particular place may be less than or greater than this rated value, depending on geographical location, time of day, weather conditions, and other factors. [ 22 ] Solar photovoltaic array capacity factors are typically under 25% when not coupled with storage, which is lower than many other industrial sources of electricity. [ 23 ]
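The capacity factor quoted above compares the energy a plant actually delivers over a period with what it would deliver running continuously at its rated power. A minimal sketch, with made-up numbers:

```python
def capacity_factor(annual_energy_kwh, rated_power_kw, hours_per_year=8760):
    """Fraction of the energy the plant would deliver at constant rated power."""
    return annual_energy_kwh / (rated_power_kw * hours_per_year)

# e.g. a hypothetical 5 kW rooftop array delivering 8,000 kWh in a year:
print(f"{capacity_factor(8000, 5.0):.1%}")  # ~18.3%, consistent with the 'under 25%' figure
```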
Solar-cell efficiency is the portion of energy in the form of sunlight that can be converted via photovoltaics into electricity by the solar cell .
The efficiency of the solar cells used in a photovoltaic system , in combination with latitude and climate, determines the annual energy output of the system. For example, a solar panel with 20% efficiency and an area of 1 m 2 produces 200 kWh/yr at Standard Test Conditions if exposed to the Standard Test Condition solar irradiance value of 1000 W/m 2 for 2.74 hours a day. Usually solar panels are exposed to sunlight for longer than this in a given day, but the solar irradiance is less than 1000 W/m 2 for most of the day. A solar panel can produce more when the Sun is high in Earth's sky and produces less in cloudy conditions, or when the Sun is low in the sky. The Sun is lower in the sky in the winter.
Two location-dependent factors that affect solar PV yield are the dispersion and intensity of solar radiation, which can vary greatly from country to country. [ 24 ] The global regions with high radiation levels throughout the year are the Middle East, northern Chile, Australia, China, and the southwestern USA. [ 24 ] [ 25 ] In a high-yield solar area like central Colorado, which receives annual insolation of 2000 kWh/m 2 /year, [ 26 ] a panel can be expected to produce 400 kWh of energy per year. However, in Michigan, which receives only 1400 kWh/m 2 /year, [ 26 ] annual energy yield drops to 280 kWh for the same panel. At more northerly European latitudes, yields are significantly lower: 175 kWh annual energy yield in southern England under the same conditions. [ 27 ]
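These yield figures follow from a simple proportionality: annual energy is roughly module efficiency times panel area times annual insolation. The sketch below reproduces the quoted numbers for the 20%-efficient, 1 m² panel; the southern England insolation is back-calculated from the quoted 175 kWh and is therefore an assumption:

```python
def annual_yield_kwh(insolation_kwh_per_m2, efficiency=0.20, area_m2=1.0):
    """Idealized annual energy yield, ignoring temperature and system losses."""
    return insolation_kwh_per_m2 * efficiency * area_m2

# Colorado and Michigan insolation values are quoted in the text; the southern
# England value (875 kWh/m2/yr) is inferred from the quoted 175 kWh yield.
for place, insolation in [("central Colorado", 2000),
                          ("Michigan", 1400),
                          ("southern England", 875)]:
    print(f"{place}: {annual_yield_kwh(insolation):.0f} kWh/yr")
# -> 400, 280 and 175 kWh/yr for the same 20%-efficient, 1 m^2 panel
```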
Several factors affect a cell's conversion efficiency, including its reflectance , thermodynamic efficiency , charge carrier separation efficiency, charge carrier collection efficiency and conduction efficiency values. [ 29 ] [ 28 ] Because these parameters can be difficult to measure directly, other parameters are measured instead, including quantum efficiency , open-circuit voltage (V OC ) ratio, and fill factor. Reflectance losses are accounted for by the quantum efficiency value, as they affect external quantum efficiency . Recombination losses are accounted for by the quantum efficiency, V OC ratio, and fill factor values. Resistive losses are predominantly accounted for by the fill factor value, but also contribute to the quantum efficiency and V OC ratio values.
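The fill factor mentioned above relates the maximum power point to the open-circuit voltage and short-circuit current. A small sketch with hypothetical cell measurements:

```python
def fill_factor(v_mpp, i_mpp, v_oc, i_sc):
    """Fill factor: maximum power divided by the VOC * ISC product."""
    return (v_mpp * i_mpp) / (v_oc * i_sc)

# Hypothetical cell: VOC = 0.72 V, ISC = 6.1 A, maximum power point at 0.60 V / 5.8 A
print(f"FF = {fill_factor(0.60, 5.8, 0.72, 6.1):.2f}")  # ~0.79, a typical silicon value
```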
Module performance is generally rated under standard test conditions (STC): irradiance of 1,000 W/m 2 , solar spectrum of AM 1.5 and module temperature at 25 °C. [ 35 ] The actual voltage and current output of the module changes as lighting, temperature and load conditions change, so there is never one specific voltage at which the module operates. Performance varies depending on geographic location, time of day, the day of the year, amount of solar irradiance , direction and tilt of modules, cloud cover, shading, soiling , state of charge, and temperature. Performance of a module or panel can be measured at different time intervals with a DC clamp meter or shunt and logged, graphed, or charted with a chart recorder or data logger.
For optimum performance, a solar panel needs to be made of similar modules oriented in the same direction perpendicular to direct sunlight. Bypass diodes are used to circumvent broken or shaded panels and optimize output. These bypass diodes are usually placed along groups of solar cells to create a continuous flow. [ 36 ]
Electrical characteristics include nominal power (P MAX , measured in W ), open-circuit voltage (V OC ), short-circuit current (I SC , measured in amperes ), maximum power voltage (V MPP ), maximum power current (I MPP ), peak power ( watt-peak , W p ), and module efficiency (%).
Open-circuit voltage or V OC is the maximum voltage the module can produce when not connected to an electrical circuit or system. [ 37 ] V OC can be measured with a voltmeter directly on an illuminated module's terminals or on its disconnected cable.
The peak power rating, W p , is the maximum output under standard test conditions (not the maximum possible output). Typical modules, which could measure approximately 1 by 2 metres (3 ft × 7 ft), will be rated from as low as 75 W to as high as 600 W, depending on their efficiency. At the time of testing, the test modules are binned according to their test results, and a typical manufacturer might rate their modules in 5 W increments, and either rate them at +/- 3%, +/-5%, +3/-0% or +5/-0%. [ 38 ] [ 39 ] [ 40 ]
The performance of a photovoltaic (PV) module depends on the environmental conditions, mainly on the global incident irradiance G in the plane of the module. However, the temperature T of the p–n junction also influences the main electrical parameters: the short-circuit current ISC, the open-circuit voltage VOC and the maximum power Pmax. In general, it is known that VOC shows a significant inverse correlation with T, while for ISC this correlation is direct but weaker, so that this increase does not compensate for the decrease in VOC. As a consequence, Pmax decreases when T increases. This correlation between the power output of a solar cell and the working temperature of its junction depends on the semiconductor material, and is due to the influence of T on the concentration, lifetime, and mobility of the intrinsic carriers, i.e., electrons and holes, inside the photovoltaic cell.
Temperature sensitivity is usually described by temperature coefficients, each of which expresses the derivative of the parameter to which it refers with respect to the junction temperature. The values of these parameters, which can be found in any data sheet of the photovoltaic module, are the coefficients β, α and δ (for VOC, ISC and Pmax, respectively) described above.
Techniques for estimating these coefficients from experimental data can be found in the literature. [ 41 ]
The ability of solar modules to withstand damage by rain, hail , heavy snow load, and cycles of heat and cold varies by manufacturer, although most solar panels on the U.S. market are UL listed, meaning they have gone through testing to withstand hail. [ 42 ]
Potential-induced degradation (also called PID) is a potential-induced performance degradation in crystalline photovoltaic modules, caused by so-called stray currents. [ 43 ] This effect may cause power loss of up to 30%. [ 44 ]
The largest challenge for photovoltaic technology is the purchase price per watt of electricity produced. Advancements in photovoltaic technologies have brought about the process of "doping" the silicon substrate to lower the activation energy thereby making the panel more efficient in converting photons to retrievable electrons. [ 45 ]
Chemicals such as boron (p-type) are introduced into the semiconductor crystal in order to create donor and acceptor energy levels substantially closer to the valence and conduction bands. [ 46 ] In doing so, the addition of boron impurity allows the activation energy to decrease twenty-fold, from 1.12 eV to 0.05 eV. Since the potential difference (E B ) is so low, the boron is able to thermally ionize at room temperature. This provides free energy carriers in the conduction and valence bands, thereby allowing greater conversion of photons to electrons.
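The practical consequence of the lowered activation energy can be illustrated with a simple Boltzmann factor, a deliberate simplification of full carrier statistics: at room temperature, thermal energy (about 0.026 eV) readily ionizes a 0.05 eV level but essentially never bridges the 1.12 eV gap.

```python
import math

def boltzmann_factor(e_ev, t_kelvin=300.0):
    """Relative probability of thermally exciting a carrier across a barrier of e_ev (eV)."""
    k_b_ev = 8.617333e-5  # Boltzmann constant in eV/K
    return math.exp(-e_ev / (k_b_ev * t_kelvin))

print(boltzmann_factor(1.12))  # ~1.5e-19: the full silicon gap is essentially frozen out
print(boltzmann_factor(0.05))  # ~0.14: the boron acceptor level ionizes readily at 300 K
```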
The power output of a photovoltaic (PV) device decreases over time. This decrease is due to its exposure to solar radiation as well as other external conditions. The degradation index, defined as the annual percentage of output power loss, is a key factor in determining the long-term production of a photovoltaic plant. To estimate this degradation, the percentage of decrease associated with each of the electrical parameters must be quantified. The individual degradation of a photovoltaic module can significantly influence the performance of a complete string. Furthermore, not all modules in the same installation lose performance at exactly the same rate. Given a set of modules exposed to long-term outdoor conditions, the individual degradation of the main electrical parameters and the increase in their dispersion must both be considered. As each module tends to degrade differently, the behavior of the modules becomes increasingly different over time, negatively affecting the overall performance of the plant.
Several studies dealing with the power degradation analysis of modules based on different photovoltaic technologies are available in the literature. According to a recent study, [ 47 ] the degradation of crystalline silicon modules is very regular, oscillating between 0.8% and 1.0% per year.
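Compounded over a module's lifetime, these seemingly small annual rates are significant. A sketch of the power remaining after a number of years at a constant annual degradation rate (illustrative only; real degradation need not be constant):

```python
def remaining_power_fraction(years, annual_degradation):
    """Fraction of initial power left after compounding a constant annual loss."""
    return (1.0 - annual_degradation) ** years

for rate in (0.008, 0.010):  # the 0.8%-1.0% per year range quoted for crystalline silicon
    frac = remaining_power_fraction(25, rate)
    print(f"{rate:.1%}/yr -> {frac:.1%} of rated power left after 25 years")
# -> roughly 82% and 78% respectively
```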
On the other hand, if we analyze the performance of thin-film photovoltaic modules, an initial period of strong degradation is observed (which can last several months and up to two years), followed by a later stage in which the degradation stabilizes, becoming comparable to that of crystalline silicon. [ 48 ] Strong seasonal variations are also observed in such thin-film technologies because the influence of the solar spectrum is much greater. For example, for modules of amorphous silicon, micromorphic silicon or cadmium telluride, annual degradation rates of between 3% and 4% are typical in the first years. [ 49 ] However, other technologies, such as CIGS, show much lower degradation rates, even in those early years.
Overall, the manufacturing process for solar photovoltaics is simple in that it does not require many complex or moving parts. Because of the solid-state nature of PV systems, they often have relatively long lifetimes, anywhere from 10 to 30 years. To increase the electrical output of a PV system, the manufacturer must simply add more photovoltaic components, so economies of scale are important for manufacturers, as costs decrease with increasing output. [ 50 ]
While there are many types of PV systems known to be effective, crystalline silicon PV accounted for around 90% of the worldwide production of PV in 2013. Manufacturing silicon PV systems has several steps. First, polysilicon is processed from mined quartz until it is very pure (semiconductor grade). It is melted down, and small amounts of boron , a group III element, are added to make a p-type semiconductor rich in electron holes. Typically using a seed crystal, an ingot is grown from this molten polysilicon. The ingot may also be cast in a mold. Wafers of this semiconductor material are cut from the bulk material with wire saws, and then go through surface etching before being cleaned. Next, the wafers are placed into a phosphorus vapor deposition furnace, which lays down a very thin layer of phosphorus, a group V element, creating an n-type semiconducting surface. To reduce energy losses, an anti-reflective coating is added to the surface, along with electrical contacts. After the cell is finished, cells are connected in an electrical circuit according to the specific application and prepared for shipping and installation. [ 51 ]
Solar photovoltaic power is not entirely "clean energy": production produces greenhouse gas emissions, materials used to build the cells are potentially unsustainable and will run out eventually, [ clarification needed ] [ citation needed ] the technology uses toxic substances which cause pollution, [ citation needed ] and there are no viable technologies for recycling solar waste. [ 52 ] [ obsolete source ] Data required to investigate their impact are sometimes affected by a rather large amount of uncertainty. The values of human labor and water consumption, for example, are not precisely assessed due to the lack of systematic and accurate analyses in the scientific literature. [ 1 ] One difficulty in determining the effects of PV is determining whether wastes are released to the air, water, or soil during the manufacturing phase. [ 53 ] [ obsolete source ] Life-cycle assessments , which look at all the different environmental effects, ranging from global warming potential and pollution to water depletion and others, are unavailable for PV. Instead, studies have tried to estimate the impact and potential impact of various types of PV, but these estimates are usually restricted to simply assessing the energy costs of manufacture and/or transport , because these are new technologies and the total environmental impact of their components and disposal methods is unknown, even for commercially available first generation solar cells , let alone experimental prototypes with no commercial viability. [ 54 ] [ better source needed ]
Thus, estimates of the environmental impact of PV have focused on carbon dioxide equivalents per kWh or energy pay-back time (EPBT). [ citation needed ] The EPBT describes the timespan a PV system needs to operate in order to generate the same amount of energy that was used for its manufacture. [ 55 ] Another study includes transport energy costs in the EPBT. [ 56 ] The EPBT has also been defined completely differently as "the time needed to compensate for the total renewable- and non-renewable primary energy required during the life cycle of a PV system" in another study, which also included installation costs. [ 57 ] This energy amortization, given in years, is also referred to as break-even energy payback time . [ 58 ] The lower the EPBT, the lower the environmental cost of solar power . [ citation needed ] The EPBT depends vastly on the location where the PV system is installed (e.g. the amount of sunlight available and the efficiency of the electrical grid) [ 56 ] and on the type of system, namely the system's components. [ 55 ]
A 2015 review of EPBT estimates of first and second-generation PV suggested that there was greater variation in embedded energy than in efficiency of the cells, implying that it is mainly the embedded energy that needs to be reduced to achieve a greater reduction in EPBT. [ 59 ]
In general, the most important component of solar panels, which accounts for much of the energy use and greenhouse gas emissions, is the refining of the polysilicon. [ 55 ] How large a share of the EPBT this silicon accounts for depends on the type of system. A fully autarkic system requires additional components ('balance of system': the power inverters , storage, etc.) which significantly increase the energy cost of manufacture, but in a simple rooftop system, some 90% of the energy cost is from silicon, with the remainder coming from the inverters and module frame. [ 55 ]
The EPBT relates closely to the concepts of net energy gain (NEG) and energy returned on energy invested (EROI). Both are used in energy economics and refer to the difference between the energy expended to harvest an energy source and the amount of energy gained from that harvest. The NEG and EROI also take the operating lifetime of a PV system into account, and a working life of 25 to 30 years is typically assumed. From these metrics, the energy payback time can be derived by calculation. [ 60 ] [ 61 ]
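Under the definitions above, all three metrics follow from the embodied energy and the annual output. A minimal sketch with hypothetical inputs:

```python
def energy_metrics(embodied_kwh, annual_output_kwh, lifetime_years=25):
    """EPBT, net energy gain and EROI for a PV system, per the definitions above."""
    epbt = embodied_kwh / annual_output_kwh              # years to repay manufacturing energy
    lifetime_output = annual_output_kwh * lifetime_years
    neg = lifetime_output - embodied_kwh                 # net energy gain over the lifetime
    eroi = lifetime_output / embodied_kwh                # energy returned on energy invested
    return epbt, neg, eroi

# Hypothetical system: 6,000 kWh embodied energy, 3,000 kWh/yr output, 25-year life
epbt, neg, eroi = energy_metrics(6000, 3000)
print(f"EPBT = {epbt:.1f} yr, NEG = {neg:.0f} kWh, EROI = {eroi:.1f}")
# -> EPBT = 2.0 yr, NEG = 69000 kWh, EROI = 12.5
```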
PV systems using crystalline silicon, by far the majority of the systems in practical use, have such a high EPBT because silicon is produced by the reduction of high-grade quartz sand in electric furnaces . This coke-fired smelting process occurs at high temperatures of more than 1000 °C and is very energy intensive, using about 11 kilowatt-hours (kWh) per kilogram of silicon produced. [ 62 ] The energy requirements of this process make the energy cost per unit of silicon produced relatively inelastic, which means that the production process itself will not become more efficient in the future.
Nonetheless, the energy payback time has shortened significantly over the last years, as crystalline silicon cells became ever more efficient in converting sunlight, while the thickness of the wafer material was constantly reduced and therefore required less silicon for its manufacture. Within the last ten years, the amount of silicon used for solar cells declined from 16 to 6 grams per watt-peak . In the same period, the thickness of a c-Si wafer was reduced from 300 μm, or microns , to about 160–190 μm. The sawing techniques that slice crystalline silicon ingots into wafers have also improved by reducing the kerf loss and making it easier to recycle the silicon sawdust. [ 63 ] [ 64 ]
Crystalline silicon modules are the most extensively studied PV type in terms of LCA since they are the most commonly used. Mono-crystalline silicon photovoltaic systems (mono-si) have an average efficiency of 14.0%. [ 66 ] The cells tend to follow a structure of front electrode, anti-reflection film, n-layer, p-layer, and back electrode, with the sun hitting the front electrode. EPBT ranges from 1.7 to 2.7 years. [ 67 ] Cradle-to-gate emissions range from 37.3 to 72.2 grams CO 2 -eq/kWh when installed in Southern Europe. [ 68 ]
Techniques to produce multi-crystalline silicon (multi-si) photovoltaic cells are simpler and cheaper than those for mono-si, but tend to make less efficient cells, with an average of 13.2%. [ 66 ] EPBT ranges from 1.5 to 2.6 years. [ 67 ] Cradle-to-gate emissions range from 28.5 to 69 grams CO 2 -eq/kWh when installed in Southern Europe. [ 68 ]
Assuming that the following countries had a high-quality grid infrastructure as in Europe, in 2020 it was calculated it would take 1.28 years in Ottawa , Canada, for a rooftop photovoltaic system to produce the same amount of energy as required to manufacture the silicon in the modules in it (excluding the silver, glass, mounts and other components), 0.97 years in Catania , Italy , and 0.4 years in Jaipur , India. Outside of Europe, where net grid efficiencies are lower, it would take longer. This ' energy payback time ' can be seen as the portion of time during the useful lifetime of the module in which the energy production is polluting. At best, this means that a 30-year old panel has produced clean energy for 97% of its lifetime, or that the silicon in the modules in a solar panel produce 97% less greenhouse gas emissions than a coal-fired plant for the same amount of energy (assuming and ignoring many things). [ 56 ] Some studies have looked beyond EPBT and GWP to other environmental effects. In one such study, conventional energy mix in Greece was compared to multi-si PV and found a 95% overall reduction in effects including carcinogens, eco-toxicity, acidification, eutrophication, and eleven others. [ 69 ]
Cadmium telluride (CdTe) is one of the fastest-growing thin film based solar cells which are collectively known as second-generation devices. This new thin-film device also shares similar performance restrictions ( Shockley-Queisser efficiency limit ) as conventional Si devices but promises to lower the cost of each device by both reducing material and energy consumption during manufacturing. The global market share of CdTe was 4.7% in 2008. [ 53 ] This technology's highest power conversion efficiency is 21%. [ 70 ] The cell structure includes glass substrate (around 2 mm), transparent conductor layer, CdS buffer layer (50–150 nm), CdTe absorber and a metal contact layer.
CdTe PV systems require less energy input in their production than other commercial PV systems per unit of electricity produced. The average emissions are around 18 grams CO 2 -eq/kWh (cradle to gate). CdTe has the shortest EPBT of all commercial PV technologies, varying between 0.3 and 1.2 years. [ 71 ]
Third-generation PVs are designed to combine the advantages of both first and second generation devices, and they are not subject to the Shockley–Queisser limit , a theoretical limit for first and second generation PV cells. The thickness of a third generation device is less than 1 μm. [ 72 ]
Three new promising thin film technologies are copper zinc tin sulfide (Cu 2 ZnSnS 4 or CZTS), [ 54 ] zinc phosphide (Zn 3 P 2 ) [ 54 ] and single-walled carbon nanotubes (SWCNT). [ 73 ] These thin films are currently only produced in the lab but may be commercialized in the future. The manufacturing processes for CZTS and Zn 3 P 2 are expected to be similar to those of the current thin film technologies CIGS and CdTe, respectively, while the absorber layer of SWCNT PV is expected to be synthesized with the CoMoCAT method. [ 74 ] Contrary to established thin films such as CIGS and CdTe, CZTS, Zn 3 P 2 , and SWCNT PVs are made from earth-abundant, nontoxic materials and have the potential to produce more electricity annually than the current worldwide consumption. [ 75 ] [ 76 ] While CZTS and Zn 3 P 2 offer good promise for these reasons, the specific environmental implications of their commercial production are not yet known. The global warming potentials of CZTS and Zn 3 P 2 were found to be 38 and 30 grams CO 2 -eq/kWh, while their corresponding EPBTs were found to be 1.85 and 0.78 years, respectively. [ 54 ] Overall, CdTe and Zn 3 P 2 have similar environmental effects but can slightly outperform CIGS and CZTS. [ 54 ] A study of the environmental impacts of SWCNT PVs by Celik et al., covering an existing 1% efficient device and a theoretical 28% efficient device, found that, compared to monocrystalline Si, the environmental impact of the 1% SWCNT device was ~18 times higher, due mainly to its short lifetime of three years. [ 73 ]
There have been major changes in the underlying costs, industry structure and market prices of solar photovoltaics technology, over the years, and gaining a coherent picture of the shifts occurring across the industry value chain globally is a challenge. This is due to: "the rapidity of cost and price changes, the complexity of the PV supply chain, which involves a large number of manufacturing processes, the balance of system (BOS) and installation costs associated with complete PV systems, the choice of different distribution channels, and differences between regional markets within which PV is being deployed". Further complexities result from the many different policy support initiatives that have been put in place to facilitate photovoltaics commercialisation in various countries. [ 3 ]
Renewable energy technologies have generally gotten cheaper since their invention. [ 78 ] [ 79 ] [ 80 ] Renewable energy systems have become cheaper to build than fossil fuel power plants across much of the world, thanks to advances in wind and solar energy technology, in particular. [ 81 ]
There is no silver bullet in electricity or energy demand and bill management, because customers (sites) have different specific situations, e.g. different comfort/convenience needs, different electricity tariffs, or different usage patterns. Electricity tariff may have a few elements, such as daily access and metering charge, energy charge (based on kWh, MWh) or peak demand charge (e.g. a price for the highest 30min energy consumption in a month). PV is a promising option for reducing energy charges when electricity prices are reasonably high and continuously increasing, such as in Australia and Germany. However, for sites with peak demand charge in place, PV may be less attractive if peak demands mostly occur in the late afternoon to early evening, for example in residential communities. Overall, energy investment is largely an economic decision and it is better to make investment decisions based on systematic evaluation of options in operational improvement, energy efficiency, onsite generation and energy storage. [ 82 ] [ 83 ]
In 1977 crystalline silicon solar cell prices were at $76.67/W. [ 85 ]
Although wholesale module prices remained flat at around $3.50 to $4.00/W in the early 2000s, due to high demand in Germany and Spain afforded by generous subsidies and a shortage of polysilicon, demand crashed with the abrupt ending of Spanish subsidies after the market crash of 2008, and the price dropped rapidly to $2.00/W. Manufacturers were able to maintain a positive operating margin despite a 50% drop in income, thanks to innovation and reductions in costs. In late 2011, factory-gate prices for crystalline-silicon photovoltaic modules suddenly dropped below the $1.00/W mark, taking many in the industry by surprise and causing a number of solar manufacturing companies to go bankrupt throughout the world. The $1.00/W cost is often regarded in the PV industry as marking the achievement of grid parity for PV, but most experts do not believe this price point is sustainable. Technological advancements, manufacturing process improvements, and industry re-structuring may mean that further price reductions are possible. [ 3 ] The average retail price of solar cells, as monitored by the Solarbuzz group, fell from $3.50/watt to $2.43/watt over the course of 2011. [ 86 ] By 2013, wholesale prices had fallen to $0.74/W. [ 85 ] This has been cited as evidence supporting ' Swanson's law ', an observation similar to the famous Moore's Law , which claims that solar cell prices fall 20% for every doubling of industry capacity. [ 85 ] The Fraunhofer Institute defines the 'learning rate' as the drop in prices as cumulative production doubles, some 25% between 1980 and 2010. Although the prices for modules have dropped quickly, inverter prices have dropped at a much lower rate, and in 2019 constituted over 61% of the cost per kWp, up from a quarter in the early 2000s. [ 56 ]
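Swanson's law can be expressed as a learning curve: the price falls by a fixed fraction for every doubling of cumulative capacity. A sketch using the 20% figure quoted above; the starting price and capacity are placeholders, not historical data:

```python
import math

def learning_curve_price(p0, q0, q, learning_rate=0.20):
    """Module price after cumulative capacity grows from q0 to q, with the price
    dropping by learning_rate for every doubling (Swanson's law)."""
    doublings = math.log2(q / q0)
    return p0 * (1.0 - learning_rate) ** doublings

# Starting from a hypothetical $2.00/W at 10 GW of cumulative capacity:
for q_gw in (20, 40, 80):  # each step is one more doubling
    print(f"{q_gw} GW -> ${learning_curve_price(2.00, 10, q_gw):.2f}/W")
# -> $1.60/W, $1.28/W, $1.02/W
```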
Note that the prices mentioned above are for bare modules; another way of looking at module prices is to include installation costs. In the US, according to the Solar Energy Industries Association, the price of installed rooftop PV modules for homeowners fell from $9.00/W in 2006 to $5.46/W in 2011. Including the prices paid by industrial installations, the national installed price drops to $3.45/W. This is markedly higher than elsewhere in the world; in Germany, homeowner rooftop installations averaged $2.24/W. The cost differences are thought to be primarily based on the higher regulatory burden and lack of a national solar policy in the US. [ 87 ]
By the end of 2012 Chinese manufacturers had production costs of $0.50/W in the cheapest modules. [ 88 ] In some markets distributors of these modules can earn a considerable margin, buying at factory-gate price and selling at the highest price the market can support ('value-based pricing'). [ 3 ] In California PV reached grid parity in 2011, which is usually defined as PV production costs at or below retail electricity prices (though often still above the power station prices for coal or gas-fired generation without their distribution and other costs). [ 89 ] Grid parity had been reached in 19 markets in 2014. [ 90 ] [ 91 ]
By 2024, massive increases of production of solar panels in China had caused module prices to drop to as low as $0.11/W, an over 90 percent reduction from 2011 prices. [ 92 ]
The levelised cost of electricity (LCOE) is the cost per kWh based on the costs distributed over the project lifetime, and is thought to be a better metric for calculating viability than price per wattage. LCOEs vary dramatically depending on the location. [ 3 ] The LCOE can be considered the minimum price customers will have to pay the utility company in order for it to break even on the investment in a new power station. [ 5 ] Grid parity is roughly achieved when the LCOE falls to a price similar to conventional local grid prices, although in actuality the calculations are not directly comparable. [ 93 ] Large industrial PV installations had reached grid parity in California in 2011. [ 80 ] [ 93 ] Grid parity for rooftop systems was still believed to be much farther away at this time. [ 93 ] Many LCOE calculations are not thought to be accurate, and a large number of assumptions are required. [ 3 ] [ 93 ] Module prices may drop further, and the LCOE for solar may correspondingly drop in the future. [ 94 ]
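A simplified discounted-cash-flow form of the LCOE divides lifetime discounted costs by lifetime discounted energy. The sketch below uses hypothetical inputs and ignores degradation, taxes and incentives:

```python
def lcoe(capex, annual_opex, annual_energy_kwh, lifetime_years=25, discount_rate=0.05):
    """Levelised cost of electricity: discounted lifetime costs over discounted energy."""
    discounted_costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                                   for t in range(1, lifetime_years + 1))
    discounted_energy = sum(annual_energy_kwh / (1 + discount_rate) ** t
                            for t in range(1, lifetime_years + 1))
    return discounted_costs / discounted_energy

# Hypothetical 5 kW system: $7,500 upfront, $100/yr maintenance, 8,000 kWh/yr output
print(f"${lcoe(7500, 100, 8000):.3f}/kWh")  # ~ $0.079/kWh for these inputs
```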
Because energy demands rise and fall over the course of the day, and solar power is limited by the fact that the sun sets, solar power companies must also factor in the additional costs of supplying a more stable alternative energy supply to the grid in order to stabilize the system, or of storing the energy. These costs are not factored into LCOE calculations, nor are special subsidies or premiums that may make buying solar power more attractive. [ 5 ] The unreliability and temporal variation in the generation of solar and wind power is a major problem. Too much of these volatile power sources can cause instability of the entire grid. [ 95 ]
As of 2017 power-purchase agreement prices for solar farms below $0.05/kWh are common in the United States, and the lowest bids in some Persian Gulf countries were about $0.03/kWh. [ 96 ] The goal of the United States Department of Energy is to achieve a levelised cost of energy for solar PV of $0.03/kWh for utility companies. [ 97 ]
Financial incentives for photovoltaics , such as feed-in tariffs (FITs), were often offered to electricity consumers to install and operate solar-electric generating systems, and in some countries such subsidies are the only way photovoltaics can remain economically profitable. [ 98 ] [ obsolete source ] PV FITs were crucial for early growth of photovoltaics. Germany and Spain were the most important countries offering subsidies for PV, and the policies of these countries drove demand. [ 3 ]
Some US solar cell manufacturing companies have repeatedly complained that the dropping prices of PV modules have been achieved due to subsidies by the government of China and the dumping of these products below fair market prices. US manufacturers generally recommend high tariffs on foreign supplies to allow them to remain profitable. In response to these concerns, the Obama administration began to levy tariffs on US consumers of these products in 2012 to raise prices for domestic manufacturers. [ 3 ] The USA, however, also subsidizes the industry. [ 87 ]
Some environmentalists have promoted the idea that government incentives should be used in order to expand the PV manufacturing industry to reduce costs of PV-generated electricity much more rapidly to a level where it is able to compete with fossil fuels in a free market. [ citation needed ] This is based on the theory that when the manufacturing capacity doubles, economies of scale will cause the prices of the solar products to halve. [ 5 ]
In many countries access to capital is lacking to develop PV projects. [ 99 ] To solve this problem, securitization is sometimes used to accelerate development of solar photovoltaic projects. [ 89 ] [ 100 ] [ 101 ]
Photovoltaic power is also generated during a time of day that is close to (and precedes) peak demand in electricity systems with high use of air conditioning. Since large-scale PV operation requires back-up in the form of spinning reserves, its [ clarification needed ] marginal cost of generation in the middle of the day is typically lowest, but not zero, when PV is generating electricity. This can be seen in Figure 1 of the cited paper. [ 102 ] For residential properties with private PV facilities networked to the grid, the owner may be able to earn extra money when the time of generation is included, as electricity is worth more during the day than at night. [ 103 ]
One journalist theorised in 2012 that if the energy bills of Americans were forced upwards by imposing an extra tax of $50/ton on carbon dioxide emissions from coal-fired power, this could have allowed solar PV to appear more cost-competitive to consumers in most locations. [ 86 ]
Solar photovoltaics formed the largest body of research among the seven sustainable energy types examined in a global bibliometric study, with the annual scientific output growing from 9,094 publications in 2011 to 14,447 publications in 2019. [ 104 ]
Likewise, the application of solar photovoltaics is growing rapidly, and worldwide installed capacity reached one terawatt in April 2022. [ 105 ] The total energy output of the world's PV capacity in a calendar year is now beyond 500 TWh of electricity. This represents 2% of worldwide electricity demand. More than 100 countries , such as Brazil and India , use solar PV. [ 106 ] [ 107 ] China is followed by the United States and Japan , while installations in Germany , once the world's largest producer, have been slowing down.
Honduras generated the highest percentage of its energy from solar in 2019, 14.8%. [ 108 ] As of 2019, Vietnam has the highest installed capacity in Southeast Asia, about 4.5 GW. [ 109 ] The annualized installation rate of about 90 W per capita per annum places Vietnam among world leaders. [ 109 ] Generous Feed-in tariff (FIT) and government supporting policies such as tax exemptions were the key to enable Vietnam's solar PV boom. Underlying drivers include the government's desire to enhance energy self-sufficiency and the public's demand for local environmental quality. [ 109 ]
A key barrier is limited transmission grid capacity. [ 109 ]
China has the world's largest solar power capacity, with 390 GW of installed capacity in 2022 compared with about 200 GW in the European Union, according to International Energy Agency data. [ 110 ] Other countries with the world's largest solar power capacities include the United States, Japan and Germany.
Data: IEA-PVPS Snapshot of Global PV Markets 2023 report, April 2023 [ 111 ] Also see Solar power by country for a complete and continuously updated list
In 2017, it was thought probable that by 2030 global PV installed capacities could be between 3,000 and 10,000 GW. [ 96 ] Greenpeace in 2010 claimed that 1,845 GW of PV systems worldwide could be generating approximately 2,646 TWh/year of electricity by 2030, and by 2050 over 20% of all electricity could be provided by PV. [ 112 ]
There are many practical applications for solar panels or photovoltaics, covering every technological domain under the sun: from the agricultural industry, where they power irrigation, to remote health care facilities, where they refrigerate medical supplies. Other applications include power generation at various scales and integration into homes and public infrastructure. PV modules are used in photovoltaic systems and include a large variety of electrical devices.
A photovoltaic system, or solar PV system is a power system designed to supply usable solar power by means of photovoltaics. It consists of an arrangement of several components, including solar panels to absorb and directly convert sunlight into electricity, a solar inverter to change the electric current from DC to AC, as well as mounting, cabling and other electrical accessories. PV systems range from small, roof-top mounted or building-integrated systems with capacities from a few to several tens of kilowatts , to large utility-scale power stations of hundreds of megawatts . Nowadays, most PV systems are grid-connected , while stand-alone systems only account for a small portion of the market.
Photosensors are sensors of light or other electromagnetic radiation . [ 113 ] A photo detector has a p–n junction that converts light photons into current. The absorbed photons make electron–hole pairs in the depletion region . Photodiodes and photo transistors are a few examples of photo detectors. Solar cells convert some of the light energy absorbed into electrical energy.
Crystalline silicon photovoltaics are only one type of PV, and while they represent the majority of solar cells produced currently there are many new and promising technologies that have the potential to be scaled up to meet future energy needs. As of 2018, crystalline silicon cell technology serves as the basis for several PV module types, including monocrystalline, multicrystalline, mono PERC, and bifacial. [ 114 ]
Another newer technology, thin-film PV, is manufactured by depositing thin semiconducting layers on a substrate in vacuum. The substrate is often glass or stainless steel, and the semiconducting layers are made of many types of materials, including cadmium telluride (CdTe), copper indium diselenide (CIS), copper indium gallium diselenide (CIGS), and amorphous silicon (a-Si). After being deposited onto the substrate, the semiconducting layers are separated and connected in an electrical circuit by laser scribing. [ 115 ] [ 116 ] Perovskite solar cells are a very efficient solar energy converter and have excellent optoelectronic properties for photovoltaic purposes, but their upscaling from lab-sized cells to large-area modules is still under research. [ 117 ] Thin-film photovoltaic materials may become attractive in the future because of the reduced materials requirements and cost of manufacturing modules consisting of thin films as compared to silicon-based wafers. [ 118 ] In 2019, university labs at Oxford, Stanford and elsewhere reported perovskite solar cells with efficiencies of 20–25%. [ 119 ]
Copper indium gallium selenide (CIGS) is a thin film solar cell based on the copper indium diselenide (CIS) family of chalcopyrite semiconductors . CIS and CIGS are often used interchangeably within the CIS/CIGS community. The cell structure includes soda lime glass as the substrate, Mo layer as the back contact, CIS/CIGS as the absorber layer, cadmium sulfide (CdS) or Zn (S,OH)x as the buffer layer, and ZnO:Al as the front contact. [ 120 ] CIGS is approximately 1/100 the thickness of conventional silicon solar cell technologies. Materials necessary for assembly are readily available, and are less costly per watt of solar cell. CIGS based solar devices resist performance degradation over time and are highly stable in the field.
Reported global warming potential impacts of CIGS range from 20.5 to 58.8 grams CO 2 -eq/kWh of electricity generated for different solar irradiation (1,700 to 2,200 kWh/m 2 /y) and power conversion efficiency (7.8–9.12%). [ 121 ] EPBT ranges from 0.2 to 1.4 years, [ 71 ] while a harmonized EPBT value of 1.393 years has been reported. [ 59 ] Toxicity is an issue within the buffer layer of CIGS modules because it contains cadmium and gallium. [ 54 ] [ 122 ] CIS modules do not contain any heavy metals.
A perovskite solar cell (PSC) is a type of solar cell that includes a perovskite-structured compound, most commonly a hybrid organic–inorganic lead or tin halide-based material as the light-harvesting active layer. [ 123 ] [ 124 ] Perovskite materials, such as methylammonium lead halides and all-inorganic cesium lead halide, are cheap to produce and simple to manufacture.
Dye-sensitized solar cells (DSCs) are a novel thin film solar cell. These solar cells operate under ambient light better than other photovoltaic technologies. They work by absorbing light in a sensitizing dye placed between two charge transport materials. The dye surrounds TiO 2 nanoparticles which are in a sintered network. [ 130 ] TiO 2 acts as the conduction band of an n-type semiconductor: it is the scaffold for the adsorbed dye molecules and transports electrons during excitation. For TiO 2 DSC technology, sample preparation at high temperatures is very effective because higher temperatures produce more suitable textural properties. Another example of a DSC is the copper complex with Cu(II/I) as a redox shuttle with TMBY (4,4',6,6'-tetramethyl-2,2'-bipyridine). DSCs show great performance with artificial and indoor light: in the range of 200 lux to 2,000 lux, these cells operate at a maximum efficiency of 29.7%. [ 131 ]
However, there have been issues with DSCs, many of which come from the liquid electrolyte. The solvent is hazardous and will permeate most plastics. Because it is liquid, it is unstable to temperature variation, leading to freezing in cold temperatures and expansion in warm temperatures, either of which can cause failure. [ 132 ] Another disadvantage is that the solar cell is not ideal for large scale application because of its low efficiency. Some of the benefits of DSCs are that they can be used in a variety of light levels (including cloudy conditions), have a low production cost, and do not degrade under sunlight, giving them a longer lifetime than other types of thin film solar cells.
Other possible future PV technologies include organic, dye-sensitized and quantum-dot photovoltaics. [ 133 ] Organic photovoltaics (OPVs) fall into the thin-film category of manufacturing, and typically operate around the 12% efficiency range, which is lower than the 12–21% typically seen in silicon-based PVs. Because organic photovoltaics require very high purity and are relatively reactive, they must be encapsulated, which vastly increases the cost of manufacturing and means that they are not feasible for large scale-up. Dye-sensitized PVs are similar in efficiency to OPVs but are significantly easier to manufacture. However, these dye-sensitized photovoltaics present storage problems because the liquid electrolyte is toxic and can potentially permeate the plastics used in the cell. Quantum dot solar cells are solution-processed, meaning they are potentially scalable, but currently they peak at 12% efficiency. [ 117 ]
Organic and polymer photovoltaics (OPV) are a relatively new area of research. The traditional OPV cell structure consists of a semi-transparent electrode, an electron-blocking layer, a tunnel junction, a hole-blocking layer and an electrode, with the sun hitting the transparent electrode. OPV replaces silver with carbon as an electrode material, lowering manufacturing cost and making the cells more environmentally friendly. [ 134 ] OPVs are flexible, low weight, and work well with roll-to-roll manufacturing for mass production. [ 135 ] OPV uses "only abundant elements coupled to an extremely low embodied energy through very low processing temperatures using only ambient processing conditions on simple printing equipment enabling energy pay-back times". [ 136 ] Current efficiencies range from 1% to 6.5%; [ 57 ] [ 137 ] however, theoretical analyses show promise beyond 10% efficiency. [ 136 ]
Many different configurations of OPV exist using different materials for each layer. OPV technology rivals existing PV technologies in terms of EPBT even if they currently present a shorter operational lifetime. A 2013 study analyzed 12 different configurations all with 2% efficiency, the EPBT ranged from 0.29 to 0.52 years for 1 m 2 of PV. [ 138 ] The average CO 2 -eq/kWh for OPV is 54.922 grams. [ 139 ]
Thermophotovoltaic (TPV) energy conversion is a direct conversion process from heat to electricity via photons . A basic thermophotovoltaic system consists of a hot object emitting thermal radiation and a photovoltaic cell similar to a solar cell but tuned to the spectrum being emitted from the hot object. [ 140 ]
A photovoltaic–thermoelectric generator (PV-TEG) hybrid system is a type of hybrid PV cell that pairs a photovoltaic (PV) cell with a thermoelectric generator (TEG). [ 141 ] TEGs rely on the Seebeck effect , a phenomenon that occurs when a junction of two conducting materials experiences a temperature difference, thereby inducing an electromotive force. [ 142 ] The resulting voltage is directly proportional to the temperature difference.
During the process of converting light into electricity, heat dissipates, making PV cells less efficient at high temperatures and reducing their lifespan. [ 142 ] By integrating a TEG into the system, heat is channeled away from the PV cell and converted into electricity, thereby improving the cell's efficiency and longevity. [ 143 ]
The thermoelectric figure of merit ZT determines the efficiency of converting heat into electricity, as well as the ability to cool. [ 144 ] Optimizing parameters such as the electrical conductivity (σ), the Seebeck coefficient (S) and the thermal conductivity (κ) is of interest for maximizing efficiency.
$ZT = \frac{\sigma S^{2} T}{\kappa}$
Common thermoelectric materials typically have a ZT value of about 1, corresponding to an efficiency of approximately 10% or less. [ 144 ] While typical TEGs have a low conversion efficiency, ongoing research in thermoelectric materials such as BiTe (ZT = 2.4) , SnSe (ZT = 2.6), and half-Heusler compounds (ZT = 1.6) have led to improvement in efficiency over the years. [ 144 ] [ 143 ] Theoretical predictions indicate greater potential for optimization, with estimated values of 14 for BiTe, 2.6 for SnSe, and 2.2 for half-Heusler. [ 144 ]
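Given ZT and the hot- and cold-side temperatures, the maximum conversion efficiency of a TEG follows from the standard expression η = (1 − Tc/Th)·(√(1+ZT) − 1)/(√(1+ZT) + Tc/Th). The sketch below evaluates the ZT values mentioned above; the operating temperatures are hypothetical:

```python
import math

def zt(sigma, seebeck, kappa, t):
    """Thermoelectric figure of merit: ZT = sigma * S^2 * T / kappa."""
    return sigma * seebeck ** 2 * t / kappa

def teg_max_efficiency(zt_value, t_hot, t_cold):
    """Maximum TEG efficiency for an average figure of merit ZT between t_cold and t_hot (K)."""
    carnot = 1.0 - t_cold / t_hot
    m = math.sqrt(1.0 + zt_value)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# Hypothetical hot/cold sides of 450 K and 300 K:
for name, z in [("typical material", 1.0), ("BiTe", 2.4), ("SnSe", 2.6)]:
    print(f"{name} (ZT = {z}): {teg_max_efficiency(z, 450.0, 300.0):.1%}")
# ZT = 1 gives ~7%, consistent with the 'approximately 10% or less' statement above
```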
A number of solar modules may also be mounted vertically above each other in a tower, if the zenith distance of the Sun is greater than zero, and the tower can be turned horizontally as a whole and each module additionally around a horizontal axis. In such a tower the modules can follow the Sun exactly. Such a device may be described as a ladder mounted on a turnable disk. Each step of that ladder is the middle axis of a rectangular solar panel. In case the zenith distance of the Sun reaches zero, the "ladder" may be rotated to the north or the south to avoid a solar module producing a shadow on a lower one. Instead of an exactly vertical tower one can choose a tower with an axis directed to the polar star , meaning that it is parallel to the rotation axis of the Earth . In this case the angle between the axis and the Sun is always larger than 66 degrees. During a day it is only necessary to turn the panels around this axis to follow the Sun. Installations may be ground-mounted (and sometimes integrated with farming and grazing) [ 145 ] or built into the roof or walls of a building ( building-integrated photovoltaics ).
Where land may be limited, PV can be deployed as floating solar . In 2008 the Far Niente Winery pioneered the world's first "floatovoltaic" system by installing 994 photovoltaic solar panels onto 130 pontoons and floating them on the winery's irrigation pond. [ 146 ] [ 147 ] A benefit of the setup is that the panels are kept at a lower temperature than they would be on land, leading to a higher efficiency of solar energy conversion. The floating panels also reduce the amount of water lost through evaporation and inhibit the growth of algae. [ 148 ]
Concentrator photovoltaics is a technology that, contrary to conventional flat-plate PV systems, uses lenses and curved mirrors to focus sunlight onto small but highly efficient multi-junction solar cells. These systems sometimes use solar trackers and a cooling system to increase their efficiency.
In 2019, the world record for solar cell efficiency, 47.1%, was achieved using multi-junction concentrator solar cells developed at the National Renewable Energy Laboratory, Colorado, US. [ 149 ] The highest efficiencies achieved without concentration include a cell by Sharp Corporation at 35.8%, using a proprietary triple-junction manufacturing technology, in 2009, [ 150 ] and one by Boeing Spectrolab (40.7%, also using a triple-layer design).
There is an ongoing effort to increase the conversion efficiency of PV cells and modules, primarily for competitive advantage. To increase the efficiency of solar cells, it is important to choose a semiconductor material with an appropriate band gap that matches the solar spectrum, which enhances the cell's electrical and optical properties. Improving the method of charge collection also increases efficiency. Several groups of materials are being developed. Ultrahigh-efficiency devices (η>30%) [ 151 ] are made by using GaAs and GaInP2 semiconductors with multijunction tandem cells. High-quality, single-crystal silicon materials are used to achieve high-efficiency, low-cost cells (η>20%).
Recent developments in organic photovoltaic cells (OPVs) have raised power conversion efficiency from 3% to over 15% since their introduction in the 1980s. [ 152 ] To date, the highest reported power conversion efficiencies range from 6.7% to 8.94% for small-molecule OPVs, 8.4% to 10.6% for polymer OPVs, and 7% to 21% for perovskite OPVs. [ 153 ] [ 154 ] OPVs are expected to play a major role in the PV market. Recent improvements have increased efficiency and lowered cost, while remaining environmentally benign and renewable.
Several companies have begun embedding power optimizers into PV modules called smart modules . These modules perform maximum power point tracking (MPPT) for each module individually, measure performance data for monitoring, and provide additional safety features. Such modules can also compensate for shading effects, wherein a shadow falling across a section of a module causes the electrical output of one or more strings of cells in the module to decrease. [ 155 ]
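The source does not say which tracking algorithm smart modules implement; one widely taught method is perturb-and-observe, sketched below. The measure_power callback stands in for the module's own measurement electronics and is an assumption for illustration.

```python
def perturb_and_observe(measure_power, v_start=30.0, step=0.1, iterations=200):
    """Minimal perturb-and-observe MPPT loop.

    Nudge the operating voltage each cycle; if output power rose, keep
    stepping in the same direction, otherwise reverse direction.
    """
    v = v_start
    p_prev = measure_power(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step
        p = measure_power(v)
        if p < p_prev:              # stepped past the maximum: turn around
            direction = -direction
        p_prev = p
    return v                        # voltage near the maximum power point

# Toy power curve (assumed, not measured data) peaking near 35 V:
toy_curve = lambda v: max(0.0, 200.0 - 0.5 * (v - 35.0) ** 2)
print(round(perturb_and_observe(toy_curve), 1))  # ~35.0
```

In a real module the loop runs continuously, so the operating point oscillates slightly around the maximum; smaller steps reduce that ripple at the cost of slower tracking.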
One of the major causes for the decreased performance of cells is overheating. The efficiency of a solar cell declines by about 0.5% for every 1 degree Celsius increase in temperature. This means that a 100 degree increase in surface temperature could decrease the efficiency of a solar cell by about half. Self-cooling solar cells are one solution to this problem. Rather than using energy to cool the surface, pyramid and cone shapes can be formed from silica , and attached to the surface of a solar panel. Doing so allows visible light to reach the solar cells , but reflects infrared rays (which carry heat). [ 156 ]
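The quoted coefficient translates into a simple linear derating estimate. In the sketch below the 20% reference efficiency at 25 °C is an assumed value, and the 0.5%-per-degree figure is applied as a relative loss, matching the halving-at-+100 °C arithmetic above.

```python
def derated_efficiency(eta_ref: float, t_cell: float,
                       t_ref: float = 25.0, coeff: float = 0.005) -> float:
    """Efficiency falls by ~0.5% of its value per degree C above t_ref."""
    return eta_ref * (1.0 - coeff * (t_cell - t_ref))

print(f"{derated_efficiency(0.20, 65.0):.2f}")   # +40 C  -> 0.16 (a fifth of the output lost)
print(f"{derated_efficiency(0.20, 125.0):.2f}")  # +100 C -> 0.10 (about half, as stated above)
```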
The 122 PW of sunlight reaching the Earth's surface is plentiful—almost 10,000 times more than the 13 TW equivalent of average power consumed in 2005 by humans. [ 157 ] This abundance leads to the suggestion that it will not be long before solar energy will become the world's primary energy source. [ 158 ] Additionally, solar radiation has the highest power density (global mean of 170 W/m 2 ) among renewable energies. [ 157 ] [ citation needed ]
Solar power is pollution-free during use, which enables it to cut down on pollution when it is substituted for other energy sources. For example, MIT estimated that 52,000 people per year die prematurely in the U.S. from coal-fired power plant pollution, [ 159 ] and all but one of these deaths could be prevented by using PV to replace coal. [ 160 ] [ 161 ] Production end-wastes and emissions are manageable using existing pollution controls. End-of-use recycling technologies are under development, [ 162 ] and policies are being produced that encourage recycling from producers. [ 163 ]
Solar panels are usually guaranteed for 25 years (but inverters tend to fail sooner), [ 164 ] [ 165 ] with little maintenance or intervention after their initial set-up, so after the initial capital cost of building any solar power plant, operating costs are extremely low compared to existing power technologies.
Rooftop solar can be used locally, thus reducing transmission/distribution losses. [ 166 ]
Compared to fossil and nuclear energy sources, very little research money has been invested in the development of solar cells, so there is considerable room for improvement. Nevertheless, experimental high efficiency solar cells already have efficiencies of over 40% in case of concentrating photovoltaic cells [ 167 ] and efficiencies are rapidly rising while mass-production costs are rapidly falling. [ 168 ]
In some states of the United States, much of the investment in a home-mounted system may be lost if the homeowner moves and the buyer puts less value on the system than the seller. The city of Berkeley developed an innovative financing method to remove this limitation, by adding a tax assessment that is transferred with the home to pay for the solar panels. [ 169 ] Now known as PACE , Property Assessed Clean Energy, 30 U.S. states have duplicated this solution. [ 170 ]
For behind-the-meter rooftop photovoltaic systems, the energy flow becomes two-way. When there is more local generation than consumption, electricity is exported to the grid, allowing for net metering . However, electricity networks traditionally were not designed to deal with two-way energy transfer, which may introduce technical issues. An over-voltage issue may arise as electricity flows from these PV households back to the network. [ 171 ] There are solutions to manage the over-voltage issue, such as regulating PV inverter power factor, installing new voltage and energy control equipment at the electricity distributor level, re-conductoring the electricity wires, and demand-side management. There are often limitations and costs related to these solutions.
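Why reverse power flow raises voltage, and why inverter power-factor control helps, can be seen from the usual feeder approximation ΔV ≈ (P·R + Q·X)/V. The feeder impedance and export figures below are assumptions for illustration only.

```python
def voltage_rise(p_watts: float, q_vars: float,
                 r_ohm: float, x_ohm: float, v_nominal: float = 230.0) -> float:
    """Approximate voltage rise at the end of a feeder injecting P and Q.

    Uses dV ~ (P*R + Q*X) / V; negative Q means the inverter absorbs
    reactive power, which offsets part of the rise.
    """
    return (p_watts * r_ohm + q_vars * x_ohm) / v_nominal

# 5 kW rooftop export into an assumed 0.4 + j0.3 ohm feeder:
print(f"{voltage_rise(5000, 0, 0.4, 0.3):.1f} V rise")       # ~8.7 V at unity power factor
print(f"{voltage_rise(5000, -2000, 0.4, 0.3):.1f} V rise")   # ~6.1 V absorbing 2 kvar
```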
High generation during the middle of the day reduces the net generation demand, but higher peak net demand as the sun goes down can require rapid ramping of utility generating stations, producing a load profile called the duck curve . | https://en.wikipedia.org/wiki/Photovoltaics |
In topology, the Phragmén–Brouwer theorem , introduced by Lars Edvard Phragmén and Luitzen Egbertus Jan Brouwer , states that if X is a normal connected locally connected topological space , then the following two properties are equivalent: (1) if A and B are disjoint closed sets whose union separates X, then either A or B separates X; (2) X is unicoherent , meaning that if X is the union of two closed connected sets, their intersection is connected.
The theorem remains true with the weaker condition that A and B be separated . | https://en.wikipedia.org/wiki/Phragmen–Brouwer_theorem |
In complex analysis , the Phragmén–Lindelöf principle (or method ), first formulated by Lars Edvard Phragmén (1863–1937) and Ernst Leonard Lindelöf (1870–1946) in 1908, is a technique which employs an auxiliary, parameterized function to prove the boundedness of a holomorphic function f {\displaystyle f} (i.e., | f ( z ) | < M ( z ∈ Ω ) {\displaystyle |f(z)|<M\ \ (z\in \Omega )} ) on an unbounded domain Ω {\displaystyle \Omega } when an additional (usually mild) condition constraining the growth of | f | {\displaystyle |f|} on Ω {\displaystyle \Omega } is given. It is a generalization of the maximum modulus principle , which is only applicable to bounded domains.
In the theory of complex functions, it is known that the modulus (absolute value) of a holomorphic (complex differentiable) function in the interior of a bounded region is bounded by its modulus on the boundary of the region. More precisely, if a non-constant function f : C → C {\displaystyle f:\mathbb {C} \to \mathbb {C} } is holomorphic in a bounded region [ 1 ] Ω {\displaystyle \Omega } and continuous on its closure Ω ¯ = Ω ∪ ∂ Ω {\displaystyle {\overline {\Omega }}=\Omega \cup \partial \Omega } , then | f ( z 0 ) | < sup z ∈ ∂ Ω | f ( z ) | {\textstyle |f(z_{0})|<\sup _{z\in \partial \Omega }|f(z)|} for all z 0 ∈ Ω {\displaystyle z_{0}\in \Omega } . This is known as the maximum modulus principle. (In fact, since Ω ¯ {\displaystyle {\overline {\Omega }}} is compact and | f | {\displaystyle |f|} is continuous, there actually exists some w 0 ∈ ∂ Ω {\displaystyle w_{0}\in \partial \Omega } such that | f ( w 0 ) | = sup z ∈ Ω | f ( z ) | {\textstyle |f(w_{0})|=\sup _{z\in \Omega }|f(z)|} .) The maximum modulus principle is generally used to conclude that a holomorphic function is bounded in a region after showing that it is bounded on its boundary.
However, the maximum modulus principle cannot be applied to an unbounded region of the complex plane. As a concrete example, let us examine the behavior of the holomorphic function f ( z ) = exp ( exp ( z ) ) {\displaystyle f(z)=\exp(\exp(z))} in the unbounded strip S = { x + i y : | y | < π / 2 } {\displaystyle S=\left\{x+iy\,:\,|y|<\pi /2\right\}} .
Although | f ( x ± π i / 2 ) | = 1 {\displaystyle |f(x\pm \pi i/2)|=1} , so that | f | {\displaystyle |f|} is bounded on the boundary ∂ S {\displaystyle \partial S} , | f | {\displaystyle |f|} grows rapidly without bound as | z | → ∞ {\displaystyle |z|\to \infty } along the positive real axis. The difficulty here stems from the extremely fast growth of | f | {\displaystyle |f|} along the positive real axis. If the growth rate of | f | {\displaystyle |f|} is guaranteed to not be "too fast," as specified by an appropriate growth condition, the Phragmén–Lindelöf principle can be applied to show that boundedness of f {\displaystyle f} on the region's boundary implies that f {\displaystyle f} is in fact bounded in the whole region, effectively extending the maximum modulus principle to unbounded regions.
Suppose we are given a holomorphic function f {\displaystyle f} and an unbounded region S {\displaystyle S} , and we want to show that | f | ≤ M {\displaystyle |f|\leq M} on S {\displaystyle S} . In a typical Phragmén–Lindelöf argument, we introduce a certain multiplicative factor h ϵ {\displaystyle h_{\epsilon }} satisfying lim ϵ → 0 h ϵ = 1 {\textstyle \lim _{\epsilon \to 0}h_{\epsilon }=1} to "subdue" the growth of f {\displaystyle f} . In particular, h ϵ {\displaystyle h_{\epsilon }} is chosen such that (i): f h ϵ {\displaystyle fh_{\epsilon }} is holomorphic for all ϵ > 0 {\displaystyle \epsilon >0} and | f h ϵ | ≤ M {\displaystyle |fh_{\epsilon }|\leq M} on the boundary ∂ S b d d {\displaystyle \partial S_{\mathrm {bdd} }} of an appropriate bounded subregion S b d d ⊂ S {\displaystyle S_{\mathrm {bdd} }\subset S} ; and (ii): the asymptotic behavior of f h ϵ {\displaystyle fh_{\epsilon }} allows us to establish that | f h ϵ | ≤ M {\displaystyle |fh_{\epsilon }|\leq M} for z ∈ S ∖ S b d d ¯ {\displaystyle z\in S\setminus {\overline {S_{\mathrm {bdd} }}}} (i.e., the unbounded part of S {\displaystyle S} outside the closure of the bounded subregion). This allows us to apply the maximum modulus principle to first conclude that | f h ϵ | ≤ M {\displaystyle |fh_{\epsilon }|\leq M} on S b d d ¯ {\displaystyle {\overline {S_{\mathrm {bdd} }}}} and then extend the conclusion to all z ∈ S {\displaystyle z\in S} . Finally, we let ϵ → 0 {\displaystyle \epsilon \to 0} so that f ( z ) h ϵ ( z ) → f ( z ) {\displaystyle f(z)h_{\epsilon }(z)\to f(z)} for every z ∈ S {\displaystyle z\in S} in order to conclude that | f | ≤ M {\displaystyle |f|\leq M} on S {\displaystyle S} .
In the literature of complex analysis, there are many examples of the Phragmén–Lindelöf principle applied to unbounded regions of differing types, and also a version of this principle may be applied in a similar fashion to subharmonic and superharmonic functions.
To continue the example above, we can impose a growth condition on a holomorphic function f {\displaystyle f} that prevents it from "blowing up" and allows the Phragmén–Lindelöf principle to be applied. To this end, we now include the condition that | f ( z ) | < exp ( A exp ( c | ℜ ( z ) | ) ) {\displaystyle |f(z)|<\exp {\bigl (}A\exp(c\,|\Re (z)|){\bigr )}} for some real constants c < 1 {\displaystyle c<1} and A < ∞ {\displaystyle A<\infty } , for all z ∈ S {\displaystyle z\in S} . It can then be shown that | f ( z ) | ≤ 1 {\displaystyle |f(z)|\leq 1} for all z ∈ ∂ S {\displaystyle z\in \partial S} implies that | f ( z ) | ≤ 1 {\displaystyle |f(z)|\leq 1} in fact holds for all z ∈ S {\displaystyle z\in S} . Thus, we have the following proposition:
Proposition. Let S = { x + i y : | y | < π / 2 } {\displaystyle S=\left\{x+iy\,:\,|y|<\pi /2\right\}} . Let f {\displaystyle f} be holomorphic on S {\displaystyle S} and continuous on S ¯ {\displaystyle {\overline {S}}} , and suppose there exist real constants c < 1 , A < ∞ {\displaystyle c<1,\ A<\infty } such that | f ( z ) | < exp ( A exp ( c | ℜ ( z ) | ) ) {\displaystyle |f(z)|<\exp {\bigl (}A\exp(c\,|\Re (z)|){\bigr )}} for all z ∈ S {\displaystyle z\in S} and | f ( z ) | ≤ 1 {\displaystyle |f(z)|\leq 1} for all z ∈ S ¯ ∖ S = ∂ S {\displaystyle z\in {\overline {S}}\setminus S=\partial S} . Then | f ( z ) | ≤ 1 {\displaystyle |f(z)|\leq 1} for all z ∈ S {\displaystyle z\in S} .
Note that this conclusion fails when c = 1 {\displaystyle c=1} , precisely as the motivating counterexample in the previous section demonstrates. The proof of this statement employs a typical Phragmén–Lindelöf argument: [ 2 ]
Proof: (Sketch) We fix b ∈ ( c , 1 ) {\displaystyle b\in (c,1)} and define for each ϵ > 0 {\displaystyle \epsilon >0} the auxiliary function h ϵ {\displaystyle h_{\epsilon }} by h ϵ ( z ) = e − ϵ ( e b z + e − b z ) {\textstyle h_{\epsilon }(z)=e^{-\epsilon (e^{bz}+e^{-bz})}} . Moreover, for a given a > 0 {\displaystyle a>0} , we define S a {\displaystyle S_{a}} to be the open rectangle in the complex plane enclosed within the vertices { a ± i π / 2 , − a ± i π / 2 } {\displaystyle \{a\pm i\pi /2,-a\pm i\pi /2\}} . Now, fix ϵ > 0 {\displaystyle \epsilon >0} and consider the function f h ϵ {\displaystyle fh_{\epsilon }} . Because one can show that | h ϵ ( z ) | ≤ 1 {\displaystyle |h_{\epsilon }(z)|\leq 1} for all z ∈ S ¯ {\displaystyle z\in {\overline {S}}} , it follows that | f ( z ) h ϵ ( z ) | ≤ 1 {\displaystyle |f(z)h_{\epsilon }(z)|\leq 1} for z ∈ ∂ S {\displaystyle z\in \partial S} . Moreover, one can show for z ∈ S ¯ {\displaystyle z\in {\overline {S}}} that | f ( z ) h ϵ ( z ) | → 0 {\displaystyle |f(z)h_{\epsilon }(z)|\to 0} uniformly as | ℜ ( z ) | → ∞ {\displaystyle |\Re (z)|\to \infty } . This allows us to find an x 0 {\displaystyle x_{0}} such that | f ( z ) h ϵ ( z ) | ≤ 1 {\displaystyle |f(z)h_{\epsilon }(z)|\leq 1} whenever z ∈ S ¯ {\displaystyle z\in {\overline {S}}} and | ℜ ( z ) | ≥ x 0 {\displaystyle |\Re (z)|\geq x_{0}} . Now consider the bounded rectangular region S x 0 {\displaystyle S_{x_{0}}} . We have established that | f ( z ) h ϵ ( z ) | ≤ 1 {\displaystyle |f(z)h_{\epsilon }(z)|\leq 1} for all z ∈ ∂ S x 0 {\displaystyle z\in \partial S_{x_{0}}} . Hence, the maximum modulus principle implies that | f ( z ) h ϵ ( z ) | ≤ 1 {\displaystyle |f(z)h_{\epsilon }(z)|\leq 1} for all z ∈ S x 0 ¯ {\displaystyle z\in {\overline {S_{x_{0}}}}} . Since | f ( z ) h ϵ ( z ) | ≤ 1 {\displaystyle |f(z)h_{\epsilon }(z)|\leq 1} also holds whenever z ∈ S {\displaystyle z\in S} and | ℜ ( z ) | > x 0 {\displaystyle |\Re (z)|>x_{0}} , we have in fact shown that | f ( z ) h ϵ ( z ) | ≤ 1 {\displaystyle |f(z)h_{\epsilon }(z)|\leq 1} holds for all z ∈ S {\displaystyle z\in S} . Finally, because f h ϵ → f {\displaystyle fh_{\epsilon }\to f} as ϵ → 0 {\displaystyle \epsilon \to 0} , we conclude that | f ( z ) | ≤ 1 {\displaystyle |f(z)|\leq 1} for all z ∈ S {\displaystyle z\in S} . Q.E.D.
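The two bounds on h ϵ {\displaystyle h_{\epsilon }} asserted in this sketch follow from a short computation, written out below for completeness (a routine verification, not part of the cited source). For z = x + i y {\displaystyle z=x+iy} with | y | ≤ π / 2 {\displaystyle |y|\leq \pi /2} and 0 < b < 1 {\displaystyle 0<b<1} :

```latex
% Since |by| <= b*pi/2 < pi/2, we have cos(by) >= cos(b*pi/2) > 0, hence
\begin{aligned}
|h_\epsilon(z)| &= \exp\!\bigl(-\epsilon\,\Re(e^{bz}+e^{-bz})\bigr)
                 = \exp\!\bigl(-\epsilon\,(e^{bx}+e^{-bx})\cos(by)\bigr) \;\le\; 1,\\
|f(z)\,h_\epsilon(z)| &\le \exp\!\bigl(A e^{c|x|}-\epsilon\cos(b\pi/2)\,e^{b|x|}\bigr)
  \;\longrightarrow\; 0 \quad\text{as } |x|\to\infty,
\end{aligned}
```

where the second line uses e b x + e − b x ≥ e b | x | {\displaystyle e^{bx}+e^{-bx}\geq e^{b|x|}} and the fact that b > c {\displaystyle b>c} makes the negative term dominate.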
A particularly useful statement proved using the Phragmén–Lindelöf principle bounds holomorphic functions on a sector of the complex plane if it is bounded on its boundary. This statement can be used to give a complex analytic proof of the Hardy's uncertainty principle , which states that a function and its Fourier transform cannot both decay faster than exponentially. [ 3 ]
Proposition. Let F {\displaystyle F} be a function that is holomorphic in a sector S = { z : α < arg z < β } {\displaystyle S=\left\{z\,:\,\alpha <\arg z<\beta \right\}} of central angle β − α = π / λ {\displaystyle \beta -\alpha =\pi /\lambda } , and continuous on its boundary. If | F ( z ) | ≤ 1 {\displaystyle |F(z)|\leq 1} ( 1 ) for z ∈ ∂ S {\displaystyle z\in \partial S} , and | F ( z ) | ≤ C e | z | ρ {\displaystyle |F(z)|\leq Ce^{|z|^{\rho }}} ( 2 ) for all z ∈ S {\displaystyle z\in S} , where ρ ∈ [ 0 , λ ) {\displaystyle \rho \in [0,\lambda )} and C > 0 {\displaystyle C>0} , then | F ( z ) | ≤ 1 {\displaystyle |F(z)|\leq 1} holds also for all z ∈ S {\displaystyle z\in S} .
The condition ( 2 ) can be relaxed to
with the same conclusion.
In practice the point 0 is often transformed into the point ∞ of the Riemann sphere . This gives a version of the principle that applies to strips, for example bounded by two lines of constant real part in the complex plane. This special case is sometimes known as Lindelöf's theorem .
Carlson's theorem is an application of the principle to functions bounded on the imaginary axis. | https://en.wikipedia.org/wiki/Phragmén–Lindelöf_principle |
Phreatic is a term used in hydrology to refer to aquifers, in speleology to refer to cave passages, and in volcanology to refer to a type of volcanic eruption.
The term phreatic (the word originates from the Greek phrear , phreat- meaning "well" or "spring") is used in hydrology and the earth sciences to refer to matters relating to groundwater (an aquifer ) below the water table . The term 'phreatic surface' indicates the location where the pore water pressure is under atmospheric conditions (i.e., the pressure head is zero). This surface usually coincides with the water table . The slope of the phreatic surface is assumed to indicate the direction of groundwater movement in an unconfined aquifer .
The phreatic zone , below the phreatic surface where rock and soil are saturated with water, is the counterpart of the vadose zone , or unsaturated zone, above. Unconfined aquifers are also called phreatic aquifers because the phreatic surface provides their upper boundary.
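The statement that the slope of the phreatic surface indicates the direction of groundwater movement follows from Darcy's law, in which specific discharge is proportional to the hydraulic gradient. A minimal sketch, with an assumed hydraulic conductivity:

```python
def darcy_flux(k: float, head_drop: float, distance: float) -> float:
    """Darcy's law: specific discharge q = K * (dh / dx).

    k: hydraulic conductivity (m/day); head_drop: fall of the phreatic
    surface (m) over the horizontal distance (m). Flow points downslope.
    """
    return k * head_drop / distance

# Water table falling 0.5 m over 100 m in a sand with K ~ 10 m/day (assumed):
print(f"{darcy_flux(10.0, 0.5, 100.0):.3f} m/day")  # 0.050 m/day
```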
In speleogenesis , a division of speleology, 'phreatic action' forms cave passages by dissolving the limestone in all directions, [ 1 ] as opposed to ' vadose action', whereby a stream running in a cave passage erodes a trench in the floor. [ 2 ] It occurs when the passage is full of water, and therefore normally only when it is below the water table, and only if the water is not saturated with calcium carbonate or calcium magnesium carbonate . A cave passage formed in this way is characteristically circular or oval in cross-section as limestone is dissolved on all surfaces. [ 3 ]
Many cave passages are formed by a combination of phreatic action followed by vadose action. Such passages form a keyhole cross-section: a round-shaped section at the top and a rectangular trench at the bottom.
A phreatic or steam-blast eruption occurs when magma heats ground or surface water.
Phreatobites are animals living within the phreatic zone of groundwater aquifers.
Phreatophytes are deep-rooted plants that obtain a significant portion of the water they need from the phreatic zone or near it.
| https://en.wikipedia.org/wiki/Phreatic |
The top flow line of a saturated soil mass, below which seepage takes place, is called the phreatic line.
Hydrostatic pressure acts below the phreatic line whereas atmospheric pressure exists above the phreatic line. This line separates a saturated soil mass from an unsaturated soil mass. It is not an equipotential line, but a flow line.
For an earthen dam, the phreatic line approximately assumes the shape of a parabola.
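For a homogeneous dam the curve is often approximated by Kozeny's base parabola: with the focus placed at the downstream drain, each point is equidistant from the focus and a directrix, giving x = (y² − y₀²)/(2y₀). The sketch below evaluates that curve for an assumed focal parameter y₀; the numbers are illustrative, not a design calculation.

```python
def base_parabola_x(y: float, y0: float) -> float:
    """Kozeny base parabola: x = (y**2 - y0**2) / (2 * y0).

    x is measured horizontally from the focus (at the toe drain),
    y vertically; y0 is the focal parameter set by the dam geometry.
    """
    return (y * y - y0 * y0) / (2.0 * y0)

y0 = 2.0  # assumed focal parameter in metres
for y in (2.0, 4.0, 6.0, 8.0):
    print(f"y = {y:.0f} m -> x = {base_parabola_x(y, y0):.1f} m")
# y = 2 m sits at the focus (x = 0); the curve flattens as it rises upstream
```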
| https://en.wikipedia.org/wiki/Phreatic_line |
Phred ( Phil 's Read Editor [ 1 ] ) is a computer program for base calling , that is, identifying a nucleobase sequence from fluorescence "trace" data generated by an automated DNA sequencer that uses electrophoresis and the four-fluorescent-dye method. [ 2 ] [ 3 ] When originally developed, Phred produced significantly fewer errors in the data sets examined than other methods, averaging 40–50% fewer errors. Phred quality scores have become widely accepted to characterize the quality of DNA sequences, and can be used to compare the efficacy of different sequencing methods.
Fluorescent-dye DNA sequencing is a molecular biology technique that involves labeling single-strand DNA sequences of varied length with four fluorescent dyes (corresponding to the four different bases used in DNA) and subsequently separating the DNA sequences by "slab gel" or capillary electrophoresis (see DNA Sequencing ). The electrophoresis run is monitored by a CCD on the DNA sequencer, producing time "trace" data (or a " chromatogram ") of the fluorescent "peaks" that passed the CCD point. By examining the fluorescence peaks in the trace data, the order of individual bases ( nucleobases ) in the DNA can be determined. Since the intensity, shape and location of a fluorescence peak are not always consistent or unambiguous, however, it can be difficult or time-consuming to determine (or "call") the correct bases for the peaks accurately when done manually.
Automated DNA sequencing techniques have revolutionized the field of molecular biology – generating vast amounts of DNA sequence data. However, the sequence data is produced at a significantly higher rate than can be manually processed (i.e. interpreting the trace data to produce the sequence data), thereby creating a bottleneck. To remove the bottleneck, both automated software that can speed up the processing with improved accuracy and a reliable measure of the accuracy are needed. To meet this need, many software programs have been developed. One such program is Phred.
Phred was originally conceived in the early 1990s by Phil Green , then a professor at Washington University in St. Louis . LaDeana Hillier , Michael Wendl , David Ficenec, Tim Gleeson, Alan Blanchard, and Richard Mott also contributed to the codebase and algorithm. Green moved to University of Washington in the mid 1990s, after which development was primarily managed by himself and Brent Ewing. Phred played a notable role in the Human Genome Project , where large amounts of sequence data were processed by automated scripts. It was at the time the most widely used base-calling software program by both academic and commercial DNA sequencing laboratories because of its high base calling accuracy. [ 4 ] Phred is distributed commercially by CodonCode Corporation , and used to perform the "Call bases" function in the program CodonCode Aligner . It is also used by the MacVector plugin Assembler.
Phred uses a four-phase procedure, as outlined by Ewing et al., to determine a sequence of base calls from the processed DNA sequence tracing: (1) predicted peak locations are determined, using the fact that peaks are, on average, evenly spaced along the trace; (2) observed peaks are identified in the trace; (3) observed peaks are matched to the predicted peak locations, omitting some peaks and splitting others; and (4) the observed peaks left unmatched are examined, and those judged likely to represent real bases are added to the sequence.
The entire procedure is rapid, usually taking less than half a second per trace. The results can be output as a PHD file, which contains base data as triples consisting of the base call, quality, and position. [ 5 ]
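A PHD file stores the triples mentioned above one per line, as "base quality position", between BEGIN_DNA and END_DNA markers. The reader below is a minimal sketch of that layout (a simplified assumption, not a complete parser for the format), and the file name in the usage comment is hypothetical.

```python
def read_phd_triples(path):
    """Yield (base, quality, trace_position) triples from a PHD file.

    Only lines between BEGIN_DNA and END_DNA are parsed; each is
    expected to look like: "a 40 1234".
    """
    in_dna = False
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line == "BEGIN_DNA":
                in_dna = True
            elif line == "END_DNA":
                in_dna = False
            elif in_dna and line:
                base, quality, position = line.split()
                yield base, int(quality), int(position)

# Usage: keep only calls with Phred quality >= 20.
# good = [t for t in read_phd_triples("read1.phd.1") if t[1] >= 20]
```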
Phred is often used together with another software program called Phrap , which is a program for DNA sequence assembly. Phrap was routinely used in some of the largest sequencing projects in the Human Genome Sequencing Project and is currently one of the most widely used DNA sequence assembly programs in the biotech industry. Phrap uses Phred quality scores to determine highly accurate consensus sequences and to estimate the quality of the consensus sequences. Phrap also uses Phred quality scores to estimate whether discrepancies between two overlapping sequences are more likely to arise from random errors, or from different copies of a repeated sequence. | https://en.wikipedia.org/wiki/Phred_(software) |
A Phred quality score is a measure of the quality of the identification of the nucleobases generated by automated DNA sequencing . [ 1 ] [ 2 ] It was originally developed for the computer program Phred to help in the automation of DNA sequencing in the Human Genome Project . Phred quality scores are assigned to each nucleotide base call in automated sequencer traces. [ 1 ] [ 2 ] The FASTQ format encodes phred scores as ASCII characters alongside the read sequences. Phred quality scores have become widely accepted to characterize the quality of DNA sequences, and can be used to compare the efficacy of different sequencing methods. Perhaps the most important use of Phred quality scores is the automatic determination of accurate, quality-based consensus sequences .
Phred quality scores Q {\displaystyle Q} are logarithmically related to the base-calling error probabilities P {\displaystyle P} and defined as [ 2 ]
Q = − 10 log 10 P {\displaystyle Q=-10\ \log _{10}P} .
This relation can also be written as
P = 10 − Q 10 {\displaystyle P=10^{\frac {-Q}{10}}} .
For example, if Phred assigns a quality score of 30 to a base, the chances that this base is called incorrectly are 1 in 1000.
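Both formulas, and the ASCII encoding mentioned above, are easy to check in a few lines. The snippet below assumes the Sanger-style offset of 33 used by modern FASTQ files.

```python
import math

def phred_from_error_prob(p: float) -> float:
    """Q = -10 * log10(P)."""
    return -10.0 * math.log10(p)

def error_prob_from_phred(q: float) -> float:
    """P = 10 ** (-Q / 10)."""
    return 10.0 ** (-q / 10.0)

assert round(phred_from_error_prob(0.001)) == 30        # 1-in-1000 error -> Q30
assert abs(error_prob_from_phred(30) - 0.001) < 1e-12   # and back again

# FASTQ (Sanger/Illumina 1.8+) stores Q as printable ASCII, offset by 33:
quality_line = "II?+"
print([ord(ch) - 33 for ch in quality_line])  # [40, 40, 30, 10]
```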
The Phred quality score is the negative of the error probability relative to the reference level P = 1 {\displaystyle P=1} , expressed in decibels (dB) .
The idea of sequence quality scores can be traced back to the original description of the SCF file format by Rodger Staden 's group in 1992. [ 3 ] In 1995, Bonfield and Staden proposed a method to use base-specific quality scores to improve the accuracy of consensus sequences in DNA sequencing projects. [ 4 ]
However, early attempts to develop base-specific quality scores [ 5 ] [ 6 ] had only limited success.
The first program to develop accurate and powerful base-specific quality scores was the program Phred . Phred was able to calculate highly accurate quality scores that were logarithmically linked to the error probabilities. Phred was quickly adopted by all the major genome sequencing centers as well as many other laboratories; the vast majority of the DNA sequences produced during the Human Genome Project were processed with Phred.
After Phred quality scores became the required standard in DNA sequencing, other manufacturers of DNA sequencing instruments, including Li-Cor and ABI , developed similar quality scoring metrics for their base calling software. [ 7 ]
Phred's approach to base calling and calculating quality scores was outlined by Ewing et al. To determine quality scores, Phred first calculates several parameters related to peak shape and peak resolution at each base. Phred then uses these parameters to look up a corresponding quality score in huge lookup tables. These lookup tables were generated from sequence traces where the correct sequence was known, and are hard-coded in Phred; different lookup tables are used for different sequencing chemistries and machines. An evaluation of the accuracy of Phred quality scores for a number of variations in sequencing chemistry and instrumentation showed that Phred quality scores are highly accurate. [ 8 ]
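Conceptually, the lookup amounts to binning a handful of trace-derived parameters and indexing a pre-trained table. The sketch below is purely illustrative: the parameter names, bin edges, and table values are invented for the example, and the real Phred tables are far larger.

```python
import bisect

# Hypothetical bin edges for two trace parameters (invented values):
SPACING_EDGES = [1.1, 1.3, 1.6]      # e.g. peak-spacing ratio
PEAK_RATIO_EDGES = [0.1, 0.3, 0.6]   # e.g. uncalled/called peak ratio

# Hypothetical quality table indexed by (spacing_bin, ratio_bin); in Phred
# such tables were trained on traces whose true sequence was known.
QUALITY_TABLE = [
    [45, 38, 30, 20],
    [35, 30, 22, 15],
    [25, 20, 15, 10],
    [15, 12, 8, 5],
]

def lookup_quality(spacing: float, peak_ratio: float) -> int:
    """Map two trace-derived parameters to a Phred-style quality score."""
    i = bisect.bisect(SPACING_EDGES, spacing)
    j = bisect.bisect(PEAK_RATIO_EDGES, peak_ratio)
    return QUALITY_TABLE[i][j]

print(lookup_quality(1.05, 0.05))  # clean, well-resolved peak -> 45
print(lookup_quality(1.70, 0.70))  # distorted peak -> 5
```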
Phred was originally developed for "slab gel" sequencing machines like the ABI373. When originally developed, Phred had a lower base calling error rate than the manufacturer's base calling software, which also did not provide quality scores. However, Phred was only partially adapted to the capillary DNA sequencers that became popular later. In contrast, instrument manufacturers like ABI continued to adapt their base calling software to changes in sequencing chemistry, and have included the ability to create Phred-like quality scores. Therefore, the need to use Phred for base calling of DNA sequencing traces has diminished, and using the manufacturer's current software versions can often give more accurate results.
Phred quality scores are used for assessment of sequence quality, recognition and removal of low-quality sequence (end clipping), and determination of accurate consensus sequences.
Originally, Phred quality scores were primarily used by the sequence assembly program Phrap . Phrap was routinely used in some of the largest sequencing projects in the Human Genome Sequencing Project and is currently one of the most widely used DNA sequence assembly programs in the biotech industry. Phrap uses Phred quality scores to determine highly accurate consensus sequences and to estimate the quality of the consensus sequences. Phrap also uses Phred quality scores to estimate whether discrepancies between two overlapping sequences are more likely to arise from random errors, or from different copies of a repeated sequence.
Within the Human Genome Project , the most important use of Phred quality scores was for automatic determination of consensus sequences. Before Phred and Phrap, scientists had to carefully look at discrepancies between overlapping DNA fragments; often, this involved manual determination of the highest-quality sequence, and manual editing of any errors. Phrap's use of Phred quality scores effectively automated finding the highest-quality consensus sequence; in most cases, this completely circumvents the need for any manual editing. As a result, the estimated error rate in assemblies that were created automatically with Phred and Phrap is typically substantially lower than the error rate of manually edited sequence.
As of 2009, many commonly used software packages make use of Phred quality scores, albeit to differing extents. Programs like Sequencher use quality scores for display, end clipping, and consensus determination; other programs like CodonCode Aligner also implement quality-based consensus methods.
Quality scores are normally stored together with the nucleotide sequence in the widely accepted FASTQ format . They account for about half of the required disk space in the FASTQ format (before compression), so compressing the quality values can significantly reduce storage requirements and speed up analysis and transmission of sequencing data. Both lossless and lossy compression have recently been considered in the literature. For example, the algorithm QualComp [ 9 ] performs lossy compression with a rate (number of bits per quality value) specified by the user. Based on rate-distortion theory results, it allocates the number of bits so as to minimize the MSE (mean squared error) between the original (uncompressed) and the reconstructed (after compression) quality values. Other algorithms for compression of quality values include SCALCE, [ 10 ] Fastqz [ 11 ] and, more recently, QVZ, [ 12 ] AQUa [ 13 ] and the MPEG-G standard, which is currently under development by the MPEG standardisation working group. The latter two are lossless compression algorithms that provide an optional controlled lossy transformation approach. For example, SCALCE reduces the alphabet size based on the observation that "neighboring" quality values are similar in general. | https://en.wikipedia.org/wiki/Phred_quality_score |
pHT01 is a plasmid used as a cloning vector for expressing proteins in Bacillus subtilis . It is 7,956 base pairs in length. [ 1 ] pHT01 carries Pgrac, an artificial, strong, IPTG -inducible promoter consisting of the Bacillus subtilis groE promoter, a lac operator , and the gsiB ribosome binding site ; the promoter was first found on plasmid pNDH33. The plasmid carries replication regions from pMTLBs72, [ 2 ] as well as genes conferring resistance to ampicillin and chloramphenicol .
Plasmid pHT01 is generally stable in both B. subtilis and Escherichia coli , and can be used for protein expression in these host strains. pNDH33/pHT01 have been used to produce up to 16% of total protein output in B. subtilis . P grac 100 is an improved version of P grac , which can produce up to 30% of total cellular proteins in B. subtilis . [ 3 ]
| https://en.wikipedia.org/wiki/Pht01 |
Phthalates ( US : / ˈ θ æ l eɪ t s / , UK : / ˈ ( f ) θ æ l eɪ t s / , / ˈ ( f ) θ æ l ɪ t s / [ 1 ] [ 2 ] ), or phthalate esters , are esters of phthalic acid . They are mainly used as plasticizers , i.e., substances added to plastics to increase their flexibility, transparency, durability, and longevity. They are used primarily to soften polyvinyl chloride (PVC). While phthalates are commonly used as plasticizers, not all plasticizers are phthalates. The two terms are specific, unique, and not used interchangeably.
Lower-molecular-weight phthalates are being replaced in many products in the United States, Canada, and the European Union over health concerns. [ 3 ] [ 4 ] They are being replaced by higher-molecular-weight phthalates as well as non-phthalic plasticizers.
Phthalates are commonly ingested in small quantities via the diet. There are numerous forms of phthalates not regulated by governments. One of the most commonly known phthalates is bis(2-ethylhexyl) phthalate (DEHP). In many countries, DEHP is regulated as a toxin , and is banned from use in broad categories of consumer goods, such as cosmetics , children's toys, medical devices, and food packaging.
Phthalate esters are produced industrially by the reaction of phthalic anhydride with excess alcohol . Often the phthalic anhydride is molten. The monoesterification occurs readily, but the second step is slow: C6H4(CO)2O + ROH → C6H4(CO2R)(CO2H), followed by C6H4(CO2R)(CO2H) + ROH ⇌ C6H4(CO2R)2 + H2O.
The conversion is conducted at high temperatures to drive off the water. Typical catalysts are based on tin or titanium alkoxides or carboxylates. [ 5 ]
The properties of the phthalate can be varied by changing the alcohol. [ 6 ] Around 30 are, or have been, commercially important. Phthalates' share of the global plasticisers market has been decreasing since around 2000; however, total production has been increasing, with around 5.5 million tonnes made in 2015, [ 7 ] up from around 2.7 million tonnes in the 1980s. [ 8 ] The explanation for this is the increasing size of the plasticiser market, largely driven by increases in PVC production, which nearly doubled between 2000 and 2020. [ 9 ] The People's Republic of China is the largest consumer, accounting for around 45% of all use. Europe and the United States together account for around 25% of use, with the remainder widely spread around the world. [ 7 ]
Between 90 and 95% of all phthalates are used as plasticisers for the production of flexible PVC. [ 11 ] [ 12 ] They were the first commercially important compounds for this role, [ 13 ] a historic advantage that has led to them becoming firmly embedded in flexible PVC technology. [ 14 ] Among the common plastics , PVC is unique in its acceptance of large amounts of plasticizer with gradual changes in physical properties from a rigid solid to a soft gel. [ 14 ] Phthalates derived from alcohols with 7-13 carbon atoms occupy a privileged position as general purpose plasticizers, suitable for almost all flexible PVC applications. [ 15 ] [ 14 ] Phthalates larger than this have limited compatibility in PVC, with di(isotridecyl) phthalate representing the practical upper limit. Conversely, plasticizers derived from alcohols with 4-6 carbon atoms are too volatile to be used on their own, but have been used alongside other compounds as secondary plasticizers, where they improve low-temperature flexibility. Compounds derived from alcohols with 1-3 carbon atoms are not used as plasticizers in PVC at all, due to excessive fuming at processing temperatures (typically 180-210 °C). [ 14 ]
Historically, DINP, DEHP , BBP, DBP, and DIHP have been the most important phthalates; however, many of these now face regulatory pressure and gradual phase-outs. Almost all phthalates derived from alcohols with between 3 and 8 carbons are classed as toxic by ECHA . This includes Bis(2-ethylhexyl) phthalate (DEHP or DOP), which has long been the most widely used phthalate, with commercial production dating back to the 1930s. [ 16 ] [ 17 ] In the EU, the use of DEHP is restricted under REACH and it can only be used in specific cases if an authorisation has been granted; similar restrictions exist in many other jurisdictions. Despite this, the phase-out of DEHP is slow, and it was still the most frequently used plasticizer in 2018, with an estimated global production of 3.24 million tonnes. [ 17 ] DINP and DIDP are used as substitutes for DEHP in many applications, as they are not classified as hazardous. [ 18 ] Non-phthalate plasticizers are also increasingly used.
Almost 90% of all plasticizers are used in PVC, giving this material improved flexibility and durability. [ 19 ] The majority is used in films and cable sheathing. [ 17 ] Flexible PVC can consist of over 85% plasticizer by mass, whereas unplasticized PVC (uPVC) contains none.
Phthalates see use as plasticisers in various other polymers, with applications centred around coatings such as lacquers, varnishes, and paints. The addition of phthalates imparts some flexibility to these materials, reducing their tendency to chip.
Phthalates derived from alcohols with 1–4 carbon atoms are used as plasticisers for cellulose -type plastics, such as cellulose acetate , nitrocellulose and cellulose acetate butyrate , with commonly encountered applications including nail polish . Most phthalates are also compatible with alkyds and acrylic resins , which are used in both oil- and emulsion-based paints. [ citation needed ]
Other plasticised polymer systems include polyvinyl butyral (particularly the forms used to make laminated glass ), PVA and its co-polymers like PVCA . They are also compatible in nylon , polystyrene , polyurethanes , and certain rubbers ; but their use in these is very limited. [ 19 ]
Phthalates can plasticise ethyl cellulose , polyvinyl acetate phthalate (PVAP) and cellulose acetate phthalate (CAP), all of which are used to make enteric coatings for tablet and capsule medications. These coatings protect drugs from the acidity of the stomach, but allow their release and absorption in the intestines. [ citation needed ]
Phthalate esters are widely used as solvents for highly reactive organic peroxides . Thousands of tonnes are consumed annually for this purpose. The great advantage offered by these esters is that they are phlegmatizers , i.e. they minimize the explosive tendencies of a family of chemical compounds that otherwise are potentially dangerous to handle. [ 21 ] Phthalates have also been used for producing plastic explosives such as Semtex . [ citation needed ]
Relatively minor amounts of some phthalates find use in personal-care items such as eye shadow, moisturizer, nail polish, liquid soap, and hair spray. [ 22 ] [ 23 ] [ 5 ] Low-molecular-weight phthalates like dimethyl phthalate and diethyl phthalate are used as fixatives for perfumes. [ 24 ] [ 25 ] Dimethyl phthalate has also been used as an insect repellent and is especially useful against ixodid ticks responsible for Lyme disease [ 26 ] and species of mosquitoes such as Anopheles stephensi , Culex pipiens and Aedes aegypti . [ 27 ] [ 28 ] [ 29 ]
Diallyl phthalate is used to prepare vinyl ester resins with good electrical insulation properties. These resins are used in the manufacture of electronic components. [ citation needed ]
The development of cellulose nitrate plastic in 1846 led to the patenting of castor oil in 1856 for use as the first plasticizer. In 1870, camphor became the more favored plasticizer for cellulose nitrate. Phthalates were first introduced in the 1920s and quickly replaced the volatile and odorous camphor. In 1931, the commercial availability of polyvinyl chloride (PVC) and the development of di(2-ethylhexyl) phthalate (DEHP) began the boom of the plasticized PVC industry. [ citation needed ]
"Phthalate esters" usually refers to dialkyl esters of phthalic acid (also called 1,2-benzenedicarboxylic acid, not to be confused with the structurally isomeric terephthalic or isophthalic acids); the name "phthalate" derives from phthalic acid , which itself is derived from the word " naphthalene ". When added to plastics, phthalates allow the polyvinyl polymers to slide against one another. Phthalates have a clear, syrupy liquid consistency and show low water solubility, high oil solubility, and low volatility. The polar carboxyl group contributes little to the physical properties of the phthalates, except when R and R' are very small (such as ethyl or methyl groups). Phthalates are colorless, odorless liquids produced by the reaction of phthalic anhydride with alcohols . [ citation needed ]
The mechanism by which phthalates and related compounds plasticize polar polymers has been a subject of intense study since the 1960s. [ 30 ] The mechanism is one of polar interactions between the polar centres of the phthalate molecule (the C=O functionality) and the positively charged areas of the vinyl chain, typically residing on the carbon atom of the carbon-chlorine bond. For this to be established, the polymer must be heated in the presence of the plasticizer, first above the Tg of the polymer and then into a melt state. This enables an intimate mix of polymer and plasticizer to be formed, and for these interactions to occur. When cooled, these interactions remain and the network of PVC chains cannot reform (as is present in unplasticized PVC, or PVC-U). The alkyl chains of the phthalate then screen the PVC chains from each other as well. They are blended within the plastic article as a result of the manufacturing process. [ 31 ]
Because they are not chemically bonded to the host plastics , phthalates are released from the plastic article by relatively gentle means. For example, they can be extracted with organic solvents and, to some extent, by handling. [ citation needed ]
Being inexpensive, nontoxic (in an acute sense), colorless, noncorrosive, biodegradable, and with easily tuned physical properties, phthalate esters are nearly ideal plasticizers. Among the numerous alternative plasticizers are dioctyl terephthalate (DEHT) (a terephthalate isomeric with DEHP) and 1,2-cyclohexane dicarboxylic acid diisononyl ester (DINCH) (a hydrogenated version of DINP). Both DEHT and DINCH have been used in high volumes for a variety of products used in contact with humans as alternative plasticizers for DEHP and DINP. Some of these products include medical devices, toys, and food packaging . [ 32 ] DEHT and DINCH are more hydrophobic than other phthalate alternatives such as bis(2-ethylhexyl) adipate (DEHA) and diisodecyl adipate (DIDA). Since alternative plasticizers such as DEHT and DINCH are more likely to bind to organic matter and airborne particles indoors, exposure occurs primarily through food consumption and contact with dust. [ 32 ]
Many bio-based plasticizers based on vegetable oil have been developed. [ 33 ]
Due to the ubiquity of plasticized plastics, people are often exposed to phthalates. For example, most Americans tested by the Centers for Disease Control and Prevention have metabolites of multiple phthalates in their urine. [ 34 ] Exposure to phthalates is more likely in women and people of color. [ 35 ] Differences were found between Mexican-Americans, blacks, and whites in the overall risk of disturbance of glucose homeostasis: Mexican-Americans had a fasting blood glucose (FBG) increase of 5.82 mg/dL, blacks an increase of 3.63 mg/dL, and whites an increase of 1.79 mg/dL, evidence of an increased risk for minorities. [ 35 ] Overall, the study concludes that phthalates may alter glucose homeostasis and insulin sensitivity . Higher levels of some phthalate metabolites were associated with elevated FBG, fasting insulin, and insulin resistance. Non-Hispanic black women and Hispanic women have higher levels of some phthalate metabolites. [ 36 ]
Higher dust concentrations of DEHP were found in homes of children with asthma and allergies, compared with healthy children's homes. [ 37 ] The author of the study stated, "The concentration of DEHP was found to be significantly associated with wheezing in the last 12 months as reported by the parents." [ 37 ] Phthalates were found in almost every sampled home in Bulgaria. The same study found that DEHP, BBzP, and DnOP were in significantly higher concentrations in dust samples collected in homes where polishing agents were used. Data on flooring materials were collected, but no significant difference in concentrations was found between polish-free homes with balatum (PVC or linoleum) flooring and those with wood flooring. Frequent dusting did decrease the concentration. [ 37 ]
In general, children's exposure to phthalates is greater than that of adults. In a 1990s Canadian study that modeled ambient exposures, it was estimated that daily exposure to DEHP was 9 μg/kg bodyweight/day in infants, 19 μg/kg bodyweight/day in toddlers, 14 μg/kg bodyweight/day in children, and 6 μg/kg bodyweight/day in adults. [ 38 ] Infants and toddlers are at the greatest risk of exposure, because of their mouthing behavior. Body-care products containing phthalates are a source of exposure for infants. The authors of a 2008 study "observed that reported use of infant lotion, infant powder, and infant shampoo were associated with increased infant urine concentrations of [phthalate metabolites], and this association is strongest in younger infants. These findings suggest that dermal exposures may contribute significantly to phthalate body burden in this population." Although they did not examine health outcomes, they noted that "Young infants are more vulnerable to the potential adverse effects of phthalates given their increased dosage per unit body surface area, metabolic capabilities, and developing endocrine and reproductive systems." [ 39 ]
Infants and hospitalized children are particularly susceptible to phthalate exposure. Medical devices and tubing may contain 20–40% Di(2-ethylhexyl) phthalate (DEHP) by weight, which "easily leach out of tubing when heated (as with warm saline / blood)". [ 40 ] Several medical devices contain phthalates including, but not limited to, IV tubing, gloves, nasogastric tubes, and respiratory tubing. The Food and Drug Administration did an extensive risk assessment of phthalates in the medical setting and found that neonates may be exposed to five times greater than the allowed daily tolerable intake. This finding led to the conclusion by the FDA that, "[c]hildren undergoing certain medical procedures may represent a population at increased risk for the effects of DEHP". [ 40 ]
In 2008, the Danish Environmental Protection Agency (EPA) found a variety of phthalates in erasers and warned of health risks when children regularly suck and chew on them. The European Commission Scientific Committee on Health and Environmental Risks (SCHER), however, considers that, even in the case when children bite off pieces from erasers and swallow them, it is unlikely that this exposure leads to health consequences. [ 41 ]
In 2008, the United States National Research Council recommended that the cumulative effects of phthalates and other antiandrogens be investigated. It criticized as too restrictive the U.S. EPA guidances stipulating that, when examining cumulative effects, the chemicals examined should have similar mechanisms of action or similar structures. It recommended instead that the effects of chemicals causing similar adverse outcomes should be examined cumulatively. [ 42 ] Thus, the effect of phthalates should be examined together with other antiandrogens, which otherwise might have been excluded because their mechanisms or structures differ. [ citation needed ]
Phthalates are found in food, [ 43 ] especially fast food items. In one analysis of fast foods, the phthalate DnBP was detected in 81 percent of the samples and DEHP in 70 percent, while diethylhexyl terephthalate (DEHT), the main alternative to DEHP, was detected in 86 percent. [ 44 ] A 2024 study by Consumer Reports found phthalates in all but one of the grocery store products and fast foods they tested. [ 45 ]
Diet is believed to be the main source of DEHP and other phthalates in the general population. Fatty foods such as milk, butter, and meats are a major source. Studies show that exposure to phthalates is greater from ingestion of certain foods than via water bottles, which are often the first thing associated with plastic chemicals. [ 46 ] Low-molecular-weight phthalates such as DEP, DBP, and BBzP may be dermally absorbed. Inhalational exposure is also significant with the more volatile phthalates. [ 38 ] PVC tubing, vinyl gloves used in food handling, and food packaging may serve as potential sources of phthalate contamination in fast food. [ 47 ]
One study, conducted between 2003 and 2010 and analysing data from 9,000 individuals, found that those who reported having eaten at a fast food restaurant had much higher levels of two separate phthalates (DEHP and DiNP) in their urine samples. Even small amounts of fast food were associated with higher phthalate levels. "People who reported eating only a little fast food had DEHP levels that were 15.5 percent higher and DiNP levels that were 25 percent higher than those who said they had eaten none. For people who reported eating a sizable amount, the increase was 24 percent and 39 percent, respectively." [ 48 ] Phthalates have a short half-life of less than five hours, so their widespread presence likely indicates continuous exposure rather than long-term accumulation in the body. [ 49 ]
Outdoor air concentrations of phthalates are higher in urban and suburban areas than in rural and remote areas. [ 50 ] Phthalates also pose no acute toxicity. [ 21 ]
Common plasticizers such as DEHP are only weakly volatile. Higher air temperatures result in higher concentrations of phthalates in the air. PVC flooring leads to higher concentrations of BBP and DEHP, which are more prevalent in dust. [ 50 ] A 2012 Swedish study of children found that phthalates from PVC flooring were taken up into their bodies, showing that children can ingest phthalates not only from food but also by breathing and through the skin. [ 51 ]
Various plants and microorganisms produce small amounts of phthalate esters, the so-called endogenous phthalates. [ 52 ] [ 53 ] Biosynthesis is believed to involve a modified shikimate pathway. [ 54 ] [ 55 ] The extent of this natural production is not fully known, but it may create a background of phthalate pollution.
Phthalates do not persist due to rapid biodegradation , photodegradation , and anaerobic degradation . [ 56 ] [ failed verification – see discussion ]
Phthalates are under research as a class of possible endocrine disruptors , substances that may interfere with normal hormonal responses in varied environmental conditions. [ 57 ] [ 58 ] [ 59 ] The concern has sparked demands to ban or restrict the use of phthalates in baby toys. [ 60 ]
A 2024 review indicated that exposure of mothers to environmental phthalates may have adverse pregnancy outcomes, such as a higher miscarriage rate and lower birth weights. [ 57 ] Another review showed small reductions in lung function in adolescents and children who had been exposed to phthalates. [ 61 ]
A 2017 review indicated ways to avoid exposure to phthalates: [ 62 ] (1) eating a balanced diet to avoid ingesting too many endocrine disruptors from a single source, (2) eliminating canned or packaged food in order to limit ingestion of DEHP phthalates leached from plastics, and (3) eliminating use of any personal product such as moisturizer, perfume, or cosmetics that contain phthalates. [ 62 ] Exposure to phthalates may increase the risk of asthma . [ 63 ]
A 2018 study indicated that exposure to phthalates during developmental stages in childhood may negatively affect adipose tissue function and metabolic homeostasis, possibly increasing the risk of obesity. [ 64 ]
The governments of Australia, New Zealand, Canada, the US, and the state of California have determined that many phthalates are not harmful to human health or the environment in amounts typically found, and therefore are legally unregulated. [ 65 ] [ 66 ] [ 67 ] [ 68 ] The focus for regulation in these jurisdictions has been mainly on bis(2-ethylhexyl) phthalate (DEHP), which is generally regarded as a carcinogenic toxin requiring regulation. [ 66 ] [ 67 ] [ 68 ] [ 69 ]
The European Chemicals Agency (European Union, EU) regards ortho-phthalates, such as DEHP, dibutyl phthalate, diisobutyl phthalate, and benzyl butyl phthalate as potentially harmful to fertility, unborn babies, and the endocrine system . [ 70 ] The EU also regulates some phthalates to protect the environment. [ 70 ]
A 2017 survey of foods and packaging in Australia and New Zealand led to recognition of DEHP and diisononyl phthalate as among possible contaminants posing a risk to human health, resulting in several regulations on these phthalates in both countries. [ 65 ] Australia has a permanent ban on certain children's products containing DEHP, which is considered poisonous if products containing it are placed in the mouths of children up to three years old. [ 69 ]
In 1994, a Health Canada assessment found that DEHP and another phthalate product, B79P, were harmful to human health. The Canadian federal government responded by banning their use in cosmetics and restricting their use in other applications, such as soft toys and child-care products. [ 71 ] In 1999, DEHP was put on the national List of Toxic Substances, under the Canadian Environmental Protection Act, 1999 , and in 2021, it was deemed a risk to the environment. [ 66 ] [ 72 ] It is on the List of Ingredients that are Prohibited for Use in Cosmetic Products. [ 72 ]
Twenty of the 28 phthalate substances under Canada's national screening programs are considered possible risks to human health or the environment. [ 66 ] As of 2021, regulations to protect the environment against DEHP and B79P have not been enacted. [ 66 ]
Some phthalates have been restricted in the European Union for use in children's toys since 1999. [ 70 ] [ 73 ] DEHP, BBP , and DBP are restricted for all toys; DINP, DIDP, and DNOP are restricted only in toys that can be taken into the mouth. The restriction states that these phthalates may not exceed 0.1% by mass of the plasticized part of the toy. [ citation needed ]
Generally, the high molecular weight phthalates DINP, DIDP, and DPHP have been registered under REACH and have demonstrated their safety for use in current applications. They are not classified for any health or environmental effects.
The low molecular weight products BBP, DEHP, DIBP, and DBP were added to the Candidate List of Substances for Authorisation under REACH in 2008–9, and added to the Authorisation List, Annex XIV, in 2012. [ 3 ] This means that from February 2015 they are not allowed to be produced in the EU unless authorisation has been granted for a specific use, although they may still be imported in consumer products. [ 70 ] [ 74 ] An Annex XV dossier, which could ban the import of products containing these chemicals, was being prepared jointly by the ECHA and Danish authorities and was expected to be submitted by April 2016. [ 75 ]
Since 2021, the European Waste Framework Directive requires manufacturers, importers and distributors of products containing phthalates on the REACH Candidate List to notify the European Chemicals Agency . [ 70 ]
In November 2021, the European Commission formally recognised the endocrine-disrupting properties of DEHP and other phthalates in its REACH listings, meaning that companies must apply for REACH authorization for some uses that were previously exempted, including in food packaging, medical devices, and drug packaging. [ 70 ]
During August 2008, the United States Congress passed and President George W. Bush signed the Consumer Product Safety Improvement Act (CPSIA), which became public law 110–314. [ 80 ] Section 108 of that law specified that as of February 10, 2009, "it shall be unlawful for any person to manufacture for sale, offer for sale, distribute in commerce, or import into the United States any children's toy or child care article that contains concentrations of more than 0.1 percent of" DEHP , DBP , or BBP and "it shall be unlawful for any person to manufacture for sale, offer for sale, distribute in commerce, or import into the United States any children's toy that can be placed in a child's mouth or child care article that contains concentrations of more than 0.1 percent of" DINP , DIDP , and DnOP. Furthermore, the law requires the establishment of a permanent review board to determine the safety of other phthalates. Prior to this legislation, the Consumer Product Safety Commission had determined that voluntary withdrawals of DEHP and diisononyl phthalate (DINP) from teethers, pacifiers, and rattles had eliminated the risk to children, and advised against enacting a phthalate ban. [ 81 ]
In 1986, California voters approved an initiative to address concerns about exposure to toxic chemicals. That initiative became the Safe Drinking Water and Toxic Enforcement Act of 1986, also called Proposition 65. [ 82 ] In December 2013, DINP was listed as a chemical "known to the State of California to cause cancer". [ 83 ] Beginning in December 2014, companies with ten or more employees manufacturing, distributing or selling product(s) containing DINP were required to provide a clear and reasonable warning for that product. The California Office of Environmental Health Hazard Assessment, charged with maintaining the Proposition 65 list and enforcing its provisions, has implemented a "No Significant Risk Level" of 146 μg/day for DINP. [ 67 ]
The CDC provided a 2011 public health statement on diethyl phthalate describing regulations and guidelines concerning its possible harmful health effects. [ 68 ] Under laws for Superfund sites , the Environmental Protection Agency named diethyl phthalate as a hazardous substance. The Occupational Safety and Health Administration stated that the maximum amount of diethyl phthalate allowed in workroom air during an 8-hour workday, 40-hour workweek, is 5 milligrams per cubic meter. [ 68 ]
Phthalates are used in some, but not all, PVC formulations, and there are no specific labeling requirements for phthalates. PVC plastics are typically used for various containers and hard packaging, medical tubing and bags, and are labeled "Type 3". However, the presence of phthalates rather than other plasticizers is not marked on PVC items. Only unplasticized PVC (uPVC), which is mainly used as a hard construction material, has no plasticizers. If a more accurate test is needed, chemical analysis, for example by gas chromatography or liquid chromatography , can establish the presence of phthalates.
Polyethylene terephthalate (PET, PETE, Terylene, Dacron) is the main substance used to package bottled water and many sodas. Products containing PETE are labeled "Type 1" (with a "1" in the recycle triangle). Although the word "phthalate" appears in the name, PETE does not use phthalates as plasticizers. The terephthalate polymer PETE and the phthalate ester plasticizers are chemically different substances. [ 85 ] Nevertheless, many studies have found phthalates such as DEHP in bottled water and soda. [ 86 ] One hypothesis is that these may have been introduced during plastic recycling . [ 86 ] | https://en.wikipedia.org/wiki/Phthalates |
"Phycisphaeria" Oren, Parte & Garrity 2016
Phycisphaerae is a class of aquatic bacteria . They reproduce by budding and are found in samples of algae in marine water . [ 1 ] Organisms in this group are spherical [ 1 ] and have a holdfast at the nonreproductive end, located at the tip of a thin cylindrical extension from the cell body called the stalk, which helps them attach to each other during budding.
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) [ 2 ] and National Center for Biotechnology Information (NCBI) [ 3 ]
Anaerohalosphaera
Limihaloglobus
Sedimentisphaera
Tepidisphaera Kovaleva et al. 2015
Mucisphaera
Poriferisphaera
Algisphaera
Phycisphaera
Anaerohalosphaera Pradel et al. 2020
Limihaloglobus Pradel et al. 2020
Sedimentisphaera Spring et al. 2018
" Humisphaera " Dedysh et al. 2021
Mucisphaera Kallscheuer et al. 2022
Poriferisphaera Kallscheuer et al. 2021
Algisphaera Yoon, Jang & Kasai 2014
Phycisphaera Fukunaga et al. 2010
This bacteria -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Phycisphaerae |
Phycobiliproteins are water-soluble proteins present in cyanobacteria and certain algae ( rhodophytes , cryptomonads , glaucocystophytes ). They capture light energy, which is then passed on to chlorophylls during photosynthesis . Phycobiliproteins are formed of a complex between proteins and covalently bound phycobilins that act as chromophores (the light-capturing part). [ 1 ] They are the most important constituents of the phycobilisomes .
Many applications and instruments were developed specifically for R-phycoerythrin. It is commonly used in immunoassays such as FACS, flow cytometry, and multimer/tetramer applications.
Structural Characteristics
R-phycoerythrin is also produced by certain red algae. The protein is made up of at least three different subunits and varies according to the species of algae that produces it. The subunit structure of the most common R-PE is (αβ)₆γ. The α subunit has two phycoerythrobilins (PEB), the β subunit has 2 or 3 PEBs and one phycourobilin (PUB), while the different gamma subunits are reported to have 3 PEB and 2 PUB (γ₁) or 1 or 2 PEB and 1 PUB (γ₂).
(Phycobiliprotein overview: absorption maximum 563 nm; molar extinction coefficient 2.33 × 10⁶.)
Because of its high quantum yield, B-PE is considered the world's brightest fluorophore. It is compatible with commonly available lasers and gives exceptional results in flow cytometry, Luminex and immunofluorescent staining. B-PE is also less "sticky" than common synthetic fluorophores and therefore gives less background interference.
Structural Characteristics
B-phycoerythrin (B-PE) is produced by certain red algae such as Rhodella sp. The specific spectral characteristics are a result of the composition of its subunits. B-PE is composed of at least three subunits and sometimes more. The chromophore distribution is as follows: α subunit with 2 phycoerythrobilins (PEB), β subunit with 3 PEB, and the γ subunit with 2 PEB and 2 phycourobilins (PUB). The quaternary structure is reported as (αβ)₆γ.
Many applications and instruments were developed specifically for allophycocyanin. It is commonly used in immunoassays such as flow cytometry and high-throughput screening. It is also a common acceptor dye for FRET assays.
Structural Characteristics
Allophycocyanin can be isolated from various species of red or blue-green algae, each producing slightly different forms of the molecule. It is composed of two different subunits (α and β) in which each subunit has one phycocyanobilin (PCB) chromophore. The subunit structure for APC has been determined as (αβ)₃.
Phycobiliproteins demonstrate superior fluorescent properties compared to small organic fluorophores, especially when high sensitivity or multicolor detection is required:
Phycobiliproteins allow very high detection sensitivity and can be used in various fluorescence-based techniques such as fluorimetric microplate assays, [ 7 ] [ 8 ] [ 9 ] FISH, and multicolor detection.
They are under development for use in artificial photosynthesis , limited by the relatively low conversion efficiency of 4-5%. [ 10 ] | https://en.wikipedia.org/wiki/Phycobiliprotein |
Phycobilisomes are light-harvesting antennae that transmit the energy of harvested photons to photosystem II and photosystem I in cyanobacteria and in the chloroplasts of red algae and glaucophytes . [ 1 ] [ 2 ] [ 3 ] They were lost during the evolution of the chloroplasts of green algae and plants . [ 3 ]
Phycobilisomes are protein complexes (up to 600 polypeptides ) anchored to thylakoid membranes. They are made of stacks of chromophorylated proteins, the phycobiliproteins , and their associated linker polypeptides. Each phycobilisome consists of a core made of allophycocyanin , from which several outwardly oriented rods radiate, made of stacked disks of phycocyanin and (if present) phycoerythrin (s) or phycoerythrocyanin . The spectral properties of phycobiliproteins are mainly dictated by their prosthetic groups , which are linear tetrapyrroles known as phycobilins including phycocyanobilin , phycoerythrobilin , phycourobilin and phycobiliviolin . The spectral properties of a given phycobilin are influenced by its protein environment. [ 4 ]
Each phycobiliprotein has a specific absorption and fluorescence emission maximum in the visible range of light. Therefore, their presence and the particular arrangement within the phycobilisomes allow absorption and unidirectional transfer of light energy to chlorophyll a of the photosystem II. In this way, the cells take advantage of the available wavelengths of light (in the 500–650 nm range), which are inaccessible to chlorophyll, and utilize their energy for photosynthesis. This is particularly advantageous deeper in the water column , where light with longer wavelengths is less transmitted and therefore less available directly to chlorophyll.
The geometrical arrangement of a phycobilisome is an elegant antenna-like assembly, resulting in 95% efficiency of energy transfer . [ 5 ]
There are many variations to the general phycobilisome structure. Their shape can be hemidiscoidal (in cyanobacteria) or hemiellipsoidal (in red algae). Species lacking phycoerythrin have at least two disks of phycocyanin per rod, which is sufficient for maximum photosynthesis. [ 6 ]
The phycobiliproteins themselves show little sequence evolution due to their highly constrained function (absorption and transfer of specific wavelengths). [ citation needed ] In some species of cyanobacteria, when both phycocyanin and phycoerythrin are present, the phycobilisome can undergo significant restructuring in response to light color. In green light the distal portions of the rods are made of red colored phycoerythrin, which absorbs green light better. In red light, this is replaced by blue colored phycocyanin, which absorbs red light better. This reversible process is known as complementary chromatic adaptation. The phycobilisome is a component of the photosynthetic system of cyanobacteria, acting as a particle to which various structures (e.g. the thylakoid membrane) are linked. [ citation needed ]
Phycobilisomes can be used in prompt fluorescence , [ 7 ] [ 8 ] flow cytometry , [ 9 ] Western blotting and protein microarrays . Some phycobilisomes have an absorption and emission profile similar to Cy5 , allowing them to be used in many of the same applications. They can also be up to 200 times brighter and with a larger Stokes shift , providing a larger signal per binding event. This property allows the detection of low-level target molecules [ 9 ] or rare events. | https://en.wikipedia.org/wiki/Phycobilisome |
Phycoerythrobilin is a red phycobilin , i.e. an open tetrapyrrole chromophore [ 1 ] found in cyanobacteria and in the chloroplasts of red algae , glaucophytes and some cryptomonads . Phycoerythrobilin is present in the phycobiliprotein phycoerythrin , of which it is the terminal acceptor of energy. The amount of phycoerythrobilin in phycoerythrins varies considerably, depending on the organism. In some Rhodophytes and oceanic cyanobacteria, phycoerythrobilin is also present in the phycocyanin , then termed R-phycocyanin. Like all phycobilins, phycoerythrobilin is covalently linked to these phycobiliproteins by a thioether bond.
This photosynthesis article is a stub . You can help Wikipedia by expanding it .
This dye -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Phycoerythrobilin |
Phycoerythrocyanin is a kind of phycobiliprotein , a magenta chromoprotein involved in the photosynthesis of some Cyanobacteria . [ 1 ] This chromoprotein consists of alpha- and beta-subunits, generally aggregated as a hexamer. Alpha-phycoerythrocyanin contains phycoviolobilin, a violet bilin, covalently attached at Cys-84, and beta-phycoerythrocyanin contains two phycocyanobilins, a blue bilin, covalently attached at Cys-84 and Cys-155, respectively. Phycoerythrocyanin is similar to phycocyanin , an important component of the light-harvesting complex ( phycobilisome ) of cyanobacteria and red algae.
While only phycocyanobilin is covalently bound to phycocyanin , leading to an absorption maximum around 620 nm, phycoerythrocyanin containing both phycoviolobilin and phycocyanobilin has an absorption maximum around 575 nm. As both phycoerythrocyanin and phycocyanin have phycocyanobilin acting as the terminal acceptor of energy transfer, they fluoresce around 635 nm, which is absorbed by allophycocyanins that have maximal absorption around 650 nm and maximal fluorescence around 670 nm. Finally, the light energy absorbed by phycoerythrocyanin is transferred to the photosynthetic reaction center.
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Phycoerythrocyanin |
Phycotechnology refers to the technological applications of algae , both microalgae and macroalgae . [ 1 ]
Currently, microalgae are being exploited for environmental protection: species of Chlorella , Chlamydomonas , and Scenedesmus carry out selective uptake, accumulation, and biodegradation of pollutants and thus help in remediation. They are used in the biological reclamation of sewage since they can immobilize heavy metals from aquatic systems.
Microalgae can be used as biocontrol agents, such as 'Insect', a commercial bio-insecticide sold in the USA that is prepared from the dead biomass of diatom frustules .
Algae are an excellent feedstock for green fuel as they are used for the production of biodiesel , bioethanol , biogasoline , biomethanol , biobutanol , and recently biohydrogen .
Microalgae are of significant use in healthcare. Chlorellin from the green microalga Chlorella is an effective antibiotic against Gram-positive and Gram-negative bacteria .
Algae are extremely useful in various fields. An example of natural phycotechnology is the conversion of atmospheric nitrogen into bioaccessible nitrogenous compounds by diazotrophic cyanobacteria (blue-green algae). Species of cyanobacteria like Nostoc , Arthrospira ( Spirulina ) and Aphanizomenon are used as food and feed due to their easy digestibility and nutrient content. Species of Dunaliella provide products like glycerol , carotenoids , and proteins . Algal-produced proteins can be biofactories for the production of therapeutic substances. [ 2 ] | https://en.wikipedia.org/wiki/Phycotechnology |
Phyletic gradualism is a model of evolution which theorizes that most speciation is slow, uniform and gradual. [ 1 ] When evolution occurs in this mode, it is usually by the steady transformation of a whole species into a new one (through a process called anagenesis ). In this view no clear line of demarcation exists between an ancestral species and a descendant species, unless splitting occurs. The theory is contrasted with punctuated equilibrium .
The word phyletic derives from the Greek φυλετικός phūletikos , which conveys the meaning of a line of descent. [ 2 ] Phyletic gradualism contrasts with the theory of punctuated equilibrium , which proposes that most evolution occurs isolated in rare episodes of rapid evolution, when a single species splits into two distinct species, followed by a long period of stasis or non-change. These models both contrast with variable-speed evolution ("variable speedism"), which maintains that different species evolve at different rates, and that there is no reason to stress one rate of change over another. [ 3 ] [ 4 ]
Evolutionary biologist Richard Dawkins argues that constant-rate gradualism is not advocated in the professional literature, so the term serves only as a straw man for punctuated-equilibrium advocates. In his book The Blind Watchmaker , Dawkins observes that Charles Darwin himself was not a constant-rate gradualist, as suggested by Niles Eldredge and Stephen Jay Gould . In the first edition of On the Origin of Species , Darwin stated that "Species of different genera and classes have not changed at the same rate, or in the same degree. In the oldest tertiary beds a few living shells may still be found in the midst of a multitude of extinct forms... The Silurian Lingula differs but little from the living species of this genus". [ 5 ]
Lingula is among the few brachiopods surviving today but also known from fossils over 500 million years old. In the fifth edition of The Origin of Species , Darwin wrote that "the periods during which species have undergone modification, though long as measured in years, have probably been short in comparison with the periods during which they retain the same form". [ 6 ] | https://en.wikipedia.org/wiki/Phyletic_gradualism |
Phyllocladane is a tricyclic diterpane which is generally found in gymnosperm resins. [ 1 ] It has the formula C₂₀H₃₄ and a molecular weight of 274.4840. [ 2 ] As a biomarker , it can be used to learn about the gymnosperm input into a hydrocarbon deposit, and about the age of the deposit in general. It indicates a terrigenous origin of the source rock. Diterpanes such as phyllocladane are found in source rocks as early as the middle and late Devonian periods, which indicates any rock containing them must be no more than approximately 360 Ma old. Phyllocladane is commonly found in lignite , and like other resinites derived from gymnosperms, is naturally enriched in ¹³C. This enrichment is a result of the enzymatic pathways used to synthesize the compound. [ 1 ]
The compound can be identified by GC-MS . A peak of m/z 123 is indicative of tricyclic diterpenoids in general, and phyllocladane in particular is further characterized by strong peaks at m/z 231 and m/z 189. Presence of phyllocladane and its relative abundance to other tricyclic diterpanes can be used to differentiate between various oil fields. [ 1 ]
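As an illustration only, the following sketch screens a centroided GC-MS scan for the three diagnostic fragments named above; the peak-list format, mass tolerance, and relative-intensity threshold are assumptions for the example, not part of any published protocol.

```python
# Hypothetical screen for the phyllocladane-diagnostic fragments m/z 123,
# 189 and 231 in a single GC-MS scan, given as (m/z, intensity) pairs.

DIAGNOSTIC_MZ = (123.0, 189.0, 231.0)

def has_diagnostic_peaks(spectrum, min_rel_intensity=0.10, tol=0.5):
    """Return True if every diagnostic fragment appears above a fraction
    (min_rel_intensity) of the base peak, within a mass tolerance tol."""
    if not spectrum:
        return False
    base = max(intensity for _, intensity in spectrum)
    found = set()
    for mz, intensity in spectrum:
        for target in DIAGNOSTIC_MZ:
            if abs(mz - target) <= tol and intensity >= min_rel_intensity * base:
                found.add(target)
    return found == set(DIAGNOSTIC_MZ)

# Toy scan in which all three fragments are prominent:
scan = [(57.0, 1000.0), (123.1, 800.0), (189.1, 450.0), (231.2, 300.0), (274.3, 120.0)]
print(has_diagnostic_peaks(scan))  # True
```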
This article about an organic compound is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Phyllocladane |
In microbiology , the phyllosphere is the total above-ground surface of a plant when viewed as a habitat for microorganisms . [ 1 ] [ 2 ] [ 3 ] The phyllosphere can be further subdivided into the caulosphere (stems), phylloplane (leaves), anthosphere (flowers), and carposphere (fruits). The below-ground microbial habitats (i.e. the thin-volume of soil surrounding root or subterranean stem surfaces) are referred to as the rhizosphere and laimosphere .
Most plants host diverse communities of microorganisms including bacteria , fungi , archaea , and protists . Some are beneficial to the plant, while others function as plant pathogens and may damage the host plant or even kill it.
The leaf surface, or phyllosphere, harbours a microbiome comprising diverse communities of bacteria , archaea , fungi , algae and viruses . [ 4 ] [ 5 ] Microbial colonizers are subjected to diurnal and seasonal fluctuations of heat, moisture, and radiation. In addition, these environmental elements affect plant physiology (such as photosynthesis, respiration, water uptake etc.) and indirectly influence microbiome composition. [ 6 ] Rain and wind also cause temporal variation to the phyllosphere microbiome. [ 7 ]
The phyllosphere includes the total aerial (above-ground) surface of a plant, and as such includes the surface of the stem, flowers and fruit, but most particularly the leaf surfaces. Compared with the rhizosphere and the endosphere the phyllosphere is nutrient poor and its environment more dynamic.
Interactions between plants and their associated microorganisms in many of these microbiomes can play pivotal roles in host plant health, function, and evolution. [ 8 ] Interactions between the host plant and phyllosphere bacteria have the potential to drive various aspects of host plant physiology. [ 9 ] [ 2 ] [ 10 ] However, as of 2020 knowledge of these bacterial associations in the phyllosphere remains relatively modest, and there is a need to advance fundamental knowledge of phyllosphere microbiome dynamics. [ 11 ] [ 12 ]
The assembly of the phyllosphere microbiome, which can be strictly defined as epiphytic bacterial communities on the leaf surface, can be shaped by the microbial communities present in the surrounding environment (i.e., stochastic colonisation ) and the host plant (i.e., biotic selection). [ 4 ] [ 13 ] [ 12 ] However, although the leaf surface is generally considered a discrete microbial habitat, [ 14 ] [ 15 ] there is no consensus on the dominant driver of community assembly across phyllosphere microbiomes. For example, host-specific bacterial communities have been reported in the phyllosphere of co-occurring plant species, suggesting a dominant role of host selection. [ 15 ] [ 16 ] [ 17 ] [ 12 ]
Conversely, microbiomes of the surrounding environment have also been reported to be the primary determinant of phyllosphere community composition. [ 14 ] [ 18 ] [ 19 ] [ 20 ] As a result, the processes that drive phyllosphere community assembly are not well understood but unlikely to be universal across plant species. However, the existing evidence does indicate that phyllosphere microbiomes exhibiting host-specific associations are more likely to interact with the host than those primarily recruited from the surrounding environment. [ 9 ] [ 21 ] [ 22 ] [ 23 ] [ 12 ]
Overall, there remains high species richness in phyllosphere communities. Fungal communities are highly variable in the phyllosphere of temperate regions and are more diverse than in tropical regions. [ 25 ] There can be up to 10⁷ microbes per square centimetre present on the leaf surfaces of plants, and the bacterial population of the phyllosphere on a global scale is estimated to be 10²⁶ cells. [ 26 ] The population size of the fungal phyllosphere is likely to be smaller. [ 27 ]
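For scale, the global figure is consistent with a back-of-envelope calculation, assuming (purely for illustration; the article states no leaf-area figure) a global leaf surface area on the order of 10⁹ km²:

```python
# Order-of-magnitude check: 1e7 cells/cm^2 times an assumed 1e9 km^2 of
# global leaf surface gives the quoted 1e26 cells.
cells_per_cm2 = 1e7
leaf_area_km2 = 1e9            # illustrative assumption, not from the article
cm2_per_km2 = (1e5) ** 2       # 1 km = 1e5 cm, so 1 km^2 = 1e10 cm^2
print(f"{cells_per_cm2 * leaf_area_km2 * cm2_per_km2:.0e}")  # 1e+26
```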
Phyllosphere microbes from different plants appear to be somewhat similar at high taxonomic levels, but at lower taxonomic levels there remain significant differences. This indicates microorganisms may need finely tuned metabolic adjustment to survive in the phyllosphere environment. [ 25 ] Pseudomonadota seem to be the dominant colonizers, with Bacteroidota and Actinomycetota also predominant in phyllospheres. [ 28 ] Although there are similarities between the rhizosphere and soil microbial communities, very little similarity has been found between phyllosphere communities and microorganisms floating in open air ( aeroplankton ). [ 29 ] [ 6 ]
The search for a core microbiome in host-associated microbial communities is a useful first step in trying to understand the interactions that may be occurring between a host and its microbiome. [ 30 ] [ 31 ] The prevailing core microbiome concept is built on the notion that the persistence of a taxon across the spatiotemporal boundaries of an ecological niche is directly reflective of its functional importance within the niche it occupies; it therefore provides a framework for identifying functionally critical microorganisms that consistently associate with a host species. [ 30 ] [ 32 ] [ 33 ] [ 12 ]
Divergent definitions of “core microbiome” have arisen across scientific literature with researchers variably identifying “core taxa” as those persistent across distinct host microhabitats [ 35 ] [ 36 ] and even different species. [ 17 ] [ 21 ] Given the functional divergence of microorganisms across different host species [ 17 ] and microhabitats, [ 37 ] defining core taxa sensu stricto as those persistent across broad geographic distances within tissue- and species-specific host microbiomes, represents the most biologically and ecologically appropriate application of this conceptual framework. [ 38 ] [ 12 ] Tissue- and species-specific core microbiomes across host populations separated by broad geographical distances have not been widely reported for the phyllosphere using the stringent definition established by Ruinen. [ 2 ] [ 12 ]
The flowering tea tree commonly known as manuka is indigenous to New Zealand. [ 39 ] Manuka honey , produced from the nectar of manuka flowers, is known for its non-peroxide antibacterial properties. [ 40 ] [ 41 ] These non-peroxide antibacterial properties have been principally linked to the accumulation of the three-carbon sugar dihydroxyacetone (DHA) in the nectar of the manuka flower, which undergoes a chemical conversion to methylglyoxal (MGO) in mature honey. [ 42 ] [ 43 ] [ 44 ] However, the concentration of DHA in the nectar of manuka flowers is notoriously variable, and the antimicrobial efficacy of manuka honey consequently varies from region to region and from year to year. [ 45 ] [ 46 ] [ 47 ] Despite extensive research efforts, no reliable correlation has been identified between DHA production and climatic, [ 48 ] edaphic , [ 49 ] or host genetic factors. [ 50 ] [ 12 ]
Microorganisms have been studied in the manuka rhizosphere and endosphere. [ 51 ] [ 52 ] [ 53 ] Earlier studies primarily focussed on fungi, and a 2016 study provided the first investigation of endophytic bacterial communities from three geographically and environmentally distinct manuka populations using fingerprinting techniques and revealed tissue-specific core endomicrobiomes. [ 54 ] [ 12 ] A 2020 study identified a habitat-specific and relatively abundant core microbiome in the manuka phyllosphere, which was persistent across all samples. In contrast, non-core phyllosphere microorganisms exhibited significant variation across individual host trees and populations that was strongly driven by environmental and spatial factors. The results demonstrated the existence of a dominant and ubiquitous core microbiome in the phyllosphere of manuka. [ 12 ] | https://en.wikipedia.org/wiki/Phyllosphere |
Phylogenesis (from Greek φῦλον phylon "tribe" + γένεσις genesis "origin") is the biological process by which a taxon (of any rank ) appears. The science that studies these processes is called phylogenetics . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ]
These terms may be confused with phylogenetics , the application of molecular-analytical methods (i.e. molecular biology and genomics ) in the explanation and study of phylogeny.
Phylogenetic relationships are discovered through phylogenetic inference methods that evaluate observed heritable traits, such as DNA sequences or overall morpho-anatomical , ethological , and other characteristics.
The result of these analyses is a phylogeny (also known as a phylogenetic tree ) – a diagrammatic hypothesis about the history of the evolutionary relationships of a group of organisms. [ 6 ] Phylogenetic analyses have become central to understanding biodiversity, evolution, ecological genetics and genomes .
Cladistics ( Greek κλάδος , klados , i.e. "branch") [ 7 ] is an approach to biological classification in which organisms are categorized based on shared, derived characteristics that can be traced to a group's most recent common ancestor and are not present in more distant ancestors. Therefore, members of a group are assumed to share a common history and are considered to be closely related. [ 8 ] [ 9 ] [ 10 ] [ 11 ]
The cladistic method interprets each character state transformation implied by the distribution of shared character states among taxa (or other terminals) as a potential piece of evidence for grouping. The outcome of a cladistic analysis is a cladogram – a tree -shaped diagram ( dendrogram ) [ 12 ] that is interpreted to represent the best hypothesis of phylogenetic relationships.
Although traditionally such cladograms were generated largely on the basis of morphological characteristics calculated by hand, genetic sequencing data and computational phylogenetics are now commonly used and the parsimony criterion has been abandoned by many phylogeneticists in favor of more "sophisticated" (but less parsimonious) evolutionary models of character state transformation.
Taxonomy ( Greek language τάξις , taxis = 'order', 'arrangement' + νόμος , nomos = 'law' or 'science') is the classification, identification and naming of organisms. It is usually richly informed by phylogenetics, but remains a methodologically and logically distinct discipline. [ 13 ] The degree to which taxonomies depend on phylogenies (or classification depends on evolutionary development) differs depending on the school of taxonomy: phenetics ignores phylogeny altogether, trying to represent the similarity between organisms instead; cladistics (phylogenetic systematics) tries to reproduce phylogeny in its classification.
An extension of phylogenesis to the cellular level by Jean-Jacques Kupiec [ 14 ] [ 15 ] is known as Ontophylogenesis . | https://en.wikipedia.org/wiki/Phylogenesis |
Phylogenetic bracketing is a method of inference used in biological sciences . It is used to infer the likelihood of unknown traits in organisms based on their position in a phylogenetic tree. One of the main applications of phylogenetic bracketing is on extinct organisms, known only from fossils, going back to the last universal common ancestor (LUCA). The method is often used for understanding traits that do not fossilize well, such as soft tissue anatomy, physiology and behaviour . By considering the closest and second-closest well-known (usually extant ) organisms, traits can be asserted with a fair degree of certainty, though the method is extremely sensitive to problems from convergent evolution .
Extant Phylogenetic Bracketing requires that the species forming the brackets be extant. More general forms of phylogenetic bracketing do not require this and may use a mix of extant and extinct taxa to form the bracket. These more generalized forms of phylogenetic bracketing have the advantage in that they can be applied to a wider array of phylogenetic cases. However, since these forms of bracketing are also more generalized and may rely on inferring traits in extinct animals, they also offer lower explanatory power compared to the EPB.
This is a popular form of phylogenetic bracketing first introduced by Witmer in 1995. [ 1 ] It works by comparing an extinct taxon to its nearest living relatives. [ 1 ] [ 2 ] [ 3 ] For example, Tyrannosaurus , a theropod dinosaur , is bracketed by birds and crocodiles . A feature found in both birds and crocodiles would likely be present in Tyrannosaurus , such as the capability to lay an amniotic egg , whereas a feature both birds and crocodiles lack, such as hair , would probably not be present in Tyrannosaurus . Sometimes this approach is used for the reconstruction of ecological traits as well. [ 4 ]
The extant phylogenetic bracket approach allows researchers to infer traits in extinct animals with varying levels of confidence. This is referred to as the levels of inference. [ 1 ] There are three levels of inference, with each higher level indicating less confidence for the inference. [ 1 ]
Level 1 — The inference of a character that leaves a bony signature on the skeleton in both members of the extant sister groups. Example: Saying that Tyrannosaurus rex had an eyeball is a level 1 inference because both extant members of the groups encompassing Tyrannosaurus rex have eyeballs, and eyeball sockets (orbital excavations) in the skull, the homology of which is well established, and the fossils of Tyrannosaurus rex skulls have similar morphology. [ 1 ]
Level 2 — The inference of a character that leaves a signature on the skeleton of only one of the extant sister groups. [ 1 ] For example, saying that Tyrannosaurus rex had air sacs running through its skeleton is a level 2 inference as birds are the only extant sister group to Tyrannosaurus rex to show such air sacs. However the underlying pneumatic fossae, air sacs, in the bones of extant birds are remarkably similar to the cavities seen in the fossil vertebrae of Tyrannosaurus rex . The high degree of similarity between the pneumatic fossae in Tyrannosaurus rex and extant birds makes this a fairly strong inference, yet not as strong as a level 1 inference.
Level 3 — The inference of a character that leaves a bony signature on the skeleton but is not present in either extant sister group to the taxon in question. [ 1 ] For example, saying that ceratopsian dinosaurs such as Triceratops horridus had horns in life would be a level 3 inference. Neither extant crocodylians, nor extant birds have horns today, but the osteological evidence for horns in ceratopsians is without question. Thus a level 3 inference receives no support from the extant phylogenetic bracket, but can still be used with confidence based on the merits of the fossil data itself.
The Extant Phylogenetic Bracket can be used to infer the presence of soft tissues even when those tissues do not interact with the skeleton. As before, there are three different levels of inference. These levels are designated as prime levels, and confidence decreases with each higher level. [ 1 ]
Level 1′ — The inference of a character that is shared by both extant sister groups, but does not leave behind a bony signature. [ 1 ] For example, saying that Tyrannosaurus rex had a four-chambered heart would be a level 1′ inference as both extant sister groups (Crocodylia and Aves) have four-chambered hearts, but this trait does not leave behind any bony evidence.
Level 2′ — The inference of a character that is found in only one sister group to the taxon in question and that does not leave behind any bony evidence. [ 1 ] For instance saying that Tyrannosaurus rex was warm-blooded would be a level 2′ inference as extant birds are warm-blooded but extant crocodylians are not. Further, since warm-bloodedness is a physiological trait rather than an anatomical one, it does not leave behind any bony signatures to indicate its presence.
Level 3′ — The inference of a character that is found in neither sister group to the taxon in question and that does not leave behind any bony signatures. [ 1 ] For example, saying that the large sauropod dinosaur Apatosaurus ajax gave birth to live young similar to mammals and many lizards [ 5 ] would be a level 3′ inference as neither crocodylians nor birds give birth to live young and these traits do not leave impressions on the skeleton.
In general the primes are always less confident than their underlying levels; however, the confidence between levels is less clear cut. For instance it is unclear if a level 1′ would be less confident than a level 2. The same would go for a level 2′ versus a level 3. [ 1 ]
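The decision procedure described above depends on just two observations: whether the trait leaves an osteological correlate, and how many of the two extant bracket groups show it. A minimal sketch of that mapping (my own formalisation for illustration, not code from Witmer 1995):

```python
def inference_level(bony_signature: bool, extant_groups_with_trait: int) -> str:
    """Map the two observations onto Witmer-style inference levels.
    extant_groups_with_trait counts how many of the two extant bracket
    groups (e.g. birds and crocodylians for Tyrannosaurus) show the trait."""
    if extant_groups_with_trait not in (0, 1, 2):
        raise ValueError("a bracket has exactly two extant sister groups")
    level = 3 - extant_groups_with_trait   # both groups -> 1, one -> 2, none -> 3
    prime = "" if bony_signature else "'"  # primed levels lack a bony correlate
    return f"Level {level}{prime}"

print(inference_level(True, 2))   # Level 1   (e.g. eyeballs)
print(inference_level(True, 1))   # Level 2   (e.g. skeletal air sacs)
print(inference_level(False, 2))  # Level 1'  (e.g. four-chambered heart)
print(inference_level(False, 0))  # Level 3'  (e.g. viviparity in Apatosaurus)
```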
The Late Cretaceous Kryptobaatar and the extant monotremes (families Tachyglossidae and Ornithorhynchidae) all sport extratarsal spurs on their hind feet. Greatly simplified, the phylogeny is as follows, with taxa known to have extratarsal spurs in bold: [ 6 ]
Kryptobaatar
Cimolomyidae
Eobaataridae
Ornithorhynchidae (platypus)
Tachyglossidae (echidnas)
Assuming that the Kryptobaatar and monotreme spurs are homologous, they were a feature of their mammalian last common ancestor, so we can tentatively conclude that they were present among the Early Cretaceous Eobaataridae—its descendants—as well.
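That argument can be formalised as a small parsimony computation. The sketch below runs a bottom-up Fitch pass over a toy version of the cladogram; the exact nesting of the listed taxa is assumed for illustration (the simplified phylogeny above does not fully specify it), and taxa of unknown state are coded with both states.

```python
# Hedged sketch: Fitch small parsimony on an assumed nesting of the taxa above.
tree = ((("Kryptobaatar", "Cimolomyidae"), "Eobaataridae"),
        ("Ornithorhynchidae", "Tachyglossidae"))

# Tip state sets; fossils whose spur state is unknown get both states.
spur = {"Kryptobaatar": {"present"},
        "Cimolomyidae": {"present", "absent"},
        "Eobaataridae": {"present", "absent"},
        "Ornithorhynchidae": {"present"},
        "Tachyglossidae": {"present"}}

def fitch(node):
    """Bottom-up Fitch pass returning the parsimonious state set at a node."""
    if isinstance(node, str):
        return spur[node]
    left, right = fitch(node[0]), fitch(node[1])
    return (left & right) or (left | right)

print(fitch(tree))  # {'present'}: spurs are inferred for the common ancestor,
                    # and hence tentatively for Eobaataridae as well
```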
A fragmentary fossil with a known phylogeny can be compared to more complete fossil specimens to give an idea about general build and habit. The body of labyrinthodonts can usually be inferred to be broad and squat with a sideways compressed tail, based on the shape of better-known labyrinthodont finds, even though only the skull is known for many taxa.
Phylogenetic bracketing is based on the notion of anatomical conservatism. The general body shape of an animal can be fairly constant through large groups, but not always.
The large theropod dinosaur Spinosaurus was until 2014 known only from fragmentary remains, mainly of the skull and vertebrae. It was assumed that the remaining skeleton would look more or less like that of related animals such as Baryonyx and Suchomimus , which sport a traditional theropod anatomy of long, strong hind legs and relatively small front legs. A 2014 find, however, included a set of hind legs. [ 7 ] The new reconstruction indicates that earlier Spinosaurus reconstructions were wrong, and that the animal was mainly aquatic and had relatively weak hind legs. It is possible it walked on all fours when on land, the only theropod to do so. [ 8 ] | https://en.wikipedia.org/wiki/Phylogenetic_bracketing |
The phylogenetic classification of bony fishes is a phylogenetic classification of bony fishes and is based on phylogenies inferred using molecular and genomic data for nearly 2000 fishes. [ 1 ] [ 2 ] [ 3 ] The first version was published in 2013 and resolved 66 orders. [ 2 ] The latest version (version 4) was published in 2017 and recognised 72 orders and 79 suborders. [ 3 ]
The following cladograms show the phylogeny of the Osteichthyes down to order level, with the number of families in parentheses. [ 3 ]
Coelacanthiformes (1) — coelacanths
Ceratodontiformes (3) — lungfishes
Tetrapodomorpha — tetrapods
Polypteriformes (1) — bichirs
Acipenseriformes (2) — paddlefishes and sturgeons
Amiiformes (1) — bowfins
Lepisosteiformes (1) — gars
Elopiformes — tenpounders (ladyfishes), tarpons
Anguilliformes — eels
Notacanthiformes — halosaurs and deep-sea spiny eels
Albuliformes — bonefishes
Hiodontiformes (1) — mooneyes
Osteoglossiformes (5) — bonytongues, butterflyfishes, elephantfishes, abas
Clupeiformes (5) — herrings (including anchovies, sardines, etc)
Alepocephaliformes (2) — slickheads, tubeshoulders
Gonorynchiformes (3) — milkfishes, beaked sandfishes, knerias and snake mudheads
Cypriniformes (24) — carps, loaches, minnows, and relatives
Characiformes (23) — characins (including tetras and piranhas)
Gymnotiformes (5) — Neotropical knifefishes
Siluriformes (41) — catfishes
Lepidogalaxiiformes (1) — salamanderfishes
Argentiniformes (4) — marine smelts
Galaxiiformes (1) — galaxids
Esociformes (2) — pikes, mudminnows
Salmoniformes (1) — trout, salmon, and whitefish
Osmeriformes (4) — freshwater smelts
Stomiiformes (4) — dragonfishes
Ateleopodiformes (1) — jellynose fishes
Aulopiformes (17) — lizardfishes
Myctophiformes (2) — lanternfishes, blackchins
spiny-rayed fishes (43 orders, see below)
The 43 orders of spiny-rayed fishes are related as follows:
Lampriformes (6) — opahs (velifers, opahs, crestfishes, tapertails, ribbonfishes, oarfishes)
Percopsiformes (3) — trout-perches, pirate perches, cavefishes
Zeiformes (6) — dories
Stylephoriformes (1) — tube-eyes or thread-tails
Gadiformes (12) — cods and hakes
Polymixiiformes (1) — beardfishes
Trachichthyiformes (5) — roughies
Beryciformes (8) — alfonsinos, whalefishes, etc
Holocentriformes (1) — squirrelfishes
Ophidiiformes (1) — cusk-eels
Batrachoidiformes (1) — toadfishes
Scombriformes (17) — mackerels
Syngnathiformes (10) — pipefishes and seahorses
Kurtiformes (2) — nurseryfishes and cardinalfishes
Gobiiformes (9) — gobies
Synbranchiformes (4) — swamp eels
Anabantiformes (7) — labyrinth fishes
Carangiformes * (5) — jacks
Istiophoriformes (2) — barracudas, swordfishes and billfishes
Pleuronectiformes (14) — flatfishes
incertae sedis ( Lactariidae , Menidae , Polynemidae )
Carangiformes 1 ( Nematistiidae , Rachycentridae , Coryphaenidae , Echeneidae )
incertae sedis ( Sphyraenidae )
incertae sedis ( Leptobramidae , Toxotidae )
Istiophoriformes
Carangiformes 2 ( Carangidae )
incertae sedis ( Centropomidae )
Pleuronectiformes
Cichliformes (2) — cichlids
Beloniformes (5) — needlefishes
Cyprinodontiformes (10) — killifishes
Atheriniformes (10) — silversides
Mugiliformes (1) — mullets
Gobiesociformes (1) — clingfishes
Blenniiformes (6) — blennies
Gerreiformes (1) —
Uranoscopiformes (4) —
Labriformes (1) — wrasses and relatives
Ephippiformes (2) —
Chaetodontiformes (2) — butterflyfishes and ponyfishes
Acanthuriformes (3) — surgeonfishes and relatives
Lutjaniformes (2) — snappers and grunts
Lobotiformes (3) —
Spariformes (3) — breams and porgies
Priacanthiformes (2) — bigeyes and bandfishes
Caproiformes (1) — boarfishes
Lophiiformes (18) — anglerfishes
Tetraodontiformes (10) — plectognaths
Pempheriformes (17) —
Centrarchiformes (19) —
Perciformes (61) — perches, mail-cheeked fishes (includes former Scorpaeniformes, Cottiformes, and Trachiniformes)
This article about a bony fish is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Phylogenetic_classification_of_bony_fishes |
Phylogenetic comparative methods ( PCMs ) use information on the historical relationships of lineages ( phylogenies ) to test evolutionary hypotheses. The comparative method has a long history in evolutionary biology; indeed, Charles Darwin used differences and similarities between species as a major source of evidence in The Origin of Species . However, the fact that closely related lineages share many traits and trait combinations as a result of the process of descent with modification means that lineages are not independent. This realization inspired the development of explicitly phylogenetic comparative methods. [ 1 ] Initially, these methods were primarily developed to control for phylogenetic history when testing for adaptation ; [ 2 ] however, in recent years the use of the term has broadened to include any use of phylogenies in statistical tests. [ 3 ] Although most studies that employ PCMs focus on extant organisms, many methods can also be applied to extinct taxa and can incorporate information from the fossil record . [ 4 ]
PCMs can generally be divided into two types of approaches: those that infer the evolutionary history of some character ( phenotypic or genetic ) across a phylogeny and those that infer the process of evolutionary branching itself ( diversification rates ), though there are some approaches that do both simultaneously. [ 5 ] Typically the tree that is used in conjunction with PCMs has been estimated independently (see computational phylogenetics ) such that both the relationships between lineages and the lengths of branches separating them are assumed to be known.
Phylogenetic comparative approaches can complement other ways of studying adaptation, such as studying natural populations, experimental studies, and mathematical models. [ 6 ] Interspecific comparisons allow researchers to assess the generality of evolutionary phenomena by considering independent evolutionary events. Such an approach is particularly useful when there is little or no variation within species. And because they can be used to explicitly model evolutionary processes occurring over very long time periods, they can provide insight into macroevolutionary questions, once the exclusive domain of paleontology . [ 4 ]
Phylogenetic comparative methods are commonly applied to such questions as:
→ Example: how does brain mass vary in relation to body mass ?
→ Example: do canids have larger hearts than felids ?
→ Example: do carnivores have larger home ranges than herbivores?
→ Example: where did endothermy evolve in the lineage that led to mammals?
→ Example: where, when, and why did placentas and viviparity evolve?
→ Example: are behavioral traits more labile during evolution?
→ Example: why do small-bodied species have shorter life spans than their larger relatives?
Felsenstein [ 1 ] proposed the first general statistical method in 1985 for incorporating phylogenetic information, i.e., the first that could use any arbitrary topology (branching order) and a specified set of branch lengths. The method is now recognized as an algorithm that implements a special case of what are termed phylogenetic generalized least-squares models. [ 8 ] The logic of the method is to use phylogenetic information (and an assumed Brownian motion like model of trait evolution) to transform the original tip data (mean values for a set of species) into values that are statistically independent and identically distributed .
The algorithm involves computing values at internal nodes as an intermediate step, but they are generally not used for inferences by themselves. An exception occurs for the basal (root) node, which can be interpreted as an estimate of the ancestral value for the entire tree (assuming that no directional evolutionary trends [e.g., Cope's rule ] have occurred) or as a phylogenetically weighted estimate of the mean for the entire set of tip species (terminal taxa). The value at the root is equivalent to that obtained from the "squared-change parsimony" algorithm and is also the maximum likelihood estimate under Brownian motion. The independent contrasts algebra can also be used to compute a standard error or confidence interval .
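The pruning logic can be written compactly. The following is a minimal sketch (not Felsenstein's own code): trees are nested (child, branch length) pairs, tips carry trait means, and the invented example returns both the standardised contrasts and the root estimate; branch lengths are lengthened by the usual v₁v₂/(v₁+v₂) term as nodes are pruned.

```python
import math

def contrasts(node):
    """Return (node value, extra branch length, contrasts) for a subtree.
    A node is either a float (tip trait mean) or ((left, bl), (right, bl))."""
    if isinstance(node, (int, float)):
        return float(node), 0.0, []
    (left, bl_left), (right, bl_right) = node
    x1, extra1, c1 = contrasts(left)
    x2, extra2, c2 = contrasts(right)
    v1, v2 = bl_left + extra1, bl_right + extra2  # corrected branch lengths
    contrast = (x1 - x2) / math.sqrt(v1 + v2)     # standardised contrast
    x = (x1 / v1 + x2 / v2) / (1 / v1 + 1 / v2)   # weighted nodal value
    return x, v1 * v2 / (v1 + v2), c1 + c2 + [contrast]

# Invented tree ((A:1, B:1):1, C:2) with tip means 4.0, 6.0 and 5.0:
tree = ((((4.0, 1.0), (6.0, 1.0)), 1.0), (5.0, 2.0))
root, _, cs = contrasts(tree)
print(root, cs)  # root estimate 5.0 and two standardised contrasts
```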
Probably the most commonly used PCM is phylogenetic generalized least squares (PGLS). [ 8 ] [ 9 ] This approach is used to test whether there is a relationship between two (or more) variables while accounting for the fact that lineages are not independent. The method is a special case of generalized least squares (GLS) and as such the PGLS estimator is also unbiased , consistent , efficient , and asymptotically normal . [ 10 ] In many statistical situations where GLS (or ordinary least squares, OLS) is used, residual errors ε are assumed to be independent and identically distributed normal random variables, ε ~ N(0, σ²I),
whereas in PGLS the errors are assumed to be distributed as ε ~ N(0, σ²V),
where V is a matrix of expected variance and covariance of the residuals given an evolutionary model and a phylogenetic tree. Therefore, it is the structure of residuals and not the variables themselves that show phylogenetic signal . This has long been a source of confusion in the scientific literature. [ 11 ] A number of models have been proposed for the structure of V such as Brownian motion [ 8 ] Ornstein-Uhlenbeck , [ 12 ] and Pagel's λ model. [ 13 ] (When a Brownian motion model is used, PGLS is identical to the independent contrasts estimator. [ 14 ] ). In PGLS, the parameters of the evolutionary model are typically co-estimated with the regression parameters.
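As a concrete illustration of the estimator, here is a hedged numpy sketch; the covariance matrix V (a Brownian-motion matrix whose entries are shared root-to-ancestor branch lengths) and the data are invented, and a real analysis would co-estimate the parameters of the evolutionary model as noted above.

```python
import numpy as np

# Brownian-motion V for the tree ((A:1, B:1):1, C:2): diagonal entries are
# root-to-tip lengths, off-diagonals are shared path lengths from the root.
V = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])

y = np.array([4.0, 6.0, 5.0])                       # response (e.g. log brain mass)
X = np.column_stack([np.ones(3), [1.0, 2.0, 1.5]])  # intercept + predictor

Vinv = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)  # GLS estimate
print(beta)  # phylogenetically corrected intercept and slope
```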
PGLS can only be applied to questions where the dependent variable is continuously distributed; however, the phylogenetic tree can also be incorporated into the residual distribution of generalized linear models , making it possible to generalize the approach to a broader set of distributions for the response. [ 15 ] [ 16 ] [ 17 ]
Martins and Garland [ 18 ] proposed in 1991 that one way to account for phylogenetic relations when conducting statistical analyses was to use computer simulations to create many data sets that are consistent with the null hypothesis under test (e.g., no correlation between two traits, no difference between two ecologically defined groups of species) but that mimic evolution along the relevant phylogenetic tree. If such data sets (typically 1,000 or more) are analyzed with the same statistical procedure that is used to analyze a real data set, then results for the simulated data sets can be used to create phylogenetically correct (or "PC" [ 7 ] ) null distributions of the test statistic (e.g., a correlation coefficient, t, F). Such simulation approaches can also be combined with such methods as phylogenetically independent contrasts or PGLS (see above).
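A minimal sketch of this simulation idea, assuming a Brownian-motion model on the small invented tree from the previous sketch (two traits are simulated without any true correlation, and the simulated correlation coefficients form the null distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
V = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 0.0], [0.0, 0.0, 2.0]])
L = np.linalg.cholesky(V)     # tip values under Brownian motion: L @ N(0, I)

def null_correlations(n_sims=1000):
    """Correlations between two traits evolved independently on the tree."""
    rs = []
    for _ in range(n_sims):
        x = L @ rng.standard_normal(3)
        y = L @ rng.standard_normal(3)
        rs.append(np.corrcoef(x, y)[0, 1])
    return np.array(rs)

null = null_correlations()
observed_r = 0.9                                  # hypothetical observed value
print(np.mean(np.abs(null) >= abs(observed_r)))   # two-tailed Monte Carlo p
```
| https://en.wikipedia.org/wiki/Phylogenetic_comparative_methods |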
Phylogenetic footprinting is a technique used to identify transcription factor binding sites (TFBS) within a non-coding region of DNA of interest by comparing it to the orthologous sequence in different species . When this technique is used with a large number of closely related species, this is called phylogenetic shadowing . [ 1 ]
Researchers have found that non-coding pieces of DNA contain binding sites for regulatory proteins that govern the spatiotemporal expression of genes . These transcription factor binding sites (TFBS), or regulatory motifs, have proven hard to identify, primarily because they are short in length, and can show sequence variation. The importance of understanding transcriptional regulation to many fields of biology has led researchers to develop strategies for predicting the presence of TFBS, many of which have led to publicly available databases. One such technique is Phylogenetic Footprinting .
Phylogenetic footprinting relies upon two major concepts:
Phylogenetic footprinting was first used and published by Tagle et al. in 1988, which allowed researchers to predict evolutionarily conserved cis-regulatory elements responsible for embryonic ε and γ globin gene expression in primates. [ 3 ]
Before phylogenetic footprinting, DNase footprinting was used, where protein would be bound to DNA transcription factor binding sites (TFBS) protecting it from DNase digestion. One of the problems with this technique was the amount of time and labor it would take. Unlike DNase footprinting, phylogenetic footprinting relies on evolutionary constraints within the genome, with the "important" parts of the sequence being conserved among the different species. [ 4 ]
It is important when using this technique to decide which genome your sequence should be aligned to. More divergent species will have less sequence similarity between orthologous genes. Therefore, the key is to pick species that are related enough for homology to be detected, but divergent enough that nonfunctional sequence has diverged, so that conserved motifs stand out against the background "noise".
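As a toy illustration of these ideas, the sketch below scans an alignment of orthologous promoter fragments for unusually conserved windows; the sequences, window width, and identity cutoff are all invented.

```python
# Invented orthologous promoter fragments sharing a conserved GGGCGGAG core.
alignment = ["ACGTGGGCGGAGCTTAACGT",
             "ACTTGGGCGGAGCATAACTT",
             "AGTTGGGCGGAGCATATCTT"]

def conserved_windows(aln, width=8, min_identity=0.99):
    """Yield (start, consensus) for windows whose mean column identity
    (frequency of the most common base per column) exceeds the cutoff."""
    for start in range(len(aln[0]) - width + 1):
        scores, consensus = [], []
        for col in range(start, start + width):
            bases = [seq[col] for seq in aln]
            best = max(set(bases), key=bases.count)
            scores.append(bases.count(best) / len(bases))
            consensus.append(best)
        if sum(scores) / width >= min_identity:
            yield start, "".join(consensus)

for start, motif in conserved_windows(alignment):
    print(start, motif)  # the fully conserved windows spanning GGGCGGAG
```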
A stepwise approach to phylogenetic footprinting consists of:
Not all transcription factor binding sites can be found using phylogenetic footprinting due to the statistical nature of this technique. Here are several reasons why some TFBS are not found:
Some binding sites seem to have no significant matches in most other species. Therefore, detecting these sites by phylogenetic footprinting is likely impossible unless a large number of closely related species are available.
Some binding sites show excellent conservation, but only in a shorter region than the one searched for. Such short motifs (e.g., GC-box) often occur by chance in nonfunctional sequences and detecting these motifs can be challenging.
Some binding sites show some conservation but have had insertions or deletions. It is not obvious whether these sequences with insertions or deletions are still functional, though they may still be functional if the binding factor is less specific. Because deletions and insertions are rare in binding sites, considering insertions and deletions in the sequence would detect a few more true TFBSs, but it could likely include many more false positives.
Some motifs are quite well conserved, but they are statistically insignificant in a specific dataset. The motif might have appeared in different species by chance. These motifs could be detected if sequences from more organisms are available. So this will be less of a problem in the future.
Some transcription factors bind as dimers. Therefore, their binding sites may consist of two conserved regions separated by a few variable nucleotides. Because of the variable internal sequence, the motif cannot be detected by standard searches. However, a program that searches for motifs containing a variable internal sequence, without penalizing mismatches there, could discover these motifs.
It is important to keep in mind that not all conserved sequences are under selection pressure. To eliminate false positives statistical analysis must be performed that will show that the motifs reported have a mutation rate meaningfully less than that of the surrounding nonfunctional sequence.
Moreover, results could be more accurate if prior knowledge about the sequence is considered. For example, some regulatory elements are repeated up to 15 times in a promoter region (e.g., some metallothionein promoters have up to 15 metal response elements (MREs)). Thus, to eliminate false motifs with inconsistent order across species, the orientation and order of regulatory elements in a promoter region should be the same in all species. This type of information could help us to identify regulatory elements that are not adequately conserved but occur in several copies in the input sequence. [ 5 ] | https://en.wikipedia.org/wiki/Phylogenetic_footprinting |
Phylogenetic inertia or phylogenetic constraint refers to the limitations on the future evolutionary pathways that have been imposed by previous adaptations . [ 1 ]
Charles Darwin first recognized this phenomenon, though the term was later coined by Huber in 1939. [ 2 ] Darwin explained the idea of phylogenetic inertia based on his observations; he spoke about it when explaining the "Law of Conditions of Existence". [ 3 ] Darwin also suggested that, after speciation , the organisms do not start over from scratch, but have characteristics that are built upon already existing ones that were inherited from their ancestors; and these characteristics likely limit the amount of evolution seen in that new taxa . [ 4 ] This is the main concept of phylogenetic inertia.
Richard Dawkins also explained these constraints by likening natural selection to a river in his 1982 book The Extended Phenotype . [ 5 ]
Birds are the only speciose group of vertebrates that are exclusively oviparous , or egg laying. It has been suggested that birds are phylogenetically constrained, as being derived from reptiles , and likely have not overcome this constraint or diverged far enough away to develop viviparity , or live birth. [ 8 ] [ 9 ]
There have been several studies that have been able to effectively test for phylogenetic inertia when looking into shared traits; predominantly with a comparative methods approach. [ 11 ] [ 12 ] [ 13 ] Some have used comparative methods and found evidence for certain traits attributed to adaptation, and some to phylogeny; there were also numerous traits that could be attributed to both. [ 12 ] Another study developed a new method of comparative examination that showed to be a powerful predictor of phylogenetic inertia in a variety of situations. It was called Phylogenetic Eigenvector Regression (PVR), which runs principal component analyses between species on a pairwise phylogenetic distance matrix . [ 11 ] In another, different study, the authors described methods for measuring phylogenetic inertia, looked at effectiveness of various comparative methods, and found that different methods can reveal different aspects of drivers. Autoregression and PVR showed good results with morphological traits. [ 13 ] | https://en.wikipedia.org/wiki/Phylogenetic_inertia |
In molecular phylogenetics , relationships among individuals are determined using character traits, such as DNA , RNA or protein , which may be obtained using a variety of sequencing technologies. High-throughput next-generation sequencing has become a popular technique in transcriptomics , which represent a snapshot of gene expression. In eukaryotes , making phylogenetic inferences using RNA is complicated by alternative splicing , which produces multiple transcripts from a single gene . As such, a variety of approaches may be used to improve phylogenetic inference using transcriptomic data obtained from RNA-Seq and processed using computational phylogenetics .
There have been several transcriptomics technologies used to gather sequence information on transcriptomes . However, the most widely used is RNA-Seq .
RNA reads may be obtained using a variety of RNA-seq methods.
There are a number of public databases that contain freely available RNA-Seq data.
RNA-Seq data may be directly assembled into transcripts using sequence assembly .
Two main categories of sequence assembly are often distinguished: de novo assembly, which reconstructs transcripts directly from overlapping reads without a reference, and genome-guided (reference-based) assembly, which first aligns reads to an existing reference genome.
Both methods attempt to generate biologically representative isoform-level constructs from RNA-seq data and generally attempt to associate isoforms with a gene-level construct. However, proper identification of gene-level constructs may be complicated by recent duplications , paralogs , alternative splicing or gene fusions . These complications may also cause downstream issues during ortholog inference. When selecting or generating sequence data, it is also vital to consider the tissue type, developmental stage and environmental conditions of the organisms. Since the transcriptome represents a snapshot of gene expression , minor changes to these conditions may significantly affect which transcripts are expressed. This may detrimentally affect downstream ortholog detection. [ 1 ]
RNA may also be acquired from public databases, such as GenBank , RefSeq , 1000 Plants (1KP) and 1KITE . Public databases potentially offer curated sequences which can improve inference quality and avoid the computational overhead associated with sequence assembly .
Orthology or paralogy inference requires an assessment of sequence homology , usually via sequence alignment . Phylogenetic analyses and sequence alignment are often considered jointly, as phylogenetic analyses using DNA or RNA require sequence alignment and alignments themselves often represent some hypothesis of homology . As proper ortholog identification is pivotal to phylogenetic analyses, there are a variety of methods available to infer orthologs and paralogs . [ 2 ]
These methods are generally distinguished as either graph-based algorithms or tree-based algorithms. Some examples of graph-based methods include InParanoid, [ 3 ] MultiParanoid, [ 4 ] OrthoMCL, [ 5 ] HomoloGene [ 6 ] and OMA. [ 7 ] Tree-based algorithms include programs such as OrthologID or RIO. [ 8 ] [ 2 ]
A variety of BLAST methods are often used to detect orthologs between species as a part of graph-based algorithms, such as MegaBLAST, BLASTALL, or other forms of all-versus-all BLAST and may be nucleotide - or protein -based alignments . [ 9 ] [ 10 ] RevTrans [ 11 ] will even use protein data to inform DNA alignments, which can be beneficial for resolving more distant phylogenetic relationships. These approaches often assume that best-reciprocal-hits passing some threshold metric(s), such as identity, E-value, or percent alignment, represent orthologs and may be confounded by incomplete lineage sorting . [ 12 ] [ 13 ]
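As an illustration of the best-reciprocal-hit heuristic described above, the following Python fragment is a minimal sketch; it assumes the all-versus-all search results have already been parsed into dictionaries mapping each gene to its best hit in the other species (the data structures and E-value cutoff here are hypothetical, not part of any particular tool):

```python
# Minimal reciprocal-best-hit (RBH) ortholog sketch.
# best_hit_ab[gene_a] = (gene_b, e_value): best hit of each gene from
# species A in species B, and vice versa; assumed to be pre-parsed
# from an all-versus-all BLAST run.

def reciprocal_best_hits(best_hit_ab, best_hit_ba, e_cutoff=1e-5):
    """Return putative ortholog pairs (a, b) where a's best hit is b,
    b's best hit is a, and both hits pass the E-value cutoff."""
    orthologs = []
    for a, (b, e_ab) in best_hit_ab.items():
        hit = best_hit_ba.get(b)
        if hit is None:
            continue
        a_back, e_ba = hit
        if a_back == a and e_ab <= e_cutoff and e_ba <= e_cutoff:
            orthologs.append((a, b))
    return orthologs
```

Real graph-based tools layer clustering and additional threshold metrics (identity, percent alignment) on top of this basic reciprocity check.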
It is important to note that orthology relationships in public databases typically represent gene-level orthology and do not provide information concerning conserved alternative splice variants .
A number of public databases contain and/or detect orthologous relationships.
As eukaryotic transcription is a complex process by which multiple transcripts may be generated from a single gene through alternative splicing with variable expression , the utilization of RNA is more complicated than DNA. However, transcriptomes are cheaper to sequence than complete genomes and may be obtained without the use of a pre-existing reference genome . [ 1 ]
It is not uncommon to translate RNA sequence into protein sequence when using transcriptomic data, especially when analyzing highly diverged taxa. This is an intuitive step as many (but not all) transcripts are expected to code for protein isoforms . Potential benefits include the reduction of mutational biases and a reduced number of characters, which may speed analyses. However, this reduction in characters may also result in the loss of potentially informative characters. [ 1 ]
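As a concrete illustration of this translation step, a coding sequence can be translated into protein with Biopython (a minimal sketch; the sequence is invented, and a real pipeline would first identify open reading frames):

```python
from Bio.Seq import Seq

# A hypothetical in-frame coding sequence (invented toy data).
cds = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG")

protein = cds.translate(to_stop=True)  # translate until the first stop codon
print(protein)  # MAIVMGR
```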
There are a number of tools available for multiple sequence alignment , all of which possess their own strengths and weaknesses and may be specialized for distinct sequence types (DNA, RNA or protein). As such, a splice-aware aligner may be ideal for aligning RNA sequences, whereas an aligner that considers protein structure or residue substitution rates may be preferable for translated RNA sequence data.
Using RNA for phylogenetic analysis comes with its own unique set of strengths and weaknesses.
Phylogenetic invariants [ 1 ] are polynomial relationships between the frequencies of various site patterns in an idealized DNA multiple sequence alignment . They have received substantial study in the field of biomathematics , and they can be used to choose among phylogenetic tree topologies in an empirical setting. The primary advantage of phylogenetic invariants relative to other methods of phylogenetic estimation like maximum likelihood or Bayesian MCMC analyses is that invariants can yield information about the tree without requiring the estimation of branch lengths or model parameters. The idea of using phylogenetic invariants was introduced independently by James Cavender and Joseph Felsenstein [ 2 ] and by James A. Lake [ 3 ] in 1987.
At this point the number of programs that allow empirical datasets to be analyzed using invariants is limited. However, phylogenetic invariants may provide solutions to other problems in phylogenetics and they represent an area of active research for that reason. Felsenstein [ 4 ] stated it best when he said, "invariants are worth attention, not for what they do for us now, but what they might lead to in the future." (p. 390)
If we consider a DNA (or RNA ) multiple sequence alignment with t taxa and no gaps or missing data (i.e., an idealized multiple sequence alignment ), there are 4 t possible site patterns. For example, there are 256 possible site patterns for four taxa ( f AAAA , f AAAC , f AAAG , ... f TTTT ), which can be written as a vector. This site pattern frequency vector has 255 degrees of freedom because the frequencies must sum to one. However, any set of site pattern frequencies that resulted from some specific process of sequence evolution on a specific tree must obey many constraints, and therefore has many fewer degrees of freedom. Thus, there should be polynomials involving those frequencies that take on a value of zero if the DNA sequences were generated on a specific tree given a particular substitution model .
Invariants are formulas in the expected pattern frequencies, not the observed pattern frequencies. When they are computed using the observed pattern frequencies, we will usually find that they are not precisely zero even when the model and tree topology are correct. By testing whether such polynomials for various trees are 'nearly zero' when evaluated on the observed frequencies of patterns in real data sequences, one should be able to infer which tree best explains the data.
Some invariants are straightforward consequences of symmetries in the model of nucleotide substitution , and they will take on a value of zero regardless of the underlying tree topology. For example, if we assume the Jukes-Cantor model of sequence evolution and a four-taxon tree, we expect:
f_{ACAT} - f_{CGCA} = 0
This is a simple outgrowth of the fact that base frequencies are constrained to be equal under the Jukes-Cantor model. Thus, they are called symmetry invariants . The equation shown above is only one of a large number of symmetry invariants for the Jukes-Cantor model; in fact, there are a total of 241 symmetry invariants for that model.
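To make these objects concrete, the sketch below (with an invented toy alignment) tabulates observed site-pattern frequencies from a gap-free four-taxon alignment and evaluates the symmetry invariant shown above; under the Jukes-Cantor model its expected value is zero, though with real data it will only be approximately zero:

```python
from collections import Counter

# Gap-free alignment for four taxa (invented toy data).
aln = {"A": "ACGTACGT",
       "B": "ACGTACGT",
       "C": "ACGAACGA",
       "D": "ACGAACGA"}

taxa = ["A", "B", "C", "D"]
n_sites = len(aln[taxa[0]])

# Observed site-pattern frequencies f_xyzw (255 free parameters for 4 taxa).
counts = Counter("".join(aln[t][i] for t in taxa) for i in range(n_sites))
freqs = {pat: c / n_sites for pat, c in counts.items()}

# Evaluate the symmetry invariant f_ACAT - f_CGCA of the Jukes-Cantor model.
value = freqs.get("ACAT", 0.0) - freqs.get("CGCA", 0.0)
print(value)
```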
Symmetry invariants are non-phylogenetic in nature; they take on the expected value of zero regardless of the tree topology. However, they make it possible to determine whether a particular multiple sequence alignment fits the Jukes-Cantor model of evolution (i.e., by testing whether the site patterns of the appropriate types are present in equal numbers). More general tests for the best-fitting model using invariants are also possible. For example, Kedzierska et al. 2012 [ 5 ] used invariants to establish the best-fitting model from a specific set of candidate models.
The asterisk after the JC69, K80, and K81 models is used to emphasize the non-homogeneous nature of the models that can be examined using invariants. These non-homogeneous models include the commonly used continuous-time JC69, K80, and K81 models as submodels. The SSM (strand-specific model [ 6 ] ), also called the CS05 [ 7 ] model, is a generalized non-homogeneous version of the HKY (Hasegawa-Kishino-Yano) model [ 8 ] constrained to have equal distribution of the pairs of bases A,T and C,G at each node of the tree and no assumption regarding a stable base distribution. All models listed above are submodels of the general Markov model [ 9 ] (GMM). The ability to perform tests using non-homogeneous models represents a major benefit of the invariants methods relative to the more commonly used maximum likelihood methods for phylogenetic model testing.
Phylogenetic invariants , which are defined as the subset of invariants that take on a value of zero only when the sequences were (or were not) generated on a specific topology, are likely to be the most useful invariants for phylogenetic studies.
Lake's invariants [ 3 ] (which he called "evolutionary parsimony") provide an excellent example of phylogenetic invariants. Lake's invariants involve quartets: the two incorrect topologies yield values of zero, while the correct topology yields a value greater than zero. This can be used to construct a test based on the following invariant relationship, which holds for the two incorrect trees when sites evolve under the Kimura two-parameter model of sequence evolution:
f_{1133} + f_{1234} = f_{1233} + f_{1134}
The indices of these site pattern frequencies indicate the bases scored relative to the base in the first taxon (which we call taxon A). If base 1 is a purine , then base 2 is the other purine and bases 3 and 4 are the pyrimidines . If base 1 is a pyrimidine, then base 2 is the other pyrimidine and bases 3 and 4 are the purines.
We will call the three possible quartet trees T 1 [in newick format, T 1 is (A,B,(C,D));], T 2 [T 2 is (A,C,(B,D));], and T 3 [T 3 is (A,D,(B,C));]. We can calculate three values from the data to identify the best topology:
X = N_{1133} - N_{1233} - N_{1134} + N_{1234}
Y = N_{1313} - N_{1323} - N_{1314} + N_{1324}
Z = N_{1331} - N_{1332} - N_{1341} + N_{1342}
Lake broke these values up into a "parsimony-like term" (P = N_{1133} + N_{1234} for T 1) and a "background term" (B = N_{1233} + N_{1134} for T 1), and suggested testing for deviation from zero by calculating χ² = (P − B)² / (P + B) and performing a χ² test with one degree of freedom . Similar χ² tests can be performed for Y and Z. If one of the three values is significantly different from zero, the corresponding topology is the best estimate of phylogeny. The advantage of using Lake's invariants relative to maximum likelihood or neighbor joining of Kimura two-parameter distances is that the invariants should hold regardless of the model parameters, branch lengths, or patterns of among-sites rate heterogeneity.
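A minimal Python sketch of this test follows. It recodes each site relative to taxon A as described above, assigning the label 3 to the first transversion base encountered at a site (this tie-breaking rule is an implementation choice, not part of Lake's description), and then applies the χ² test for T 1:

```python
from collections import Counter
from scipy.stats import chi2

PURINES, PYRIMIDINES = "AG", "CT"

def recode(site):
    """Recode a 4-base site (taxa A, B, C, D) as digits relative to taxon A:
    1 = A's base, 2 = its transition partner, 3/4 = the two transversion
    bases in order of first appearance (a tie-breaking convention)."""
    a = site[0]
    same = PURINES if a in PURINES else PYRIMIDINES
    labels = {a: "1", same.replace(a, ""): "2"}
    nxt = iter("34")
    out = []
    for b in site:
        if b not in labels:
            labels[b] = next(nxt)
        out.append(labels[b])
    return "".join(out)

def lake_test_t1(sites):
    """Lake's test for tree T1 = (A,B,(C,D)); returns (X, p-value)."""
    n = Counter(recode(s) for s in sites)
    P = n["1133"] + n["1234"]   # parsimony-like term
    B = n["1233"] + n["1134"]   # background term
    X = P - B
    if P + B == 0:
        return X, 1.0
    stat = (P - B) ** 2 / (P + B)
    return X, chi2.sf(stat, df=1)  # chi-square, one degree of freedom
```

The tests for Y (T 2) and Z (T 3) are analogous, using the corresponding recoded pattern classes.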
A classic study by John Huelsenbeck and David Hillis [ 10 ] found that Lake's invariants converge on the true tree over all of the branch-length space they examined when the Kimura two-parameter (K80) model [ 11 ] is the underlying model of evolution. However, they also found that Lake's invariants are very inefficient (large amounts of data are necessary to converge on the correct tree). This inefficiency has caused most empiricists to abandon the use of Lake's invariants. Also, because Lake's invariants are based on the K80 model, phylogenetic estimation using Lake's invariants may not yield the true tree when the process that generated the data strongly violates that model.
The low efficiency of Lake's invariants reflects the fact that it used a limited set of generators for the phylogenetic invariants. Casanellas et al. [ 12 ] introduced methods to derive a much larger set of generators for DNA data, and this has led to the development of invariants methods that are as efficient as maximum likelihood methods. [ 13 ] Several of these methods have implementations that are practical for analyses of empirical datasets.
"Squangles" ( s tochastic qu artet t angles [ 14 ] ) are another example of a modern invariants method [ 15 ] and it has been implemented in software package that is practical to be used with empirical datasets. Squangles permit the choice among the three possible quartets assuming that DNA sequences have evolved under the general Markov model ; the quartets can then be assembled using a supertree method. There are three squangles that are useful for differentiating among quartets, which can be denoted as q 1 (f), q 2 (f), and q 3 (f) (f is a 256 element vector containing the site frequency spectrum). Each q has 66,744 terms and together they satisfy the linear relation q 1 + q 2 + q 3 = 0 (i.e., up to linear dependence there are only two q values). Each possible quartet has different expected values for q 1 , q 2 , and q 3 :
The expected values q 1 , q 2 , and q 3 are all zero on the star topology (a quartet with an internal branch length of zero). For practicality, Holland et al. [ 15 ] used least squares to solve for the q values. Empirical tests of the squangles method have been limited [ 15 ] [ 16 ] but they appear to be promising.
Another important class of modern invariants methods is based on the use of singular value decomposition (SVD) to examine the rank of matrices corresponding to flattenings of the tensor of site pattern frequencies. The flattening matrices for a four-taxon tree are constructed by arranging the nucleotide site pattern frequencies into a 16 x 16 matrix based on a bipartition in the tree. For example, using the T 1 topology that was defined above as (A,B,(C,D));, we use the bipartition AB|CD, and the elements of the flattening matrix correspond to the site pattern frequencies in the order ABCD. As shown below, the rows are indexed by the states of the first pair of taxa (A and B in the case of T 1) and the columns by the states of the second pair of taxa (C and D in the case of T 1).
\mathbf{Flat_{T1}} = \begin{bmatrix}
p_{AAAA} & p_{AAAC} & p_{AAAG} & p_{AAAT} & p_{AACA} & \cdots & p_{AATT} \\
p_{ACAA} & p_{ACAC} & p_{ACAG} & p_{ACAT} & p_{ACCA} & \cdots & p_{ACTT} \\
p_{AGAA} & p_{AGAC} & p_{AGAG} & p_{AGAT} & p_{AGCA} & \cdots & p_{AGTT} \\
p_{ATAA} & p_{ATAC} & p_{ATAG} & p_{ATAT} & p_{ATCA} & \cdots & p_{ATTT} \\
p_{CAAA} & p_{CAAC} & p_{CAAG} & p_{CAAT} & p_{CACA} & \cdots & p_{CATT} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
p_{TTAA} & p_{TTAC} & p_{TTAG} & p_{TTAT} & p_{TTCA} & \cdots & p_{TTTT}
\end{bmatrix}
The other possible trees can be constructed by permuting the taxa; for example, in the case of T2 (A,C,(B,D)); (i.e., the bipartition AC|BD) the rows would be defined by taxa A and C and the columns would be defined by taxa B and D.
This approach was pioneered by Eriksson, [ 17 ] who proposed the SVD of flattening matrices along with a novel tree construction algorithm. However, the performance of the original Eriksson method (ErikSVD) was mixed when it was compared to neighbor-joining and the maximum likelihood approach implemented in the PHYLIP program dnaml. ErikSVD appeared to perform better than dnaml when applied to an empirical mammalian dataset based on an early release of data from the ENCODE project, but it underperformed the other phylogenetic methods when it was used with simulated data. Fernández-Sánchez and Casanellas [ 18 ] proposed a normalization (Erik+2) that improved the original ErikSVD method. ErikSVD is statistically consistent given the general Markov model of sequence evolution (it converges on the true tree as the empirical distribution approaches the theoretical distribution), but the Erik+2 normalization exhibited improved performance given finite datasets. ErikSVD and Erik+2 have been implemented in the software package PAUP* as part of the SVDquartets method. [ 19 ]
Scores for the alternative topologies are calculated using the singular values, as shown below:
\delta_n(T_1) = \sqrt{\sum_{i=n+1}^{N} \widehat{\sigma}_i^{\,2}}
where σ̂_i is the i-th singular value of the flattening matrix for the appropriate topology. The δ_n values provide information about the rank of the flattening matrix; if the sequences were generated on a single tree topology, then δ_4 will be zero in expectation when the flattening matrix corresponds to the correct topology [ 1 ] (i.e., the rank of the flattening matrix will be at most 4). When the sequences were generated on a mixture of trees related by the multispecies coalescent , we expect the rank of the flattening matrix for the correct tree to be no more than 10. [ 19 ] This is the basis for the popular SVDquartets method.
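A minimal numpy sketch of this score, assuming a 16 × 16 flattening matrix has already been filled with site-pattern frequencies in the row and column order described above:

```python
import numpy as np

def svd_score(flattening, rank=4):
    """SVDquartets-style score: the norm of the singular values of the
    flattening matrix beyond the expected rank (4 for a single tree under
    the general Markov model, 10 under the multispecies coalescent).
    Lower scores favour the corresponding split."""
    s = np.linalg.svd(flattening, compute_uv=False)  # singular values
    return np.sqrt(np.sum(s[rank:] ** 2))

# Example with a random (hence arbitrary) 16 x 16 frequency matrix;
# in practice one scores the flattening for each candidate split and
# the smallest score picks the topology.
rng = np.random.default_rng(0)
F = rng.random((16, 16))
F /= F.sum()
print(svd_score(F, rank=4))
```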
A phylogenetic network is any graph used to visualize evolutionary relationships (either abstractly or explicitly) [ 1 ] between nucleotide sequences , genes , chromosomes , genomes , or species . [ 2 ] They are employed when reticulation events such as hybridization , horizontal gene transfer , recombination , or gene duplication and loss are believed to be involved. They differ from phylogenetic trees by the explicit modeling of richly linked networks, by means of the addition of hybrid nodes (nodes with two parents) instead of only tree nodes (a hierarchy of nodes, each with only one parent). [ 3 ] Phylogenetic trees are a subset of phylogenetic networks. Phylogenetic networks can be inferred and visualised with software such as SplitsTree , [ 4 ] the R package phangorn, [ 5 ] [ 6 ] and, more recently, Dendroscope . A standard format for representing phylogenetic networks is a variant of Newick format extended to support networks as well as trees. [ 7 ]
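As an illustration of such an extended Newick ("eNewick") representation, a hybrid node can be written by repeating a tag of the form #H1 at each point where the node attaches; in the invented example below, taxon B descends from a hybridization between lineages related to A and C:

```
((A,(B)#H1),(#H1,C));
```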
Many kinds and subclasses of phylogenetic networks have been defined based on the biological phenomenon they represent or which data they are built from (hybridization networks, usually built from rooted trees, ancestral recombination graphs (ARGs) from binary sequences, median networks from a set of splits , optimal realizations and reticulograms from a distance matrix ), or restrictions to get computationally tractable problems (galled trees, and their generalizations level-k phylogenetic networks, tree-child or tree-sibling phylogenetic networks).
Phylogenetic trees also have trouble depicting microevolutionary events, for example the geographical distribution of muskrat or fish populations of a given species among river networks, because there is no species boundary to prevent gene flow between populations. Therefore, a more general phylogenetic network better depicts these situations. [ 8 ]
A number of different types of unrooted phylogenetic networks are in use, such as split networks and quasi-median networks . In most cases, such networks only depict relations between taxa without giving information about the evolutionary history, although some methods produce unrooted networks that can be interpreted as undirected versions of rooted networks, which do represent a phylogeny.
Rooted phylogenetic networks, like rooted phylogenetic trees, give explicit representations of evolutionary history. This means that they visualize the order in which the species diverged (speciated), converged (hybridized), and transferred genetic material (horizontal gene transfer).
For computational purposes, studies often restrict their attention to classes of networks: subsets of all networks with certain properties. Although computational simplicity is the main goal, most of these classes have a biological justification as well. Some prominent classes currently used in the mathematical phylogenetics literature are tree-child networks, [ 9 ] tree-based networks, [ 10 ] and level-k networks. [ 11 ] [ 12 ]
The term phylogenetic niche conservatism has seen increasing use in recent years [ when? ] in the scientific literature, though the exact definition has been a matter of some contention. [ 1 ] Fundamentally, phylogenetic niche conservatism refers to the tendency of species to retain their ancestral traits. When defined as such, phylogenetic niche conservatism is therefore nearly synonymous with phylogenetic signal . The point of contention is whether or not "conservatism" refers simply to the tendency of species to resemble their ancestors, or implies that "closely related species are more similar than expected based on phylogenetic relationships". [ 1 ] If the latter interpretation is employed, then phylogenetic niche conservatism can be seen as an extreme case of phylogenetic signal, and implies that the processes which prevent divergence are in operation in the lineage under consideration. Despite efforts by Jonathan Losos to end this habit, however, the former interpretation appears to frequently motivate scientific research. In this case, phylogenetic niche conservatism might best be considered a form of phylogenetic signal reserved for traits with broad-scale ecological ramifications (i.e. related to the Hutchinsonian niche ). [ 2 ] Thus, phylogenetic niche conservatism is usually invoked with regards to closely related species occurring in similar environments. [ 3 ]
According to a recent review, [ 2 ] the term niche conservatism traces its roots to a book on comparative methods in evolutionary biology. [ 4 ] However, and as these authors also note, the idea is much older. For instance, Darwin observed in the Origin of Species [ 5 ] that species in the same genus tend to resemble one another. This was not a matter of chance, as the entire Linnean taxonomy system is based on classifying species into hierarchically nested groups, e.g. a genus is (and was particularly at the time of Darwin's writing) by definition a collection of similar species. In modern times this pattern has come to be referred to as phylogenetic signal , "the tendency of related species to resemble each other more than species drawn at random from the same tree [ 6 ] ". Methods such as Abouheif’s C, [ 7 ] Pagel's lambda, [ 8 ] Blomberg's K, [ 9 ] and Moran's I [ 10 ] have been employed to test the statistical significance of the pattern. With regards to the term phylogenetic niche conservatism, many authors [ citation needed ] have taken a significant result here—i.e. that phylogenetic information can help "predict" species traits—to be evidence of phylogenetic niche conservatism. Other authors, however, advocate that such a pattern should be expected (i.e. follow from "Descent with modification" [ 5 ] ) and, accordingly, only in instances where species resemble each other more than expected based on their phylogenetic relationships should one invoke the term phylogenetic niche conservatism. [ citation needed ] To take a single statistical test as an example, an unconstrained Brownian motion evolution process will result in a Blomberg's K value of 1; the strict school of thought would only accept a K > 1 as evidence of phylogenetic niche conservatism.
In an influential paper, Wiens and Donoghue [ 3 ] laid out how phylogenetic niche conservatism might help explain the latitudinal diversity gradient . While support for the hypothesis that niche conservatism drives latitudinally structured variation in species richness has been found in some clades, [ 11 ] overall, phylogenetic niche conservatism has not received strong support as the underlying cause responsible for variation in how many species occur in a given habitat. [ 12 ] [ 13 ] It has, however, found considerable support as a factor driving which species occur in a given habitat. [ 13 ] [ 14 ] That is, the study of phylogenetic niche conservatism by itself has not put an end to the long-standing debate over what drives the latitudinal diversity gradient across clades, but within specific clades and across specific environmental gradients (as opposed to latitude sensu stricto), it has found support as a factor influencing which lineages are able to persist. [ 15 ] [ 16 ]
Phylogenetic nomenclature is a method of nomenclature for taxa in biology that uses phylogenetic definitions for taxon names as explained below. This contrasts with the traditional method , by which taxon names are defined by a type , which can be a specimen or a taxon of lower rank , and a description in words. [ 1 ] Phylogenetic nomenclature is regulated currently by the International Code of Phylogenetic Nomenclature ( PhyloCode ).
Phylogenetic nomenclature associates names with clades , groups consisting of an ancestor and all its descendants. Such groups are said to be monophyletic . There are slightly different methods of specifying the ancestor, which are discussed below. Once the ancestor is specified, the meaning of the name is fixed: the ancestor and all organisms which are its descendants are included in the taxon named. Listing all these organisms (i.e. providing a full circumscription ) requires the complete phylogenetic tree to be known. In practice, there are almost always one or more hypotheses as to the correct relationship. Different hypotheses result in different organisms being thought to be included in the named taxon, but the application of the name in the context of various phylogenies generally remains unambiguous. Possible exceptions occur for apomorphy-based definitions, when optimization of the defining apomorphy is ambiguous. [ 2 ]
Phylogenetic nomenclature assigns names to clades , groups consisting solely of an ancestor and all its descendants. All that is needed to specify a clade, therefore, is to designate the ancestor. There are a number of methods of doing this. Commonly, the ancestor is indicated by its relation to two or more specifiers (species, specimens, or traits) that are mentioned explicitly. The diagram shows three common ways of doing this. For previously defined clades A, B, and C, the clade X can be defined by:
a node-based definition: X is the least inclusive clade containing A and B, i.e. the most recent common ancestor of A and B and all its descendants;
a branch-based (or stem-based) definition: X is the most inclusive clade containing A but not C, i.e. the first ancestor of A that is not also an ancestor of C, together with all its descendants;
an apomorphy-based definition: X is the clade originating with the first ancestor of A to possess a specified derived trait (apomorphy).
Several other alternatives are provided in the PhyloCode , [ 4 ] (see below ) though there is no attempt to be exhaustive.
Phylogenetic nomenclature allows the use, not only of ancestral relations , but also of the property of being extant . One of the many methods of specifying the Neornithes (modern birds), for example, is as the clade originating with the most recent common ancestor of all extant birds, that is, that ancestor and all of its descendants.
Neornithes is a crown clade , a clade for which the last common ancestor of its extant members is also the last common ancestor of all its members.
For the PhyloCode , only a clade can receive a "phylogenetic definition", and this restriction is observed in the present article. However, it is also possible to create definitions for the names of other groups that are phylogenetic in the sense that they use only ancestral relations based on species or specimens. [ 5 ] For example, assuming Mammalia and Aves (birds) are defined in this manner, Amniotes could be defined as "the most recent common ancestor of Mammalia and Aves and all its descendants except Mammalia and Aves". This is an example of a paraphyletic group, a clade minus one or more subordinate clades. Names of polyphyletic groups, characterized by a trait that evolved convergently in two or more subgroups, can be defined similarly as the sum of multiple clades. [ 5 ]
Using the traditional nomenclature codes , such as the International Code of Zoological Nomenclature and the International Code of Nomenclature for algae, fungi, and plants , taxa that are not associated explicitly with a rank cannot be named formally, because the application of a name to a taxon is based on both a type and a rank. Thus for example the "family" Hominidae uses the genus Homo as its type; its rank (family) is indicated by the suffix -idae (see discussion below). The requirement for a rank is a major difference between traditional and phylogenetic nomenclature. It has several consequences: it limits the number of nested levels at which names can be applied; it causes the endings of names to change if a group has its rank changed, even if it has precisely the same members (i.e. the same circumscription ); and it is logically inconsistent with all taxa being monophyletic. [ citation needed ]
The current codes have rules stating that names must have certain endings depending on the rank of the taxa to which they are applied. When a group has a different rank in different classifications, its name must have a different suffix. Ereshefsky (1997:512) [ 6 ] gave an example. He noted that Simpson in 1963 and Wiley in 1981 agreed that the same group of genera, which included the genus Homo , should be placed together in a taxon. Simpson treated this taxon as a family, and so gave it the name "Hominidae": "Homin-" from "Homo" and "-idae" as the suffix for family using the zoological code. Wiley considered it to be at the rank of "tribe", and so gave it the name "Hominini", "-ini" being the suffix for tribe. Wiley's tribe Hominini formed only part of a family which he termed "Hominidae". Thus, using the zoological code, two groups with precisely the same circumscription were given different names (Simpson's Hominidae and Wiley's Hominini), and two groups with the same name had different circumscriptions (Simpson's Hominidae and Wiley's Hominidae).
Especially in recent decades (due to advances in phylogenetics ), taxonomists have named many "nested" taxa (i.e. taxa which are contained inside other taxa). No system of nomenclature attempts to name every clade; this would be particularly difficult with traditional nomenclature since every named taxon must be given a lower rank than any named taxon in which it is nested, so the number of names that can be assigned in a nested set of taxa can be no greater than the number of generally recognized ranks. Gauthier et al. (1988) [ 7 ] suggested that, if Reptilia is assigned its traditional rank of "class", then a phylogenetic classification has to assign the rank of genus to Aves. [ 6 ] In such a classification, all ~12,000 known species of extant and extinct birds would then have to be incorporated into this genus.
Various solutions have been proposed while keeping the rank-based nomenclature codes. Patterson and Rosen (1977) [ 8 ] suggested nine new ranks between family and superfamily in order to be able to classify a clade of herrings, and McKenna and Bell (1997) [ 9 ] introduced a large array of new ranks in order to cope with the diversity of Mammalia; these have not been adopted widely. For botany, the Angiosperm Phylogeny Group , responsible for the currently most widely used classification of flowering plants , chose a different method. They retained the traditional ranks of family and order, considering them to be of value for teaching and studying relationships between taxa, but also introduced named clades without formal ranks. [ 10 ]
For phylogenetic nomenclature, ranks have no bearing on the spelling of taxon names (see e.g. Gauthier (1994) [ 11 ] and the PhyloCode ). Ranks are, however, not altogether forbidden for phylogenetic nomenclature. They are merely decoupled from nomenclature: they do not influence which names can be used, which taxa are associated with which names, and which names can refer to nested taxa. [ 12 ] [ 13 ] [ 14 ]
The principles of traditional rank-based nomenclature are incompatible logically with all taxa being strictly monophyletic. [ 12 ] [ 15 ] Every organism must belong to a genus , for example, so there would have to be a genus for every common ancestor of the mammals and the birds. For such a genus to be monophyletic, it would have to include both the class Mammalia and the class Aves. For rank-based nomenclature, however, classes must include genera, not the other way around. [ citation needed ]
The conflict between phylogenetic and traditional nomenclature represents differing opinions of the metaphysics and epistemology of taxa. For the advocates of phylogenetic nomenclature, a taxon is an individual entity, an entity that may gain and lose attributes as time passes. [ 16 ] Just as a person does not become somebody else when his or her properties change through maturation, senility, or more radical changes like amnesia, the loss of a limb, or a change of sex, so a taxon remains the same entity whatever characteristics are gained or lost. [ 17 ] Given the metaphysical claims regarding unobservable entities made by advocates of phylogenetic nomenclature, critics have referred to their method as origin essentialism. [ 18 ] [ 19 ]
For any individual, there has to be something that associates its temporal stages with each other by virtue of which it remains the same entity. For a person, the spatiotemporal continuity of the body provides the relevant conceptual continuity; from infancy to old age, the body traces a continuous path through the world and it is this continuity, rather than any characteristics of the individual, that associates the baby with the octogenarian. [ 20 ] This is similar to the well-known philosophical problem of the Ship of Theseus . For a taxon, if characteristics are not relevant, then it can only be ancestral relations that associate the Devonian Rhyniognatha hirsti with the modern monarch butterfly as representatives, separated by 400 million years, of the taxon Insecta. [ 17 ] The opposing opinion questions the premise of that syllogism, and argues, from an epistemological perspective, that members of taxa are only recognizable empirically on the basis of their observable characteristics, and hypotheses of common ancestry are results of theoretical systematics, not a priori premises. If there are no characteristics that allow scientists to recognize a fossil as belonging to a taxonomic group, then it is just an unclassifiable piece of rock. [ 21 ]
If ancestry is sufficient for the continuity of a taxon, then all descendants of a taxon member will also be included in the taxon, so all bona fide taxa are monophyletic; the names of paraphyletic groups do not merit formal recognition. As " Pelycosauria " refers to a paraphyletic group that includes some Permian tetrapods but not their extant descendants, it cannot be admitted as a valid taxon name. Again, while not disagreeing with the notion that only monophyletic groups should be named, empiricist systematists counter this ancestry essentialism by pointing out that pelycosaurs are recognized as paraphyletic precisely because they exhibit a combination of synapomorphies and symplesiomorphies indicating that some of them are more closely related to mammals than they are to other pelycosaurs. The material existence of an assemblage of fossils and its status as a clade are not the same issue. Monophyletic groups are worthy of attention and naming because they share properties of interest (synapomorphies) that are the evidence allowing inference of common ancestry. [ 22 ]
Phylogenetic nomenclature is a semantic extension of the general acceptance of the idea of branching during the course of evolution, represented in the diagrams of Jean-Baptiste Lamarck and later writers like Charles Darwin and Ernst Haeckel . [ 24 ] [ 25 ] In 1866, Haeckel for the first time constructed a single relational diagram of all life based on the existing classification of life accepted at the time. This classification was rank-based, but did not contain taxa that Haeckel considered polyphyletic . In it, Haeckel introduced the rank of phylum which carries a connotation of monophyly in its name (literally meaning "stem"). [ citation needed ]
Ever since, it has been debated in which ways and to what extent the understanding of the phylogeny of life should be used as a basis for its classification, with opinions including "numerical taxonomy" ( phenetics ), " evolutionary taxonomy " (gradistics), and "phylogenetic systematics". From the 1960s onwards, rankless classifications were occasionally proposed, but in general the principles and common language of traditional nomenclature have been used by all three schools of thought. [ citation needed ]
Most of the basic tenets of phylogenetic nomenclature (lack of obligatory ranks, and something close to phylogenetic definitions) can, however, be traced to 1916, when Edwin Goodrich [ 26 ] interpreted the name Sauropsida , defined 40 years earlier by Thomas Henry Huxley , to include the birds ( Aves ) as well as part of Reptilia , and invented the new name Theropsida to include the mammals as well as another part of Reptilia. As these taxa were separate from traditional zoological nomenclature, Goodrich did not emphasize ranks, but he clearly discussed the diagnostic features necessary to recognize and classify fossils belonging to the various groups. For example, in regard to the fifth metatarsal of the hind leg, he said "the facts support our view, for these early reptiles have normal metatarsals like their Amphibian ancestors. It is clear, then, that we have here a valuable corroborative character to help us to decide whether a given species belongs to the Theropsidan or the Sauropsidan line of evolution." Goodrich concluded his paper: "The possession of these characters shows that all living Reptilia belong to the Sauropsidan group, while the structure of the foot enables us to determine the affinities of many incompletely known fossil genera, and to conclude that only certain extinct orders can belong to the Theropsidan branch." Goodrich opined that the name Reptilia should be abandoned once the phylogeny of the reptiles was better known. [ citation needed ]
The principle that only clades should be named formally became popular among some researchers during the second half of the 20th century. It spread together with the methods for discovering clades ( cladistics ) and is an integral part of phylogenetic systematics (see above). At the same time, it became apparent that the obligatory ranks that are part of the traditional systems of nomenclature produced problems. Some authors suggested abandoning them altogether, starting with Willi Hennig 's abandonment [ 27 ] of his earlier proposal to define ranks as geological age classes. [ 28 ] [ 29 ]
The first use of phylogenetic nomenclature in a publication can be dated to 1986. [ 30 ] Theoretical papers outlining the principles of phylogenetic nomenclature, as well as further publications containing applications of phylogenetic nomenclature (mostly to vertebrates), soon followed (see Literature section).
In an attempt to avoid a schism among the systematics community, " Gauthier suggested to two members of the ICZN to apply formal taxonomic names ruled by the zoological code only to clades (at least for supraspecific taxa) and to abandon Linnean ranks, but these two members promptly rejected these ideas". [ 31 ] The premise of names in traditional nomenclature is based, ultimately, on type specimens, and the circumscription of groups is considered a taxonomic choice made by the systematists working on particular groups, rather than a nomenclatural decision made based on a priori rules of the Codes on Nomenclature . [ 32 ] The desire to subsume taxonomic circumscriptions within nomenclatural definitions caused Kevin de Queiroz and the botanist Philip Cantino to start drafting their own code of nomenclature, the PhyloCode , to regulate phylogenetic nomenclature. [ citation needed ]
Willi Hennig 's pioneering work provoked a controversy [ 33 ] about the relative merits of phylogenetic nomenclature versus Linnaean taxonomy, or the related method of evolutionary taxonomy , which has continued to the present. [ 34 ] Some of the controversies with which the cladists were engaged had been happening since the 19th century. [ 35 ] While Hennig insisted that different classification schemes were useful for different purposes, [ 36 ] he gave primacy to his own, claiming that the categories of his system had "individuality and reality" in contrast to the "timeless abstractions" of classifications based on overall similarity. [ 37 ]
Formal classifications based on cladistic reasoning are said to emphasize ancestry at the expense of descriptive characteristics. Nonetheless, most taxonomists presently avoid paraphyletic groups whenever they think it is possible within Linnaean taxonomy; polyphyletic taxa have long been unfashionable. Many cladists claim that the traditional Codes of Zoological and Botanical Nomenclature are fully compatible with cladistic methods, and that there is no need to reinvent a system of names that has functioned well for 250 years, [ 38 ] [ 39 ] [ 40 ] but others argue that this system is not as effective as it should be and that it is time to adopt nomenclatural principles that represent divergent evolution as a mechanism that explains much of the known biodiversity. [ 41 ] [ 42 ] In fact, calls to reform biological nomenclature were made even before phylogenetic nomenclature was developed. [ 43 ]
The ICPN , or PhyloCode , is a code of rules and recommendations for phylogenetic nomenclature.
The number of supporters for widespread adoption of the PhyloCode is still small, and it is uncertain how widely it will be followed.
A few publications not cited in the references are cited here. An exhaustive list of publications about phylogenetic nomenclature can be found on the website of the International Society for Phylogenetic Nomenclature .
Phylogenetic profiling is a bioinformatics technique in which the joint presence or joint absence of two traits across large numbers of species is used to infer a meaningful biological connection, such as involvement of two different proteins in the same biological pathway . Along with examination of conserved synteny , conserved operon structure, or "Rosetta Stone" domain fusions , comparing phylogenetic profiles is a designated "post-homology" technique, in that the computation essential to this method begins after it is determined which proteins are homologous to which. A number of these techniques were developed by David Eisenberg and colleagues; phylogenetic profile comparison was introduced in 1999 by Pellegrini, et al. [ 1 ]
Over 2000 species of bacteria , archaea , and eukaryotes are now represented by complete DNA genome sequences. Typically, each gene in a genome encodes a protein that can be assigned to a particular protein family on the basis of homology . For a given protein family, its presence or absence in each genome (in the original, binary, formulation) is represented by either 1 (present) or 0 (absent). Consequently, the phylogenetic distribution of the protein family can be represented by a long binary number with a digit for each genome; such binary representations are easily compared with each other to search for correlated phylogenetic distributions. The large number of complete genomes makes these profiles rich in information. The advantage of using only complete genomes is that the 0 values, representing the absence of a trait, tend to be reliable.
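For illustration, two such binary profiles can be compared with simple metrics such as the Hamming distance or Jaccard similarity; the following sketch uses invented profiles:

```python
# Binary phylogenetic profiles: 1 = family present in a genome, 0 = absent.
# One digit per complete genome, in a fixed genome order (toy data).
profile_a = "1101001110"
profile_b = "1101001100"

# Hamming distance: number of genomes where presence/absence differs.
hamming = sum(x != y for x, y in zip(profile_a, profile_b))

# Jaccard similarity: joint presences over genomes where either is present.
both = sum(x == y == "1" for x, y in zip(profile_a, profile_b))
either = sum("1" in (x, y) for x, y in zip(profile_a, profile_b))
jaccard = both / either if either else 0.0

print(hamming, round(jaccard, 3))  # a small distance suggests a functional link
```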
Closely related species should be expected to have very similar sets of genes. However, changes accumulate between more distantly related species by processes that include horizontal gene transfer and gene loss. Individual proteins have specific molecular functions, such as carrying out a single enzymatic reaction or serving as one subunit of a larger protein complex. A biological process such as photosynthesis , methanogenesis , or histidine biosynthesis may require the concerted action of many proteins. If some protein critical to a process is lost, other proteins dedicated to that process would become useless; natural selection makes it unlikely these useless proteins will be retained over evolutionary time. Therefore, should two different protein families consistently tend to be either present or absent together, a likely hypothesis is that the two proteins cooperate in some biological process.
Phylogenetic profiling has led to numerous discoveries in biology, including previously unknown enzymes in metabolic pathways , transcription factors that bind to conserved regulatory sites , and explanations for roles of certain mutations in human disease . [ 2 ] Improving the method itself is an active area of scientific research because the method itself faces several limitations. First, co-occurrence of two protein families often represents recent common ancestry of two species rather than a conserved functional relationship; disambiguating these two sources of correlation may require improved statistical methods. Second, proteins grouped as homologs may differ in function, or proteins conserved in function may fail to register as homologs; improved methods for tailoring the size of each protein family to reflect functional conservation will lead to improved results.
Tools include PLEX (Protein Link Explorer; now defunct) [ 3 ] and the JGI IMG (Integrated Microbial Genomes) Phylogenetic Profiler, for both single genes and gene cassettes . [ 4 ]
Phylogenetic signal is an evolutionary and ecological term that describes the tendency or the pattern of related biological species to resemble each other more than any other species randomly picked from the same phylogenetic tree . [ 1 ] [ 2 ]
Phylogenetic signal is usually described as the tendency of related biological species to resemble each other more than any other species that is randomly picked from the same phylogenetic tree. [ 1 ] [ 2 ] In other words, phylogenetic signal can be defined as the statistical dependence among species' trait values that is a consequence of their phylogenetic relationships. [ 3 ] The traits (e.g. morphological , ecological, life-history or behavioural traits) are inherited characteristics [ 4 ] – meaning the trait values are usually alike within closely related species, while trait values of distantly related biological species do not resemble each other to such a great degree. [ 5 ] It is often said that traits that are more similar in closely related taxa than in distant relatives exhibit greater phylogenetic signal. On the other hand, some traits are a consequence of convergent evolution and appear more similar in distantly related taxa than in relatives. Such traits show lower phylogenetic signal. [ 4 ]
Phylogenetic signal is a measure closely related to the evolutionary process and the development of taxa . It is thought that a high rate of evolution leads to low phylogenetic signal and vice versa (hence, high phylogenetic signal is usually a consequence of either a low rate of evolution or a stabilizing type of selection ). [ 3 ] Similarly, a high value of phylogenetic signal results in the existence of similar traits between closely related biological species, while increasing evolutionary distance between related species leads to a decrease in similarity. [ 4 ] With the help of phylogenetic signal, we can quantify to what degree closely related biological taxa share similar traits. [ 6 ]
On the other hand, some authors advise against such interpretations of evolutionary rate and process based on estimates of phylogenetic signal. In studies of simple models of quantitative trait evolution, such as homogeneous-rate genetic drift , there appears to be no relation between phylogenetic signal and the rate of evolution. Within other models (e.g. functional constraint, fluctuating selection , phylogenetic niche conservatism , evolutionary heterogeneity etc.), relations between evolutionary rate, evolutionary process and phylogenetic signal are more complex and cannot be easily generalized using the perception of the relation between the two phenomena mentioned above. [ 3 ] Some authors argue that phylogenetic signal is not always strong in each clade and for each trait. It is also not clear whether all possible traits exhibit phylogenetic signal and whether it is measurable. [ 4 ]
Phylogenetic signal is a concept widely used in different ecological and evolutionary studies. [ 7 ]
Among the many questions that can be answered using the concept of phylogenetic signal, the most common ones are: [ 1 ]
Quantifying phylogenetic signal can be done using a range of methods for researching biodiversity in terms of evolutionary relatedness. By measuring phylogenetic signal, one can determine exactly how the studied traits are correlated with the phylogenetic relationships between species. [ 4 ]
Some of the earliest ways of quantifying phylogenetic signal were based on statistical methods such as phylogenetic autocorrelation coefficients, phylogenetic correlograms , and autoregressive models . With the help of these methods, one is able to quantify the value of phylogenetic autocorrelation for a studied trait throughout the phylogeny. [ 13 ] Another method commonly used in studying phylogenetic signal is the so-called Brownian diffusion model of trait evolution, based on the Brownian motion (BM) principle. [ 7 ] [ 14 ] Using the Brownian diffusion model, one can not only estimate values but also compare those measures between various phylogenies. [ 1 ] Phylogenetic signal in continuous traits can be quantified and measured using the K-statistic . [ 3 ] [ 15 ] K takes values from zero to infinity, and a higher value means a greater level of phylogenetic signal. [ 15 ]
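As an illustration, Blomberg's K can be computed from a trait vector and the phylogenetic variance-covariance matrix implied by a tree. The numpy sketch below follows the structure of the published definition (the ratio of observed to Brownian-expected mean squared errors), with invented inputs; it is a sketch rather than a reference implementation:

```python
import numpy as np

def blomberg_k(x, C):
    """Blomberg's K for trait vector x and phylogenetic
    variance-covariance matrix C (shared branch lengths).
    K is about 1 under Brownian motion; K > 1 indicates stronger,
    K < 1 weaker, phylogenetic signal."""
    n = len(x)
    Cinv = np.linalg.inv(C)
    ones = np.ones(n)
    # Generalized least squares estimate of the root state.
    a_hat = (ones @ Cinv @ x) / (ones @ Cinv @ ones)
    r = x - a_hat
    mse0 = (r @ r) / (n - 1)         # mean squared error ignoring phylogeny
    mse = (r @ Cinv @ r) / (n - 1)   # mean squared error given phylogeny
    expected = (np.trace(C) - n / (ones @ Cinv @ ones)) / (n - 1)
    return (mse0 / mse) / expected

# Toy example: a 3-taxon tree's variance-covariance matrix and a trait.
C = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
x = np.array([0.9, 1.1, -0.2])
print(blomberg_k(x, C))
```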
The table below shows the most common indices and associated tests used for analyzing phylogenetic signal. [ 1 ]
A phylogenetic tree , phylogeny or evolutionary tree is a graphical representation which shows the evolutionary history among a set of species or taxa during a specific time. [ 1 ] [ 2 ] In other words, it is a branching diagram or a tree showing the evolutionary relationships among various biological species or other entities based upon similarities and differences in their physical or genetic characteristics. In evolutionary biology, all life on Earth is theoretically part of a single phylogenetic tree, indicating common ancestry . Phylogenetics is the study of phylogenetic trees. The main challenge is to find a phylogenetic tree representing optimal evolutionary ancestry among a set of species or taxa. Computational phylogenetics (also phylogeny inference) focuses on the algorithms involved in finding the optimal phylogenetic tree in the phylogenetic landscape. [ 1 ] [ 2 ]
Phylogenetic trees may be rooted or unrooted. In a rooted phylogenetic tree, each node with descendants represents the inferred most recent common ancestor of those descendants, [ 3 ] and the edge lengths in some trees may be interpreted as time estimates. Each node is called a taxonomic unit. Internal nodes are generally called hypothetical taxonomic units, as they cannot be directly observed. Trees are useful in fields of biology such as bioinformatics , systematics , and phylogenetics . Unrooted trees illustrate only the relatedness of the leaf nodes and do not require the ancestral root to be known or inferred.
The idea of a tree of life arose from ancient notions of a ladder-like progression from lower into higher forms of life (such as in the Great Chain of Being ). Early representations of "branching" phylogenetic trees include a "paleontological chart" showing the geological relationships among plants and animals in the book Elementary Geology , by Edward Hitchcock (first edition: 1840).
Charles Darwin featured a diagrammatic evolutionary "tree" in his 1859 book On the Origin of Species . Over a century later, evolutionary biologists still use tree diagrams to depict evolution because such diagrams effectively convey the concept that speciation occurs through the adaptive and semirandom splitting of lineages.
The term phylogenetic , or phylogeny , derives from the two Ancient Greek words φῦλον ( phûlon ), meaning "race, lineage", and γένεσις ( génesis ), meaning "origin, source". [ 4 ] [ 5 ]
A rooted phylogenetic tree (see two graphics at top) is a directed tree with a unique node — the root — corresponding to the (usually imputed ) most recent common ancestor of all the entities at the leaves of the tree. The root node does not have a parent node, but serves as the parent of all other nodes in the tree. The root is therefore a node of degree 2, while other internal nodes have a minimum degree of 3 (where "degree" here refers to the total number of incoming and outgoing edges). [ citation needed ]
The most common method for rooting trees is the use of an uncontroversial outgroup —close enough to allow inference from trait data or molecular sequencing, but far enough to be a clear outgroup. Another method is midpoint rooting, or a tree can also be rooted by using a non-stationary substitution model . [ 6 ]
Unrooted trees illustrate the relatedness of the leaf nodes without making assumptions about ancestry. They do not require the ancestral root to be known or inferred. [ 8 ] Rooted trees can be generated from unrooted ones by inserting a root. Inferring the root of an unrooted tree requires some means of identifying ancestry. This is normally done by including an outgroup in the input data so that the root is necessarily between the outgroup and the rest of the taxa in the tree, or by introducing additional assumptions about the relative rates of evolution on each branch, such as an application of the molecular clock hypothesis . [ 9 ]
Both rooted and unrooted trees can be either bifurcating or multifurcating. A rooted bifurcating tree has exactly two descendants arising from each interior node (that is, it forms a binary tree ), and an unrooted bifurcating tree takes the form of an unrooted binary tree , a free tree with exactly three neighbors at each internal node. In contrast, a rooted multifurcating tree may have more than two children at some nodes and an unrooted multifurcating tree may have more than three neighbors at some nodes. [ citation needed ]
Both rooted and unrooted trees can be either labeled or unlabeled. A labeled tree has specific values assigned to its leaves, while an unlabeled tree, sometimes called a tree shape, defines a topology only. Some sequence-based trees built from a small genomic locus, such as Phylotree, [ 10 ] feature internal nodes labeled with inferred ancestral haplotypes.
The number of possible trees for a given number of leaf nodes depends on the specific type of tree, but there are always more labeled than unlabeled trees, more multifurcating than bifurcating trees, and more rooted than unrooted trees. The last distinction is the most biologically relevant; it arises because there are many places on an unrooted tree to put the root. For bifurcating labeled trees with n leaves, the total number of rooted trees is the double factorial (2n − 3)!! = 1 × 3 × 5 × ⋯ × (2n − 3).
For bifurcating labeled trees with n leaves, the total number of unrooted trees is (2n − 5)!!. [ 11 ]
Among labeled bifurcating trees, the number of unrooted trees with n leaves is equal to the number of rooted trees with n − 1 leaves. [ 2 ]
The number of rooted trees grows quickly as a function of the number of tips. For 10 tips, there are more than 34 × 10^6 possible bifurcating trees, and the number of multifurcating trees rises faster, with ca. 7 times as many of the latter as of the former.
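These counts are easy to verify numerically; the sketch below computes the double factorials given above and checks the rooted/unrooted correspondence:

```python
from math import prod

def n_rooted(n):    # (2n - 3)!! rooted bifurcating labeled trees, n >= 2
    return prod(range(1, 2 * n - 2, 2))

def n_unrooted(n):  # (2n - 5)!! unrooted bifurcating labeled trees, n >= 3
    return prod(range(1, 2 * n - 4, 2))

print(n_rooted(10))                     # 34459425, i.e. more than 34 million
print(n_unrooted(11) == n_rooted(10))   # True: unrooted n = rooted n - 1
```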
A dendrogram is a general name for a tree, whether phylogenetic or not, and hence also for the diagrammatic representation of a phylogenetic tree. [ 12 ]
A cladogram only represents a branching pattern; i.e., its branch lengths do not represent time or relative amount of character change, and its internal nodes do not represent ancestors. [ 13 ]
A phylogram is a phylogenetic tree that has branch lengths proportional to the amount of character change. [ 14 ]
A chronogram is a phylogenetic tree that explicitly represents time through its branch lengths. [ 15 ]
A Dahlgrenogram is a diagram representing a cross section of a phylogenetic tree. [ citation needed ]
A phylogenetic network is not strictly speaking a tree, but rather a more general graph , or a directed acyclic graph in the case of rooted networks. Phylogenetic networks are used to overcome some of the limitations inherent to trees.
A spindle diagram, or bubble diagram, is often called a romerogram, after its popularisation by the American palaeontologist Alfred Romer . [ 17 ] It represents taxonomic diversity (horizontal width) against geological time (vertical axis) in order to reflect the variation of abundance of various taxa through time.
A spindle diagram is not an evolutionary tree: [ 18 ] the taxonomic spindles obscure the actual relationships of the parent taxon to the daughter taxon [ 17 ] and have the disadvantage of involving the paraphyly of the parental group. [ 19 ] This type of diagram is no longer used in the form originally proposed. [ 19 ]
Darwin [ 20 ] also mentioned that the coral may be a more suitable metaphor than the tree . Indeed, phylogenetic corals are useful for portraying past and present life, and they have some advantages over trees ( anastomoses allowed, etc.). [ 19 ]
Phylogenetic trees built from a nontrivial number of input sequences are constructed using computational phylogenetics methods. Distance-matrix methods such as neighbor-joining or UPGMA , which calculate genetic distance from multiple sequence alignments , are the simplest to implement but do not invoke an evolutionary model. Many sequence alignment programs such as ClustalW also create trees using these simpler distance-based algorithms. Maximum parsimony is another relatively simple method of estimating phylogenetic trees, but it relies on an implicit model of evolution (i.e. parsimony). More advanced methods use the optimality criterion of maximum likelihood , often within a Bayesian framework , and apply an explicit model of evolution to phylogenetic tree estimation. [ 2 ] Identifying the optimal tree using many of these techniques is NP-hard , [ 2 ] so heuristic search and optimization methods are used in combination with tree-scoring functions to identify a reasonably good tree that fits the data.
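As a concrete illustration of the distance-matrix idea, the sketch below is a toy UPGMA implementation in plain Python; it is meant only to show the clustering logic (greedy merging by smallest size-weighted average distance), not to substitute for dedicated phylogenetics software:

```python
def upgma(labels, dist):
    """labels: taxon names; dist[i][j]: distance between taxa i and j.
    Returns (tree, height), where tree is a nested tuple of labels."""
    # Each active cluster: (subtree, number of leaves, height)
    clusters = {i: (labels[i], 1, 0.0) for i in range(len(labels))}
    d = {(i, j): dist[i][j] for i in clusters for j in clusters if i < j}
    next_id = len(labels)
    while len(clusters) > 1:
        (a, b), dab = min(d.items(), key=lambda kv: kv[1])
        ta, na, _ = clusters[a]
        tb, nb, _ = clusters[b]
        # Distance from the merged cluster to each remaining cluster is
        # the leaf-count-weighted average (the defining UPGMA step).
        for c in clusters:
            if c not in (a, b):
                dac = d[tuple(sorted((a, c)))]
                dbc = d[tuple(sorted((b, c)))]
                d[tuple(sorted((next_id, c)))] = (na * dac + nb * dbc) / (na + nb)
        for key in [k for k in d if a in k or b in k]:
            del d[key]
        del clusters[a], clusters[b]
        clusters[next_id] = ((ta, tb), na + nb, dab / 2.0)
        next_id += 1
    (tree, _, height), = clusters.values()
    return tree, height

labels = ["A", "B", "C", "D"]
dist = [[0, 2, 6, 6], [2, 0, 6, 6], [6, 6, 0, 4], [6, 6, 4, 0]]
print(upgma(labels, dist))  # ((('A', 'B'), ('C', 'D')), 3.0)
```

Neighbor-joining follows the same merge-loop structure but uses a rate-corrected merge criterion and, unlike UPGMA, does not implicitly assume a molecular clock.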
Tree-building methods can be assessed on the basis of several criteria: [ 21 ]
Tree-building techniques have also gained the attention of mathematicians. Trees can also be built using T-theory . [ 22 ]
Trees can be encoded in a number of different formats, all of which must represent the nested structure of a tree. They may or may not encode branch lengths and other features. Standardized formats are critical for distributing and sharing trees without relying on graphics output that is hard to import into existing software. Commonly used formats include the Newick, NEXUS, and phyloXML formats.
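As a sketch of the simplest of these formats, the nested parentheses of a Newick string can be produced by a short recursive function; the nested-tuple tree representation here is an assumed convention, and branch lengths (written as ':length' in Newick) are omitted:

```python
def to_newick(tree) -> str:
    """Serialize a nested-tuple tree to a (lengthless) Newick string."""
    if isinstance(tree, tuple):  # internal node: recurse into children
        return "(" + ",".join(to_newick(child) for child in tree) + ")"
    return str(tree)             # leaf: emit its label

tree = (("A", "B"), ("C", "D"))
print(to_newick(tree) + ";")     # ((A,B),(C,D));
```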
Although phylogenetic trees produced on the basis of sequenced genes or genomic data in different species can provide evolutionary insight, these analyses have important limitations. Most importantly, the trees that they generate are not necessarily correct – they do not necessarily accurately represent the evolutionary history of the included taxa. As with any scientific result, they are subject to falsification by further study (e.g., gathering of additional data, analyzing the existing data with improved methods). The data on which they are based may be noisy ; [ 24 ] the analysis can be confounded by genetic recombination , [ 25 ] horizontal gene transfer , [ 26 ] hybridisation between species that were not nearest neighbors on the tree before hybridisation takes place, and conserved sequences .
Also, there are problems in basing an analysis on a single type of character, such as a single gene or protein or only on morphological analysis, because trees constructed from one data source often differ from those constructed from another, unrelated source, and therefore great care is needed in inferring phylogenetic relationships among species. This is most true of genetic material that is subject to lateral gene transfer and recombination , where different haplotype blocks can have different histories. In these types of analysis, the output tree of a phylogenetic analysis of a single gene is an estimate of the gene's phylogeny (i.e. a gene tree) and not the phylogeny of the taxa (i.e. species tree) from which these characters were sampled, though ideally, both should be very close. For this reason, serious phylogenetic studies generally use a combination of genes that come from different genomic sources (e.g., from mitochondrial or plastid vs. nuclear genomes), [ 27 ] or genes that would be expected to evolve under different selective regimes, so that homoplasy (false homology ) would be unlikely to result from natural selection.
When extinct species are included as terminal nodes in an analysis (rather than, for example, to constrain internal nodes), they are considered not to represent direct ancestors of any extant species. Extinct species do not typically contain high-quality DNA .
The range of useful DNA materials has expanded with advances in extraction and sequencing technologies. Development of technologies able to infer sequences from smaller fragments, or from spatial patterns of DNA degradation products, would further expand the range of DNA considered useful.
Phylogenetic trees can also be inferred from a range of other data types, including morphology, the presence or absence of particular types of genes, insertion and deletion events – and any other observation thought to contain an evolutionary signal.
Phylogenetic networks are used when bifurcating trees are not suitable, due to these complications which suggest a more reticulate evolutionary history of the organisms sampled. | https://en.wikipedia.org/wiki/Phylogenetic_tree |
In biology , phylogenetics ( /ˌfaɪloʊdʒəˈnɛtɪks, -lə-/ ) [ 1 ] [ 2 ] [ 3 ] is the study of the evolutionary history of life using observable characteristics of organisms (or genes), which is known as phylogenetic inference . It infers the relationship among organisms based on empirical data and observed heritable traits of DNA sequences, protein amino acid sequences, and morphology . The result is a phylogenetic tree —a diagram depicting the hypothetical relationships among the organisms, reflecting their inferred evolutionary history. [ 4 ]
The tips of a phylogenetic tree represent the observed entities, which can be living taxa or fossils . A phylogenetic diagram can be rooted or unrooted. A rooted tree diagram indicates the hypothetical common ancestor of the taxa represented on the tree. An unrooted tree diagram (a network) makes no assumption about directionality of character state transformation, and does not show the origin or "root" of the taxa in question. [ 5 ]
In addition to their use for inferring phylogenetic patterns among taxa, phylogenetic analyses are often employed to represent relationships among genes or individual organisms. Such uses have become central to understanding biodiversity , evolution, ecology , and genomes .
Phylogenetics is a component of systematics that uses similarities and differences of the characteristics of species to interpret their evolutionary relationships and origins. [ 6 ]
In the field of cancer research, phylogenetics can be used to study the clonal evolution of tumors and molecular chronology , predicting and showing how cell populations vary throughout the progression of the disease and during treatment, using whole genome sequencing techniques. [ 7 ] Because cancer cells reproduce mitotically, the evolutionary processes behind cancer progression are quite different from those in sexually-reproducing species. These differences manifest in several areas: the types of aberrations that occur, the rates of mutation , the high heterogeneity (variability) of tumor cell subclones, and the absence of genetic recombination . [ 8 ] [ 9 ]
Phylogenetics can also aid in drug design and discovery. Phylogenetics allows scientists to organize species and can show which species are likely to have inherited particular traits that are medically useful, such as producing biologically active compounds - those that have effects on the human body. For example, in drug discovery, venom -producing animals are particularly useful. Venoms from these animals produce several important drugs, e.g., ACE inhibitors and Prialt ( Ziconotide ). To find new venoms, scientists turn to phylogenetics to screen for closely related species that may have the same useful traits.
The phylogenetic tree shows venomous species of fish , and related fish that may also contain the trait. Using this approach, biologists are able to identify the fish, snake and lizard species that may be venomous. [ 10 ]
In forensic science , phylogenetic tools are useful to assess DNA evidence for court cases. Phylogenetic analysis has been used in criminal trials to exonerate individuals or to hold them accountable.
HIV forensics uses phylogenetic analysis to track the differences in HIV genes and determine the relatedness of two samples. HIV forensics has limitations; for example, it cannot be the sole proof of transmission between individuals, and phylogenetic analysis that shows transmission relatedness does not indicate the direction of transmission. [ 11 ]
Taxonomy is the identification, naming, and classification of organisms. [ 6 ] The Linnaean classification system developed in the 1700s by Carolus Linnaeus is the foundation for modern classification methods. Linnaean classification traditionally relied on the phenotypes or physical characteristics of organisms to group species. [ 12 ] With the emergence of biochemistry , classifications of organisms are now often based on DNA sequence data or a combination of DNA and morphology. Many systematists contend that only monophyletic taxa should be recognized as named groups. [ 13 ] The degree to which classification depends on inferred evolutionary history differs depending on the school of taxonomy: phenetics ignores phylogenetic speculation altogether, trying to represent the similarity between organisms instead; cladistics (phylogenetic systematics) tries to reflect phylogeny in its classifications by only recognizing groups based on shared, derived characters ( synapomorphies ); evolutionary taxonomy tries to take into account both the branching pattern and "degree of difference" to find a compromise between inferred patterns of common ancestry and evolutionary distinctness.
Usual methods of phylogenetic inference involve computational approaches implementing an optimality criterion and methods of parsimony , maximum likelihood (ML), and MCMC -based Bayesian inference . All these depend upon an implicit or explicit mathematical model describing the relative probabilities of character state transformation within and among the characters observed. [ 14 ]
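To make the notion of an explicit model concrete, the sketch below evaluates the likelihood of a two-sequence alignment under Jukes-Cantor (JC69), the simplest substitution model; the site counts are invented for illustration:

```python
import math

def jc69_loglik(n_same: int, n_diff: int, d: float) -> float:
    """Log-likelihood of an alignment with n_same identical and n_diff
    differing sites, for branch length d (expected substitutions/site)."""
    e = math.exp(-4.0 * d / 3.0)
    p_same = 0.25 + 0.75 * e   # probability a site is unchanged
    p_diff = 0.25 - 0.25 * e   # probability of one specific change
    # Per site: 1/4 prior on the ancestral base times the transition term.
    return (n_same * math.log(0.25 * p_same)
            + n_diff * math.log(0.25 * p_diff))

# Under JC69 the maximum-likelihood distance has a closed form:
# d = -(3/4) ln(1 - 4p/3), with p the proportion of differing sites.
n_same, n_diff = 90, 10
p = n_diff / (n_same + n_diff)
d_hat = -0.75 * math.log(1.0 - 4.0 * p / 3.0)
print(d_hat, jc69_loglik(n_same, n_diff, d_hat))  # d_hat ~ 0.1073
```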
Phenetics , popular in the mid-20th century but now largely obsolete, used distance matrix -based methods to construct trees based on overall similarity in morphology or similar observable traits, which was often assumed to approximate phylogenetic relationships. Neighbor Joining is a phenetic method that is often used for building similarity trees for DNA barcodes .
Prior to 1950, phylogenetic inferences were generally presented as narrative scenarios. Such methods were often ambiguous and lacked explicit criteria for evaluating alternative hypotheses. [ 15 ] [ 16 ] [ 17 ]
In phylogenetic analysis, taxon sampling selects a small group of exemplar taxa to infer the evolutionary history of a clade. [ 18 ] This process is also known as stratified sampling or clade-based sampling. [ 19 ] Judicious taxon sampling is important, given limited resources to compare and analyze every species within a diverse clade, and also given the computational limits of phylogenetic software. [ 18 ] Poor taxon sampling may result in incorrect phylogenetic inferences. [ 19 ] Long branch attraction , in which unrelated branches are incorrectly grouped because of shared, homoplastic nucleotide sites, is a theoretical cause of inaccuracy. [ 18 ]
There is debate over whether increasing the number of taxa sampled improves phylogenetic accuracy more than increasing the number of genes sampled per taxon. Differences in each method's sampling impact the number of nucleotide sites utilized in a sequence alignment, which may contribute to the disagreement. For example, phylogenetic trees constructed utilizing a larger total number of nucleotides are generally more accurate, as supported by the bootstrapping replicability of phylogenetic trees under random sampling.
The graphic presented in Taxon Sampling, Bioinformatics, and Phylogenomics compares the correctness of phylogenetic trees generated using fewer taxa and more sites per taxon (x-axis) against more taxa and fewer sites per taxon (y-axis). With fewer taxa, more genes are sampled among the taxonomic group; with more taxa added to the taxonomic sampling group, fewer genes are sampled. Each method has the same total number of nucleotide sites sampled. Furthermore, the dotted line represents 1:1 accuracy between the two sampling methods. As seen in the graphic, most of the plotted points are located below the dotted line, which indicates a tendency toward increased accuracy when sampling fewer taxa with more sites per taxon. The research verifies this using four different phylogenetic tree construction models: neighbor-joining (NJ), minimum evolution (ME), unweighted maximum parsimony (MP), and maximum likelihood (ML). In the majority of models, sampling fewer taxa with more sites per taxon demonstrated higher accuracy.
Generally, with the alignment of a relatively equal number of total nucleotide sites, sampling more genes per taxon has higher bootstrapping replicability than sampling more taxa. However, unbalanced datasets within genomic databases make increasing the gene comparison per taxon in uncommonly sampled organisms increasingly difficult. [ 19 ]
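The bootstrapping mentioned here resamples alignment columns with replacement, rebuilds a tree from each pseudo-alignment, and reads clade frequencies across replicates as support values. A minimal sketch follows, with the tree-building step left out since any reconstruction method from the section above can be plugged in:

```python
import random

def bootstrap_alignments(alignment, n_replicates, seed=0):
    """Yield column-resampled copies of an alignment.
    alignment: dict mapping taxon -> sequence (all of equal length)."""
    rng = random.Random(seed)
    length = len(next(iter(alignment.values())))
    for _ in range(n_replicates):
        cols = [rng.randrange(length) for _ in range(length)]
        yield {taxon: "".join(seq[c] for c in cols)
               for taxon, seq in alignment.items()}

aln = {"A": "ACGTACGT", "B": "ACGTACGA", "C": "ACGAACGA"}
for replicate in bootstrap_alignments(aln, 3):
    print(replicate)  # in practice, feed each replicate to a tree builder
```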
The term "phylogeny" derives from the German Phylogenie , introduced by Haeckel in 1866, [ 20 ] and the Darwinian approach to classification became known as the "phyletic" approach. [ 21 ] It can be traced back to Aristotle , who wrote in his Posterior Analytics , "We may assume the superiority ceteris paribus [other things being equal] of the demonstration which derives from fewer postulates or hypotheses."
The modern concept of phylogenetics evolved primarily as a disproof of a previously widely accepted theory. During the late 19th century, Ernst Haeckel 's recapitulation theory , or "biogenetic fundamental law", was widely popular. [ 22 ] It was often expressed as " ontogeny recapitulates phylogeny", i.e. the development of a single organism during its lifetime, from germ to adult, successively mirrors the adult stages of successive ancestors of the species to which it belongs. But this theory has long been rejected. [ 23 ] [ 24 ] Instead, ontogeny evolves – the phylogenetic history of a species cannot be read directly from its ontogeny, as Haeckel thought would be possible, but characters from ontogeny can be (and have been) used as data for phylogenetic analyses; the more closely related two species are, the more apomorphies their embryos share.
One use of phylogenetic analysis involves the pharmacological examination of closely related groups of organisms. Advances in cladistics analysis through faster computer programs and improved molecular techniques have increased the precision of phylogenetic determination, allowing for the identification of species with pharmacological potential.
Historically, phylogenetic screens for pharmacological purposes were used in a basic manner, such as studying the Apocynaceae family of plants, which includes alkaloid-producing species like Catharanthus , known for producing vincristine , an antileukemia drug. Modern techniques now enable researchers to study close relatives of a species to uncover either a higher abundance of important bioactive compounds (e.g., species of Taxus for taxol) or natural variants of known pharmaceuticals (e.g., species of Catharanthus for different forms of vincristine or vinblastine). [ 86 ]
Phylogenetic analysis has also been applied to biodiversity studies of fungi. Phylogenetic analysis helps understand the evolutionary history of various groups of organisms, identify relationships between different species, and predict future evolutionary changes. Emerging imagery systems and new analysis techniques allow for the discovery of more genetic relationships in biodiverse fields, which can aid in conservation efforts by identifying rare species that could benefit ecosystems globally.
Whole-genome sequence data from outbreaks or epidemics of infectious diseases can provide important insights into transmission dynamics and inform public health strategies. Traditionally, studies have combined genomic and epidemiological data to reconstruct transmission events. However, recent research has explored deducing transmission patterns solely from genomic data using phylodynamics , which involves analyzing the properties of pathogen phylogenies. Phylodynamics uses theoretical models to compare predicted branch lengths with actual branch lengths in phylogenies to infer transmission patterns. Additionally, coalescent theory , which describes probability distributions on trees based on population size, has been adapted for epidemiological purposes. Another source of information within phylogenies that has been explored is "tree shape." These approaches, while computationally intensive, have the potential to provide valuable insights into pathogen transmission dynamics. [ 87 ]
The structure of the host contact network significantly impacts the dynamics of outbreaks, and management strategies rely on understanding these transmission patterns. Pathogen genomes spreading through different contact network structures, such as chains, homogeneous networks, or networks with super-spreaders, accumulate mutations in distinct patterns, resulting in noticeable differences in the shape of phylogenetic trees, as illustrated in Fig. 1. Researchers have analyzed the structural characteristics of phylogenetic trees generated from simulated bacterial genome evolution across multiple types of contact networks. By examining simple topological properties of these trees, researchers can classify them into chain-like, homogeneous, or super-spreading dynamics, revealing transmission patterns. These properties form the basis of a computational classifier used to analyze real-world outbreaks. Computational predictions of transmission dynamics for each outbreak often align with known epidemiological data.
Different transmission networks result in quantitatively different tree shapes. To determine whether tree shapes captured information about underlying disease transmission patterns, researchers simulated the evolution of a bacterial genome over three types of outbreak contact networks—homogeneous, super-spreading, and chain-like. They summarized the resulting phylogenies with five metrics describing tree shape. Figures 2 and 3 illustrate the distributions of these metrics across the three types of outbreaks, revealing clear differences in tree topology depending on the underlying host contact network.
Super-spreader networks give rise to phylogenies with higher Colless imbalance, longer ladder patterns, lower Δw, and deeper trees than those from homogeneous contact networks. Trees from chain-like networks are less variable, deeper, more imbalanced, and narrower than those from other networks.
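One of the metrics named above, Colless imbalance, is simple to state in code: it sums, over the internal nodes of a bifurcating tree, the absolute difference between the leaf counts of the two child subtrees. The sketch below uses an assumed nested-tuple tree representation:

```python
def leaves(tree) -> int:
    """Count the leaves under a node."""
    return sum(leaves(c) for c in tree) if isinstance(tree, tuple) else 1

def colless(tree) -> int:
    """Sum of |left leaves - right leaves| over all internal nodes."""
    if not isinstance(tree, tuple):
        return 0
    left, right = tree
    return abs(leaves(left) - leaves(right)) + colless(left) + colless(right)

print(colless((("A", "B"), ("C", "D"))))   # 0: perfectly balanced
print(colless(((("A", "B"), "C"), "D")))   # 3: a ladder, as in chain-like spread
```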
Scatter plots can be used to visualize the relationship between two variables in pathogen transmission analysis, such as the number of infected individuals and the time since infection. These plots can help identify trends and patterns, such as whether the spread of the pathogen is increasing or decreasing over time, and can highlight potential transmission routes or super-spreader events. Box plots displaying the range, median, quartiles, and potential outliers of transmission datasets can also be valuable for analyzing pathogen transmission data, helping to identify important features in the data distribution. They may be used to quickly identify differences or similarities in the transmission data. [ 87 ]
Phylogenetic tools and representations (trees and networks) can also be applied to philology , the study of the evolution of oral languages and written text and manuscripts, such as in the field of quantitative comparative linguistics . [ 89 ]
Computational phylogenetics can be used to investigate a language as an evolutionary system. The evolution of human language closely parallels humans' biological evolution, which allows phylogenetic methods to be applied. The concept of a "tree" serves as an efficient way to represent relationships between languages and language splits. It also serves as a way of testing hypotheses about the connections and ages of language families. For example, relationships among languages can be shown by using cognates as characters. [ 90 ] [ 91 ] The phylogenetic tree of Indo-European languages shows the relationships between several of the languages in a timeline, as well as the similarity between words and word order.
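A minimal sketch of the cognate-coding idea follows; the languages and binary codings (1 = the language retains a reflex of that cognate set) are invented purely for illustration, and the resulting distances could be fed to any distance-based tree builder:

```python
cognates = {
    "LangA": [1, 1, 0, 1, 1, 0],
    "LangB": [1, 1, 0, 1, 0, 0],
    "LangC": [0, 1, 1, 0, 0, 1],
}

def hamming(u, v) -> float:
    """Fraction of cognate characters at which two languages differ."""
    return sum(a != b for a, b in zip(u, v)) / len(u)

for x in cognates:
    for y in cognates:
        if x < y:
            print(x, y, hamming(cognates[x], cognates[y]))
# LangA and LangB come out closest, supporting a ((A,B),C) grouping.
```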
There are three types of criticism of the use of phylogenetics in philology. The first argues that languages and species are different entities, so the same methods cannot be used to study both. The second concerns how phylogenetic methods are applied to linguistic data. The third concerns the types of data used to construct the trees. [ 90 ]
Bayesian phylogenetic methods, which are sensitive to how treelike the data are, allow for the reconstruction of relationships among languages, locally and globally. The two main reasons for the use of Bayesian phylogenetics are that (1) diverse scenarios can be included in the calculations and (2) the output is a sample of trees rather than a single tree claimed to be true. [ 92 ]
The same process can be applied to texts and manuscripts. In Paleography , the study of historical writings and manuscripts, texts were replicated by scribes who copied from their source and alterations - i.e., 'mutations' - occurred when the scribe did not precisely copy the source. [ 93 ]
Phylogenetics has been applied to archaeological artefacts such as the early hominin hand-axes, [ 94 ] late Palaeolithic figurines, [ 95 ] Neolithic stone arrowheads, [ 96 ] Bronze Age ceramics, [ 97 ] and historical-period houses. [ 98 ] Bayesian methods have also been employed by archaeologists in an attempt to quantify uncertainty in the tree topology and divergence times of stone projectile point shapes in the European Final Palaeolithic and earliest Mesolithic. [ 99 ] | https://en.wikipedia.org/wiki/Phylogenetics |
Phylogenomics is the intersection of the fields of evolution and genomics . [ 1 ] The term has been used in multiple ways to refer to analysis that involves genome data and evolutionary reconstructions. [ 2 ] It is a group of techniques within the larger fields of phylogenetics and genomics. Phylogenomics draws information by comparing entire genomes, or at least large portions of genomes. [ 3 ] Phylogenetics compares and analyzes the sequences of single genes, or a small number of genes, as well as many other types of data. Four major areas fall under phylogenomics: prediction of gene function, detection of lateral gene transfer, analysis of gene family evolution (duplication and loss), and establishment of evolutionary relationships among species.
The ultimate goal of phylogenomics is to reconstruct the evolutionary history of species through their genomes. This history is usually inferred from a series of genomes by using a genome evolution model and standard statistical inference methods (e.g. Bayesian inference or maximum likelihood estimation ). [ 4 ]
When Jonathan Eisen originally coined phylogenomics , it applied to the prediction of gene function. Before the use of phylogenomic techniques, predicting gene function was done primarily by comparing the gene sequence with the sequences of genes with known functions. When several genes with similar sequences but differing functions are involved, this method alone is ineffective in determining function. A specific example is presented in the paper "Gastronomic Delights: A movable feast". [ 5 ] Gene predictions based on sequence similarity alone had been used to predict that Helicobacter pylori can repair mismatched DNA . [ 6 ] This prediction was based on the fact that this organism has a gene whose sequence is highly similar to genes from other species in the "MutS" gene family, many members of which are known to be involved in mismatch repair. However, Eisen noted that H. pylori lacks other genes thought to be essential for this function (specifically, members of the MutL family). Eisen suggested a solution to this apparent discrepancy – phylogenetic trees of genes in the MutS family revealed that the gene found in H. pylori was not in the same subfamily as those known to be involved in mismatch repair. [ 5 ] Furthermore, he suggested that this "phylogenomic" approach could be used as a general method for predicting the functions of genes. This approach was formally described in 1998. [ 7 ] For reviews of this aspect of phylogenomics, see Brown and Sjölander's Functional classification using phylogenomic inference . [ 8 ] [ 9 ]
Traditional phylogenetic techniques have difficulty establishing differences between genes that are similar because of lateral gene transfer and those that are similar because the organisms shared an ancestor. By comparing large numbers of genes or entire genomes among many species, it is possible to identify transferred genes, since these sequences behave differently from what is expected given the taxonomy of the organism. Using these methods, researchers were able to identify over 2,000 metabolic enzymes obtained by various eukaryotic parasites from lateral gene transfer. [ 10 ]
The comparison of complete gene sets for a group of organisms allows the identification of events in gene evolution such as gene duplication or gene deletion . Often, such events are evolutionarily relevant. For example, multiple duplications of genes encoding degradative enzymes of certain families are a common adaptation in microbes to new nutrient sources. Conversely, loss of genes is important in reductive evolution , such as in intracellular parasites or symbionts. Whole genome duplication events, which potentially duplicate all the genes in a genome at once, are drastic evolutionary events with great relevance in the evolution of many clades, and whose signal can be traced with phylogenomic methods.
Traditional single-gene studies are effective in establishing phylogenetic trees among closely related organisms, but have drawbacks when comparing more distantly related organisms or microorganisms. This is because of lateral gene transfer , convergence , and varying rates of evolution for different genes. By using entire genomes in these comparisons, the anomalies created from these factors are overwhelmed by the pattern of evolution indicated by the majority of the data. [ 11 ] [ 12 ] [ 13 ] Through phylogenomics , it has been discovered that most of the photosynthetic eukaryotes are linked and possibly share a single ancestor. Researchers compared 135 genes from 65 different species of photosynthetic organisms. These included plants , alveolates , rhizarians , haptophytes and cryptomonads . [ 14 ] This has been referred to as the Plants+HC+SAR megagroup . Using this method, it is theoretically possible to create fully resolved phylogenetic trees, and timing constraints can be recovered more accurately. [ 15 ] [ 16 ] However, in practice this is not always the case. Due to insufficient data, multiple trees can sometimes be supported by the same data when analyzed using different methods. [ 17 ] | https://en.wikipedia.org/wiki/Phylogenomics |
Phylogeny in psychoanalysis is the study of the whole family or species of an organism in order to better understand its pre-history. [ 1 ] According to Sigmund Freud , it might have an unconscious influence on a patient. After the possibilities of ontogeny , which is the development of the whole organism viewed in the light of occurrences during the course of its life, [ 2 ] have been exhausted, phylogeny might shed more light on the pre-history of an organism.
The term phylogeny derives from the Greek terms phyle (φυλή) and phylon (φῦλον), denoting “tribe” and “race”; [ 3 ] and the term genetikos (γενετικός), denoting “relative to birth”, from genesis (γένεσις) “origin” and “birth”. [ 4 ] Phylogenetics ( /ˌfaɪloʊdʒɪˈnɛtɪks, -lə-/ [ 5 ] [ 6 ] ) is the study of evolutionary relatedness among groups of organisms (e.g. species , populations ). In biology this is discovered through molecular sequencing data and morphological data matrices ( phylogenetics ), while in psychoanalysis it is discovered by analysis of the memories of a patient and their relatives.
Phylogeography is the study of the historical processes that may be responsible for the past to present geographic distributions of genealogical lineages. This is accomplished by considering the geographic distribution of individuals in light of genetics , particularly population genetics . [ 1 ]
This term was introduced to describe geographically structured genetic signals within and among species . An explicit focus on a species' biogeography /biogeographical past sets phylogeography apart from classical population genetics and phylogenetics . [ 2 ]
Past events that can be inferred include population expansion, population bottlenecks , vicariance , dispersal, and migration . Recently developed approaches integrating coalescent theory or the genealogical history of alleles and distributional information can more accurately address the relative roles of these different historical forces in shaping current patterns. [ 3 ]
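The coalescent machinery mentioned here can be sketched in a few lines: under the standard neutral coalescent, the waiting time for k sampled lineages to coalesce to k-1 is approximately exponential with rate k(k-1)/2, in units of the (haploid) population size. The simulation below is illustrative only:

```python
import random

def coalescent_times(k: int, seed=None):
    """Yield successive inter-coalescence times for k sampled lineages,
    in units of N generations."""
    rng = random.Random(seed)
    while k > 1:
        rate = k * (k - 1) / 2.0
        yield rng.expovariate(rate)
        k -= 1

times = list(coalescent_times(5, seed=1))
print(times)       # four waiting times; their sum is the tree height
print(sum(times))  # expected value is 2 * (1 - 1/5) = 1.6
```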
The term phylogeography was first used by John Avise in his 1987 work Intraspecific Phylogeography: The Mitochondrial DNA Bridge Between Population Genetics and Systematics . [ 4 ] Historical biogeography is a synthetic discipline that addresses how historical, geological, climatic and ecological conditions influenced the past and current distribution of species. As part of historical biogeography, researchers had been evaluating the geographical and evolutionary relationships of organisms years before. Two developments during the 1960s and 1970s were particularly important in laying the groundwork for modern phylogeography; the first was the spread of cladistic thought, and the second was the development of plate tectonics theory . [ 5 ]
The resulting school of thought was vicariance biogeography, which explained the origin of new lineages through geological events like the drifting apart of continents or the formation of rivers. When a continuous population (or species) is divided by a new river or a new mountain range (i.e., a vicariance event), two populations (or species) are created. Paleogeography , geology and paleoecology are all important fields that supply information that is integrated into phylogeographic analyses.
Phylogeography takes a population genetics and phylogenetic perspective on biogeography . In the mid-1970s, population genetic analyses turned to mitochondrial markers. [ 6 ] The advent of the polymerase chain reaction (PCR), the process where millions of copies of a DNA segment can be replicated, was crucial in the development of phylogeography.
Thanks to this breakthrough, the information contained in mitochondrial DNA sequences was much more accessible. Advances in both laboratory methods (e.g. capillary DNA sequencing technology) that allowed easier sequencing of DNA and computational methods that make better use of the data (e.g. employing coalescent theory ) have helped improve phylogeographic inference. [ 6 ] By 2000, Avise generated a seminal review of the topic in book form, in which he defined phylogeography as the study of the "principles and processes governing the geographic distributions of genealogical lineages... within and among closely related species." [ 1 ]
Early phylogeographic work has recently been criticized for its narrative nature and lack of statistical rigor (i.e. it did not statistically test alternative hypotheses). The only real method was Alan Templeton 's Nested Clade Analysis, which made use of an inference key to determine the validity of a given process in explaining the concordance between geographic distance and genetic relatedness. Recent work has adopted a more rigorous statistical approach to phylogeography than was taken initially. [ 2 ] [ 7 ] [ 8 ]
Example
Climate change, such as the glaciation cycles of the past 2.4 million years, has periodically restricted some species into disjunct refugia. These restricted ranges may result in population bottlenecks that reduce genetic variation. Once a reversal in climate change allows for rapid migration out of refugial areas, these species spread rapidly into newly available habitat. A number of empirical studies find genetic signatures of both animal and plant species that support this scenario of refugia and postglacial expansion. [ 3 ] This has occurred both in the tropics (where the main effect of glaciation is increasing aridity , i.e. the expansion of savanna and retraction of tropical rainforest ) [ 9 ] [ 10 ] as well as temperate regions that were directly influenced by glaciers. [ 11 ]
Phylogeography can help in the prioritization of areas of high value for conservation. Phylogeographic analyses have also played an important role in defining evolutionary significant units (ESU), a unit of conservation below the species level that is often defined on unique geographic distribution and mitochondrial genetic patterns. [ 12 ]
A recent study on imperiled cave crayfish in the Appalachian Mountains of eastern North America [ 13 ] demonstrates how phylogenetic analyses along with geographic distribution can aid in recognizing conservation priorities. Using phylogeographical approaches, the authors found that hidden within what was thought to be a single, widely distributed species, an ancient and previously undetected species was also present. Conservation decisions can now be made to ensure that both lineages received protection. Results like this are not an uncommon outcome from phylogeographic studies.
An analysis of salamanders of the genus Eurycea , also in the Appalachians, found that the current taxonomy of the group greatly underestimated species level diversity. [ 14 ] The authors of this study also found that patterns of phylogeographic diversity were more associated with historical (rather than modern) drainage connections, indicating that major shifts in the drainage patterns of the region played an important role in the generation of diversity of these salamanders. A thorough understanding of phylogeographic structure will thus allow informed choices in prioritizing areas for conservation.
The field of comparative phylogeography seeks to explain the mechanisms responsible for the phylogenetic relationships and distribution of different species. For example, comparisons across multiple taxa can clarify the histories of biogeographical regions. [ 15 ] Phylogeographic analyses of terrestrial vertebrates on the Baja California peninsula [ 16 ] and marine fish on both the Pacific and gulf sides of the peninsula [ 15 ] display genetic signatures suggesting that a vicariance event affected multiple taxa during the Pleistocene or Pliocene .
Phylogeography also gives an important historical perspective on community composition. History is relevant to regional and local diversity in two ways. [ 9 ] One, the size and makeup of the regional species pool results from the balance of speciation and extinction . Two, at a local level community composition is influenced by the interaction between local extinction of species’ populations and recolonization. [ 9 ] A comparative phylogenetic approach in the Australian Wet Tropics indicates that regional patterns of species distribution and diversity are largely determined by local extinctions and subsequent recolonizations corresponding to climatic cycles.
Phylogeography integrates biogeography and genetics to study in greater detail the lineal history of a species in context of the geoclimatic history of the planet. An example study of poison frogs living in the South American neotropics (illustrated to the left) is used to demonstrate how phylogeographers combine genetics and paleogeography to piece together the ecological history of organisms in their environments. Several major geoclimatic events have greatly influenced the biogeographic distribution of organisms in this area, including the isolation and reconnection of South America , the uplift of the Andes, an extensive Amazonian floodbasin system during the Miocene, the formation of Orinoco and Amazon drainages , and dry−wet climate cycles throughout the Pliocene to Pleistocene epochs. [ 17 ]
Using this contextual paleogeographic information (the paleogeographic time series is shown in panels A-D), the authors of this study [ 17 ] proposed a null hypothesis that assumes no spatial structure and two alternative hypotheses involving dispersal and other biogeographic constraints (the hypotheses are shown in panels E-G, listed as SM0, SM1, and SM2). The phylogeographers visited the ranges of each frog species to obtain tissue samples for genetic analysis; researchers can also obtain tissue samples from museum collections.
The evolutionary history and relations among different poison frog species is reconstructed using phylogenetic trees derived from molecular data. The molecular trees are mapped in relation to paleogeographic history of the region for a complete phylogeographic study. The tree shown in the center of the figure has its branch lengths calibrated to a molecular clock , with the geological time bar shown at the bottom. The same phylogenetic tree is duplicated four more times to show where each lineage is distributed and is found (illustrated in the inset maps below, including Amazon basin, Andes, Guiana-Venezuela, Central America-Chocó). [ 17 ]
The combination of techniques used in this study exemplifies more generally how phylogeographic studies proceed and test for patterns of common influence. Paleogeographic data establish geological time records for historical events that explain the branching patterns in the molecular trees. This study rejected the null model and found that all extant Amazonian poison frog species primarily stem from fourteen lineages that dispersed into their respective areas after the Miocene floodbasin receded. [ 17 ] Regionally based phylogeographic studies of this type are repeated for different species as a means of independent testing. Phylogeographers find broadly concordant and repeated patterns among species in most regions of the planet, which they attribute to the common influence of paleoclimatic history. [ 1 ]
Phylogeography has also proven to be useful in understanding the origin and dispersal patterns of our own species, Homo sapiens . Based primarily on observations of the skeletal remains of ancient humans and estimations of their age, anthropologists proposed two competing hypotheses about human origins.
The first hypothesis is referred to as the Out-of-Africa with replacement model, which contends that the last expansion out of Africa, around 100,000 years ago, resulted in modern humans displacing all previous Homo spp. populations in Eurasia that were the result of an earlier wave of emigration out of Africa. The multiregional scenario claims that individuals from the recent expansion out of Africa intermingled genetically with the human populations of more ancient African emigrations.
A phylogeographic study that uncovered a Mitochondrial Eve that lived in Africa 150,000 years ago provided early support for the Out-of-Africa model. [ 18 ]
While this study had its shortcomings, it received significant attention both within scientific circles and from a wider audience. A more thorough phylogeographic analysis that used ten different genes instead of a single mitochondrial marker indicated that at least two major expansions out of Africa after the initial range extension of Homo erectus played an important role in shaping the modern human gene pool and that recurrent genetic exchange is pervasive. [ 19 ] These findings strongly demonstrated Africa's central role in the evolution of modern humans, but also indicated that the multiregional model had some validity. These studies have largely been supplanted by population genomic studies that use orders of magnitude more data.
In light of these recent data from the 1000 genomes project, genomic-scale SNP databases sampling thousands of individuals globally and samples taken from two non-Homo sapiens hominins (Neanderthals and Denisovans), the picture of human evolution has become more resolved and more complex, involving possible Neanderthal and Denisovan admixture, admixture with archaic African hominins, and a Eurasian expansion into the Australasian region that predates the standard out-of-Africa expansion.
Viruses are informative in understanding the dynamics of evolutionary change due to their rapid mutation rate and fast generation time. [ 20 ] Phylogeography is a useful tool in understanding the origins and distributions of different viral strains. A phylogeographic approach has been taken for many diseases that threaten human health, including dengue fever , rabies , influenza and HIV . [ 20 ] Similarly, a phylogeographic approach will likely play a key role in understanding the vectors and spread of avian influenza ( HPAI H5N1 ), demonstrating the relevance of phylogeography to the general public.
Phylogeographic analysis of ancient and modern languages has been used to test whether Indo-European languages originated in Anatolia or in the steppes of Central Asia. [ 21 ] Language evolution was modeled in terms of the gain and loss of cognate words in each language over time, to produce a cladogram of related languages. Combining those data with known geographic ranges of each language produced strong support for an Anatolian origin approximately 8000–9500 years ago. | https://en.wikipedia.org/wiki/Phylogeography |
Phylomedicine is an emerging discipline at the intersection of medicine , genomics , and evolution . It focuses on the use of evolutionary knowledge to predict functional consequences of mutations found in personal genomes and populations. [ 1 ] [ 2 ]
Modern technologies have made genome sequencing accessible, and biomedical scientists have profiled genomic variation in apparently healthy individuals and individuals diagnosed with a variety of diseases. This work has led to the discovery of thousands of disease-associated genes and genetic variants, elucidating a more robust picture of the amount and types of variations found within and between humans. [ 3 ] [ 4 ]
Proteins are encoded in genomic DNA by exons , and these comprise only ~1% of the human genomic sequence (aka the exome ). The exome of an individual carries about 6,000–10,000 amino-acid -altering nSNVs , and many of these variants are already known to be associated with more than 1000 diseases. [ 5 ] Although only a small fraction of these personal variants are likely to impact health, the sheer volume of known genomic and exomic variants is too large to apply traditional laboratory or experimental techniques to explore their functional consequences. Translating a personal genome into useful phenotypic information (e.g. relating to predisposition to disease, differential drug response, or other health concerns), is therefore a grand challenge in the field of genomic medicine .
Fortunately, results from the natural experiment of molecular evolution are recorded in the genomes of humans and other living species. All genomic variation is subjected to the process of natural selection which generally reduces mutations with negative effects on phenotype over time. With the availability of a large number of genomes from the tree of life, evolutionary conservation of individual genomic positions and the sets of mutations permitted among species informs the functional and health consequences of these mutations.
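A common first-pass measure of such conservation is the per-column Shannon entropy of a multiple alignment: a column that has stayed identical across species scores 0 and is a strong candidate for functional constraint, so a mutation there is more likely to be deleterious. The sketch below uses invented sequences:

```python
import math

def column_entropy(column: str) -> float:
    """Shannon entropy (bits) of one alignment column; 0 = fully conserved."""
    n = len(column)
    counts = {c: column.count(c) for c in set(column)}
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

alignment = ["ACGTA",   # human     (toy sequences,
             "ACGTT",   # mouse      invented for
             "ACGAC",   # chicken    illustration)
             "ACGAG"]   # fish
for i, col in enumerate(zip(*alignment)):
    print(i, "".join(col), round(column_entropy("".join(col)), 2))
# Columns 0-2 score 0.0: variants there would be flagged as candidates.
```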
Consequently, phylomedicine has emerged as an important discipline at the intersection of molecular evolution and genomic medicine with a focus on understanding the inherited component of human disease and health. Examples include studies of retinal disease, auditory diseases, and common diseases more generally. [ 6 ] [ 7 ] [ 8 ] Phylomedicine expands the purview of contemporary evolutionary medicine to use evolutionary patterns beyond short-term history (e.g. populations within a species) to the long-term evolutionary history of multispecies genomics . | https://en.wikipedia.org/wiki/Phylomedicine |
Phyloscan [ 1 ] [ 2 ] is a web service for DNA sequence analysis that is free and open to all users (without login requirement). For locating matches to a user-specified sequence motif for a regulatory binding site , Phyloscan provides a statistically sensitive scan of user-supplied mixed aligned and unaligned DNA sequence data. Phyloscan's strength is that it brings together | https://en.wikipedia.org/wiki/Phyloscan |
A phylostratum is a set of genes from an organism that coalesce to founder genes with a common phylogenetic origin. [ 1 ] [ 2 ] [ 3 ] The term was coined by Domazet and Tautz to describe gene origination. [ 3 ]
| https://en.wikipedia.org/wiki/Phylostratum
In the field of microbiome research, a group of species is said to show a phylosymbiotic signal if the degree of similarity between the species' microbiomes recapitulates to a significant extent their evolutionary history . [ 1 ] In other words, a phylosymbiotic signal among a group of species is evident if their microbiome similarity dendrogram shows significant similarity to the hosts' phylogenetic tree. For the analysis of the phylosymbiotic signal to be reliable, environmental differences that could shape the host microbiome should be either eliminated or accounted for.
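A standard way to test for such a signal is a Mantel-style permutation test that correlates pairwise host phylogenetic distances with pairwise microbiome dissimilarities and asks whether the observed correlation exceeds what label-shuffling produces by chance. The sketch below uses invented toy matrices:

```python
import random

def pearson(xs, ys) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def mantel(host_d, micro_d, n_perm=999, seed=0):
    """Correlate two symmetric distance matrices; permute taxa for a p-value."""
    rng = random.Random(seed)
    taxa = list(range(len(host_d)))
    pairs = [(i, j) for i in taxa for j in taxa if i < j]
    xs = [host_d[i][j] for i, j in pairs]
    obs = pearson(xs, [micro_d[i][j] for i, j in pairs])
    hits = 0
    for _ in range(n_perm):
        perm = taxa[:]
        rng.shuffle(perm)
        if pearson(xs, [micro_d[perm[i]][perm[j]] for i, j in pairs]) >= obs:
            hits += 1
    return obs, (hits + 1) / (n_perm + 1)  # one-sided p-value

host = [[0, 1, 4, 4], [1, 0, 4, 4], [4, 4, 0, 2], [4, 4, 2, 0]]
micro = [[0, .1, .8, .7], [.1, 0, .9, .8], [.8, .9, 0, .2], [.7, .8, .2, 0]]
print(mantel(host, micro))  # high r with small p suggests phylosymbiosis
```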
One plausible mechanistic explanation for such a phenomenon is the rapid evolution of host immune genes in a continuous arms race with members of the host's microbiome.
Across the animal kingdom there are many notable examples of phylosymbiosis. For instance, in non-human primates it was found that host evolutionary history had a substantially greater influence on the gut microbiome than either host dietary niche or geographic location. [ 2 ] It was speculated that changes in gut physiology within the evolutionary history of non-human primates was the primary reason. This finding was particularly interesting as it contradicted previous research which reported that dietary niche was a strong factor in determining the gut microbiome of mammals. [ 3 ] [ 4 ] [ 5 ]
The plant kingdom has also shown strong phylosymbiotic relationships, with notable examples in Malus (apple family) [ 6 ] and Poaceae (grass family) [ 7 ] species, where endophytic communities mirror host evolutionary relationships. During plant domestication, three scenarios of phylosymbiotic patterns have been observed: reduction in microbial diversity, increased diversity through hybridization, or maintenance of existing diversity levels. In the Malus case, wild and domesticated cultivars alike harbored endophytic communities that corresponded to their phylogenetic relationships. [ 6 ]
The concept of phylosymbiosis extends beyond simple correlation between host phylogeny and microbiome composition, encompassing deeper evolutionary implications. [ 8 ] One crucial aspect is the distinction between phylosymbiosis and coevolution - while phylosymbiosis describes a pattern of association between host evolutionary relationships and microbiome communities, it does not necessarily imply coevolution between hosts and their microbes. [ 9 ] This distinction is particularly important because phylosymbiotic patterns can arise through various ecological and evolutionary processes, not all of which involve direct coevolutionary relationships. [ 10 ]
Vellend's four principles - selection, drift, speciation , and dispersal - provide a comprehensive framework for understanding how phylosymbiotic patterns emerge in host-microbiome relationships. [ 11 ] Selection through host physiology and immune responses shapes microbial communities, while drift influences random population changes, speciation leads to host-specific adaptations, and dispersal affects microbiome transmission between hosts. These principles help explain why phylosymbiosis can exist without strict coevolution, as the observed patterns may result from any combination of these ecological processes rather than requiring direct evolutionary relationships between hosts and their microbiomes. Recent research has revealed that phylosymbiosis can be disrupted by various factors, including host diet, environmental changes, and disease states. [ 12 ] This disruption provides valuable insights into the stability and resilience of host-microbe associations. For instance, studies in Nasonia wasps have demonstrated that hybrid hosts often show broken phylosymbiotic patterns, suggesting that genetic incompatibilities between hosts and microbes can emerge during speciation. [ 13 ] This observation has led to the hypothesis that phylosymbiosis might contribute to speciation through microbiome-mediated reproductive isolation. [ 14 ]
The implications of phylosymbiosis for host health and adaptation are particularly relevant in the context of global change . [ 15 ] As species face novel environmental challenges, understanding how phylosymbiotic relationships influence host adaptation becomes increasingly important. [ 16 ] Research has shown that microbiomes can facilitate host adaptation to new environments, and the strength of phylosymbiotic relationships may influence this adaptive potential. [ 17 ] This has led to growing interest in using phylosymbiotic principles to inform conservation strategies and predict species' responses to environmental change.
Methodologically, the study of phylosymbiosis has been revolutionized by advances in high-throughput sequencing technologies and bioinformatics tools. [ 18 ] Modern analyses often employ sophisticated statistical approaches to account for the compositional nature of microbiome data and the complex hierarchical structure of host-microbe associations. [ 19 ] These methods include techniques such as distance-based redundancy analysis, structural equation modeling, and phylogenetic generalized linear mixed models, which help researchers disentangle the various factors contributing to phylosymbiotic patterns. [ 20 ]
In agricultural contexts, phylosymbiosis has emerged as a valuable framework for crop improvement and pest management. [ 21 ] Understanding phylosymbiotic patterns in crop species can guide strategies for enhancing plant resistance to pathogens and improving nutrient uptake efficiency. [ 22 ] Similarly, in livestock management, phylosymbiotic insights are being used to optimize animal health through microbiome manipulation, taking into account the evolutionary relationships between host species and their associated microbial communities. [ 23 ]
Research into phylosymbiotic relationships has revealed that human genetic ancestry influences microbiome composition, which in turn affects drug metabolism and therapeutic outcomes. [ 24 ] This understanding has led to: | https://en.wikipedia.org/wiki/Phylosymbiosis |
Physalaemin is a tachykinin peptide obtained from the Physalaemus frog , closely related to substance P . Its structure was first elucidated in 1964. [ 1 ] [ 2 ]
Like all tachykinins, physalaemin is a sialagogue (increases salivation ) and a potent vasodilator with hypotensive effects. [ 3 ]
Physalaemin (PHY) is known to take on both a linear and a helical three-dimensional structure. Grace et al. (2010) have shown that in aqueous environments, PHY preferentially takes on the linear conformation, whereas in an environment that simulates a cellular membrane, PHY takes on a helical conformation from the Pro 4 residue to the C-terminus. This helical conformation is essential for the binding of PHY to neurokinin-1 (NK1) receptors. Consensus sequences between Substance P (a mammalian tachykinin and agonist of NK1) and PHY have been used to confirm that the helical conformation is necessary for PHY to bind to NK1. [ 4 ]
Not only is PHY closely related to Substance P (SP), but it also has a higher affinity for the mammalian neurokinin receptors that Substance P can bind to. Researchers can make use of this behavior of PHY to study smooth muscle - a tissue where NK1 can be found. Shiina et al. (2010) used PHY to show that tachykinins as a whole can cause the longitudinal contraction of smooth muscle in esophageal tissue. [ 5 ]
Singh and Maji made use of PHY's similarity to SP along with its sequence similarity to amyloid β-peptide 25-35 [Aβ(25-35)]. Despite its sequence similarity to SP, Singh and Maji showed that PHY had distinct amyloid-forming capabilities . Under artificially elevated concentrations of trifluoroethanol (TFE) and a short incubation time, PHY was able to form amyloid fibrils. Fibrils originating from tachykinins like PHY were also shown to reduce the neurotoxicity of other amyloid fibers associated with amyloid-induced diseases such as Alzheimer's disease. [ 6 ]
| https://en.wikipedia.org/wiki/Physalaemin
Physica Status Solidi , often stylized physica status solidi or pss , [ 1 ] is a family of international peer-reviewed , scientific journals , publishing research on all aspects of solid state physics , and materials science . It is owned and published by Wiley–VCH . These journals publish over 2000 articles per year, [ 2 ] making it one of the largest international publications in condensed matter physics . The current editor in chief is Stefan Hildebrandt at the Editorial Office based in Berlin . [ 3 ] This office also manages the peer-review process.
Physica Status Solidi was founded by Karl Wolfgang Böer (then at Humboldt University of Berlin ) in East Berlin and published its first issue in July 1961. Shortly after the journal was founded, the construction of the Berlin Wall in August 1961 deepened the separation between scientists from the Eastern and Western blocs . Throughout the Cold War , Physica Status Solidi maintained political independence and English as its publication language; as such, it became a major platform for scientists behind the Iron Curtain to disseminate their results in the Western world (and vice versa) and thus a forum of international exchange for scientists from the East and the West. [ 4 ]
In 1970 the journal was divided into series A ( Applications and Materials Science ) and B ( Basic Solid State Physics ). [ 5 ] [ 6 ] [ 7 ] In 2003, series C ( Current Topics in Solid State Physics ) was created to accommodate the publication of conference proceedings . [ 8 ]
Following the reunification of Germany in 1990 the journal's original publisher Akademie Verlag became part of the VCH Publishing group, which again was merged into John Wiley & Sons , leading to the formation of Wiley-VCH Verlag in 1997.
A fourth series, RRL ( Rapid Research Letters ), was launched in 2007 to publish short articles of broader and immediate interest to the solid state physics and materials science community. Publication times are typically two weeks from submission to online publication. [ 9 ] | https://en.wikipedia.org/wiki/Physica_Status_Solidi |
Physica speculatio [ 1 ] is a text of scientific character written by Alonso de la Vera Cruz in 1557 in the capital of New Spain . It was the first work published in the Americas that specifically addressed the study of physics, and it was written to teach the students of the Real University of Mexico.
It introduced the main theoretical concepts of geocentric astronomy and referenced the heliocentric model . [ 2 ]
Fray Alonso de la Vera Cruz published in the capital of New Spain a Course of Arts , constituted of three volumes in Latin. The first appeared in 1553 under the title Recognitio Summularum ; its purpose was to help the students of the Real University of Mexico understand philosophy through formal logic. A year later appeared the second, called Dialectica Resolutio , a continuation of the first. The last was Physica speculatio .
Four editions were made, the last three of which were abbreviated versions of the Mexican edition, produced for the use of students at Salamanca.
The Physica speculatio has as its object the study or "investigation" ( speculatio ) and general exposition of subjects of the physics of nature ( Physica ), treated by Fray Alonso de la Vera Cruz essentially from the philosophical perspective characteristic of Aristotle and traditional in the Middle Ages.
What can be considered the first part addresses the subjects treated by Aristotle in the eight books of the Physics : the essence of the physical or natural being, motion and the infinite, extension, the continuous, space, time, the first mover, and so on. The second part treats the generation and corruption of living beings, mixed and composite beings, the primary qualities, and the elements and their properties. In the third part he expounds the doctrines on meteors, discussing the stars and their influence on humans, the three regions of the air or atmosphere, comets, tides, lightning, and many other atmospheric phenomena. The fourth part Fray Alonso devotes to commentary on Aristotle's De Anima . The Physica speculatio closes with some reflections on Aristotle's treatise De Caelo . [ 3 ]
It consists of 400 printed pages set in two columns, which amount to 900 sheets in modern transcription and nearly 1,200 in Spanish translation.
It contains the following writings:
They are titled exactly like the Aristotelian works.
It also contains, as an appendix, the Tractatus de Sphera written by the Italian mathematician and astronomer Campanus of Novara in the 13th century and first printed in 1518.
The main divisions into Books generally follow those of the corresponding works of Aristotle.
Each book is divided into Speculations (particular studies), which can be understood as chapters.
The writing follows the scholastic method : it first proposes the opinions or negative affirmations contrary to the thesis to be sustained, and afterwards the positive ones, with their foundations and explanations.
In computing, Physical-to-Virtual (" P2V " or " p-to-v " [ 1 ] ) is the process of decoupling and migrating a physical server 's operating system (OS), applications, and data from that physical server to a virtual-machine guest hosted on a virtualized platform .
In a manual migration, the user creates a virtual machine in a virtual host environment and copies all of the OS, application, and data files over from the source machine.
In a semi-automated migration, a tool assists the user in moving the server from its physical state to a virtual machine.
In a fully automated migration, a tool migrates the server over the network without any assistance from the user. A minimal sketch of the disk-conversion step common to these approaches follows.
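As an illustration only: assuming a raw block-level image of the source disk has already been captured (for example with dd), the open-source qemu-img tool can convert it into a virtual disk. The paths below are hypothetical, and a real migration would also need to adapt drivers and boot configuration:

```python
import subprocess

def convert_disk(raw_image: str, vm_disk: str) -> None:
    """Convert a raw disk image captured from a physical server into a
    qcow2 virtual disk usable by a hypervisor such as QEMU/KVM."""
    subprocess.run(
        ["qemu-img", "convert",
         "-f", "raw",    # input format: raw block-level copy
         "-O", "qcow2",  # output format: QEMU copy-on-write image
         raw_image, vm_disk],
        check=True,      # raise CalledProcessError if the conversion fails
    )

# Hypothetical paths for the captured image and the target VM disk.
convert_disk("/backups/server01.img", "/var/lib/libvirt/images/server01.qcow2")
```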
| https://en.wikipedia.org/wiki/Physical-to-Virtual |
Physical Chemistry Chemical Physics is a weekly peer-reviewed scientific journal publishing research and review articles on any aspect of physical chemistry , chemical physics , and biophysical chemistry . It is published by the Royal Society of Chemistry on behalf of eighteen participating societies. The editor-in-chief is Anouk Rijs ( Vrije Universiteit Amsterdam ). [ 1 ]
The journal was established in 1999 as the result of a merger between Faraday Transactions and a number of other physical chemistry journals published by different societies.
The journal is run by an Ownership Board, on which all the member societies have equal representation. The eighteen participating societies are:
The journal publishes the following types of articles:
The journal is abstracted and indexed in:
According to the Journal Citation Reports , the journal has a 2021 impact factor of 3.945. [ 2 ] | https://en.wikipedia.org/wiki/Physical_Chemistry_Chemical_Physics |
In transportation, the Physical Internet refers to the combination of digital transportation networks that are being deployed to replace actual road networks.
The Physical Internet Initiative promoted research efforts around 2011.
Since around 2018, the initiative's site has pointed to a blog promoting the marketing term big data . [ 1 ]
In logistics , the Physical Internet is an open global logistics system founded on physical, digital, and operational interconnectivity, through encapsulation , interfaces and protocols . [ 2 ] The Physical Internet is intended to replace current logistical models. [ 3 ] [ 4 ]
Benoit Montreuil organized a project called the Physical Internet Initiative at the Université Laval in Canada around 2011. [ 2 ] It applied concepts from internet data transfer to real-world shipping processes. [ 3 ] [ 5 ] The project had funding from the National Science Foundation as well as contributions from MHIA and CICMHE. [ 6 ]
The Internet does not transmit information: it transmits packets with embedded information. These packets are designed for ease of use in the Digital Internet. The information within a packet is encapsulated and is not dealt with by the Internet itself. The packet header contains all the information required for identifying the packet and routing it correctly to its destination. A packet is constructed for a specific transmission and is dismantled once it has reached its destination. The Digital Internet is based on a protocol structuring data packets independently from equipment. In this way, data packets can be processed by different systems and through various networks: modems, copper wires, fiber optic wires, routers, etc.; local area networks, wide area networks, etc.; intranets, extranets, virtual private networks, etc. [ 7 ]
The Physical Internet does not manipulate physical goods directly, whether they are materials, parts, merchandise or products. It manipulates exclusively containers that are explicitly designed for the Physical Internet and that encapsulate physical goods within them. [ 7 ]
The vision of the Physical Internet involves encapsulating goods in smart, ecofriendly and modular containers ranging from the size of a maritime container to the size of a small box. It thus generalizes the maritime container, which successfully supported globalization and shaped ships and ports, and extends containerization to logistics services in general. The Physical Internet moves the border of the private space to the inside of the container instead of the warehouse or the truck. These modular containers will be continuously monitored and routed, exploiting their digital interconnection through the Internet of Things .
The Physical Internet encapsulates physical objects in physical packets or containers, hereafter termed π-containers so as to differentiate them from current containers. These π-containers are world-standard, smart, green and modular containers. They are notably modularized and standardized worldwide in terms of dimensions, functions and fixtures. [ 7 ]
The π-containers are key elements enabling the interoperability necessary for the adequate functioning of the Physical Internet. They must be designed to facilitate their handling and storage in the physical nodes of the Physical Internet, as well as their transport between these nodes and of course to protect goods. They act as packets in the digital Internet. They have an information part analogous to the header in the digital Internet. The π-containers encapsulate their content, making the contents irrelevant to the Physical Internet. [ 8 ]
From a physical perspective, π-containers must be easy to handle, store, transport, seal, snap to a structure, interlock, load, unload, build and dismantle.
From an informational perspective, each π-container has a unique worldwide identifier, such as the MAC address in the Ethernet network and the digital Internet. This identifier is attached to each π-container both physically and digitally to ensure identification robustness and efficiency. A smart tag is attached to each π-container to act as its representing agent. It contributes to ensuring π-container identification, integrity, routing, conditioning, monitoring, traceability and security through the Physical Internet. Such smart tagging enables the distributed automation of a wide variety of handling, storage and routing operations. In order to deal adequately with privacy and competitiveness concerns within the Physical Internet, the smart tag of a π-container strictly restricts information access to pertinent parties. Only the information necessary for routing π-containers through the Physical Internet is accessible to everyone. [ 8 ]
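These informational requirements can be illustrated with a minimal Python sketch; the class and field names below are hypothetical illustrations, not part of any Physical Internet standard:

```python
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class PiContainer:
    """Hypothetical model of the informational part of a π-container."""
    destination: str      # routing information: public
    dimensions_m: tuple   # modular outer dimensions: public
    contents_note: str = field(default="", repr=False)     # encapsulated goods
    uid: str = field(default_factory=lambda: uuid4().hex)  # unique worldwide id

    def public_header(self) -> dict:
        """Expose only what any node needs to route the container."""
        return {"uid": self.uid,
                "destination": self.destination,
                "dimensions_m": self.dimensions_m}

    def contents(self, authorized: bool) -> str:
        """Contents are readable only by pertinent (authorized) parties."""
        if not authorized:
            raise PermissionError("contents are encapsulated")
        return self.contents_note

box = PiContainer(destination="NODE-GVA-07", dimensions_m=(1.2, 0.8, 0.6),
                  contents_note="consumer goods")
print(box.public_header())  # routable by any node in the network
```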
A number of academic research projects were funded using this term.
The European Commission funded a project called Modular Logistics Units in Shared Co-modal Networks (Modulushca) from 1 October 2012 to 31 January 2016. [ 9 ] Modulushca studied interconnected logistics at the European level, in coordination with North American partners and the international Physical Internet Initiative.
The project studied interconnected logistics based on containerization for fast-moving consumer goods (FMCG) supply chains. [ 10 ]
A European Union project called New ICT infrastructure and reference architecture to support Operations in future PI Logistics NETworks (ICONET) explored PI network services that optimise cargo flows against throughput, cost and environmental performance. [ 11 ]
ICONET's main research focus was collaborative planning of flexible logistic chains, by applying some popular computer network concepts of the time.
ICONET was funded by the Innovation and Networks Executive Agency in Brussels from 1 September 2018 to 28 February 2021. [ 11 ]
Establishing the efficiency gains that the Physical Internet offers logistics systems was the focus of a research project funded by the U.S. National Science Foundation (NSF) and conducted in the Center for Excellence in Logistics and Distribution (CELDi).
The first phase report was published in 2012. [ 12 ]
The Fast Track to the Physical Internet (Atropine) project aimed to demonstrate a Physical Internet region in Upper Austria.
The project was managed by the Logistikum of the University of Applied Sciences Upper Austria from December 2015 to May 2018. It was funded by the Upper Austrian government program 'Innovatives Oberösterreich 2020'. [ 13 ] Another research project in Austria was called Go2PI. [ 14 ]
The Alliance for Logistics Innovation through Collaboration in Europe (ALICE) supported some projects from 2018 to 2020. [ 15 ]
The aim of the publicly funded project is to develop a standardized interface for vehicles as edge devices that implements the concepts of the GAIA-X initiative and the dynamic reconfiguration of vehicles, using the Road-Based Physical Internet (RBPI) as an example. [ 16 ] Given the measures that must be carried out on vehicles for this purpose, the RBPI can be expected to be partially implemented by 2023. [ 17 ]
Car manufacturers also conduct research on the road-based Physical Internet. [ 18 ] OEMs, in particular Mercedes-Benz, work on architectures to integrate vehicles as edge devices into the Physical Internet ecosystem. [ 19 ] The aim here is to create a cross-manufacturer standard to optimize the utilization of vehicle cargo spaces. The ecosystem of cloud storage, vehicles and freight components is called the Road-Based Physical Internet (RBPI). [ 20 ] | https://en.wikipedia.org/wiki/Physical_Internet |
The Physical and Theoretical Chemistry Laboratory ( PTCL ) is a major chemistry laboratory at the University of Oxford , England . It is located in the main Science Area of the university on South Parks Road . Previously it was known as the Physical Chemistry Laboratory . [ 1 ]
The original Physical Chemistry Laboratory was built in 1941 [ 2 ] and at that time also housed the inorganic chemistry laboratory. It replaced the Balliol-Trinity Laboratories . [ 3 ] The east wing of the building was completed in 1959 and inorganic chemistry, already in its own building on South Parks Road , then became a separate department in 1961. In 1972, the Department of Theoretical Chemistry was established in a house on South Parks Road, and in 1994, the amalgamation of the physical and theoretical chemistry departments took place. This was followed shortly by the theoretical group moving into the PTCL annexe in 1995.
The university is [ when? ] in the early planning stages of the demolition of the PTCL building, to be replaced by a second chemistry research laboratory. [ 4 ]
The following Oxford Physical and Theoretical chemists are of note:
| https://en.wikipedia.org/wiki/Physical_and_Theoretical_Chemistry_Laboratory_(Oxford) |
In quantum computing , a qubit is a unit of information analogous to a bit (binary digit) in classical computing , but it is affected by quantum mechanical properties such as superposition and entanglement , which make qubits more powerful than classical bits for some tasks . Qubits are used in quantum circuits and quantum algorithms composed of quantum logic gates to solve computational problems , where they are used for input/output and intermediate computations.
A physical qubit is a physical device that behaves as a two-state quantum system , used as a component of a computer system . [ 1 ] [ 2 ] A logical qubit is a physical or abstract qubit that performs as specified in a quantum algorithm or quantum circuit [ 3 ] subject to unitary transformations , and has a long enough coherence time to be usable by quantum logic gates (cf. propagation delay for classical logic gates). [ 1 ] [ 4 ] [ 5 ]
Since the development of the first quantum computer in 1998, most technologies used to implement qubits face issues of stability, decoherence , [ 6 ] [ 7 ] fault tolerance [ 8 ] [ 9 ] and scalability . [ 6 ] [ 9 ] [ 10 ] Because of this, many physical qubits are needed for the purposes of error-correction to produce an entity which behaves logically as a single qubit would in a quantum circuit or algorithm; this is the subject of quantum error correction . [ 3 ] [ 11 ] Thus, contemporary logical qubits typically consist of many physical qubits to provide stability, error-correction and fault tolerance needed to perform useful computations. [ 1 ] [ 7 ] [ 11 ]
In 2023, Google researchers showed how quantum error correction can improve logical qubit performance by increasing the physical qubit count. [ 12 ] These results found that a larger logical qubit (49 physical qubits) had a lower error rate, about 2.9 percent per round of error correction, compared to a rate of about 3.0 percent for the smaller logical qubit (17 physical qubits). [ 13 ]
In 2024, IBM researchers created a quantum error correction code 10 times more efficient than previous research, protecting 12 logical qubits for roughly a million cycles of error checks using 288 qubits. [ 14 ] [ 15 ] The work demonstrates error correction on near-term devices while reducing overhead – the number of physical qubits required to keep errors low. [ 16 ]
In 2024, Microsoft and Quantinuum announced experimental results that showed logical qubits could be created with significantly fewer physical qubits. [ 17 ] The team used quantum error correction techniques developed by Microsoft and Quantinuum's trapped ion hardware to form four logical qubits from 30 physical qubits. Scientists used a qubit virtualization system and active syndrome extraction (also called repeated error correction) to accomplish this. [ 18 ] This work defines how to achieve logical qubits within quantum computation. [ 19 ]
Single-qubit and two-qubit quantum gate operations have been shown to be universal. [ 20 ] [ 21 ] [ 22 ] [ 23 ] A quantum algorithm can be instantiated as a quantum circuit . [ 24 ] [ 25 ]
A logical qubit specifies how a single qubit should behave in a quantum algorithm, subject to quantum logic operations which can be built out of quantum logic gates. However, issues in current technologies preclude single two-state quantum systems , which can be used as physical qubits, from reliably encoding and retaining this information for long enough to be useful. Therefore, current attempts to produce scalable quantum computers require quantum error correction, and multiple (currently many) physical qubits must be used to create a single, error-tolerant logical qubit. Depending on the error-correction scheme used, and the error rates of each physical qubit, a single logical qubit could be formed of up to 1,000 physical qubits. [ 26 ]
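This overhead can be illustrated with a toy model. The sketch below assumes surface-code-like behavior, in which a distance-d code uses on the order of 2d² physical qubits and suppresses the logical error rate exponentially in d while the physical error rate p stays below a threshold p_th; all numbers are illustrative, not measured values:

```python
def logical_error_rate(p: float, d: int, p_th: float = 0.01) -> float:
    """Toy scaling law: logical errors suppressed as (p / p_th)^((d + 1) / 2).
    Meaningful as an illustration only when p < p_th."""
    return 0.1 * (p / p_th) ** ((d + 1) // 2)

def physical_qubit_count(d: int) -> int:
    """A distance-d surface-code patch uses roughly 2 * d**2 physical qubits."""
    return 2 * d * d

for d in (3, 5, 7, 11):
    print(f"d={d:2d}  physical qubits ~{physical_qubit_count(d):4d}  "
          f"logical error rate ~{logical_error_rate(p=1e-3, d=d):.1e}")
```

Under these toy assumptions, shrinking the logical error rate by several orders of magnitude costs hundreds of physical qubits per logical qubit, in line with the estimates above.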
The approach of topological qubits , which takes advantage of topological effects in quantum mechanics , has been proposed as needing many fewer or even a single physical qubit per logical qubit. [ 10 ] Topological qubits rely on a class of particles called anyons which have spin that is neither half-integral ( fermions ) nor integral ( bosons ), and therefore obey neither the Fermi–Dirac statistics nor the Bose–Einstein statistics of particle behavior. [ 27 ] Anyons exhibit braid symmetry in their world lines , which has desirable properties for the stability of qubits. Notably, anyons must exist in systems constrained to two spatial dimensions or fewer, according to the spin–statistics theorem , which states that in 3 or more spatial dimensions, only fermions and bosons are possible. [ 27 ] In 2025, researchers made progress in topological quantum computing by successfully measuring the state of special particles called Majorana zero modes in a single step. [ 28 ] | https://en.wikipedia.org/wiki/Physical_and_logical_qubits |
Physical biochemistry is a branch of biochemistry that deals with the theory, techniques, and methodology used to study the physical chemistry of biomolecules . [ 1 ] It also deals with mathematical approaches to the analysis of biochemical reactions and the modelling of biological systems . It provides insight into the structure of macromolecules and into how chemical structure influences the physical properties of a biological substance. [ 2 ]
It involves the use of physics and physical chemistry principles and methodology to study biological systems. [ 3 ] It employs various physical chemistry techniques such as chromatography , spectroscopy , electrophoresis , X-ray crystallography , electron microscopy , and hydrodynamics . [ 4 ]
| https://en.wikipedia.org/wiki/Physical_biochemistry |
Physical chemistry is the study of macroscopic and microscopic phenomena in chemical systems in terms of the principles, practices, and concepts of physics such as motion , energy , force , time , thermodynamics , quantum chemistry , statistical mechanics , analytical dynamics and chemical equilibria .
Physical chemistry, in contrast to chemical physics , is predominantly (but not always) a supra-molecular science, as the majority of the principles on which it was founded relate to the bulk rather than the molecular or atomic structure alone (for example, chemical equilibrium and colloids ).
Some of the relationships that physical chemistry strives to understand include the effects of:
The key concepts of physical chemistry are the ways in which pure physics is applied to chemical problems.
One of the key concepts in classical chemistry is that all chemical compounds can be described as groups of atoms bonded together and chemical reactions can be described as the making and breaking of those bonds. Predicting the properties of chemical compounds from a description of atoms and how they bond is one of the major goals of physical chemistry. To describe the atoms and bonds precisely, it is necessary to know both where the nuclei of the atoms are, and how electrons are distributed around them. [ 2 ]
Quantum chemistry , a subfield of physical chemistry especially concerned with the application of quantum mechanics to chemical problems, provides tools to determine how strong and what shape bonds are, [ 2 ] how nuclei move, and how light can be absorbed or emitted by a chemical compound. [ 3 ] Spectroscopy is the related sub-discipline of physical chemistry which is specifically concerned with the interaction of electromagnetic radiation with matter.
Another set of important questions in chemistry concerns what kind of reactions can happen spontaneously and which properties are possible for a given chemical mixture. This is studied in chemical thermodynamics , which sets limits on quantities like how far a reaction can proceed, or how much energy can be converted into work in an internal combustion engine , and which provides links between properties like the thermal expansion coefficient and rate of change of entropy with pressure for a gas or a liquid . [ 4 ] It can frequently be used to assess whether a reactor or engine design is feasible, or to check the validity of experimental data. To a limited extent, quasi-equilibrium and non-equilibrium thermodynamics can describe irreversible changes. [ 5 ] However, classical thermodynamics is mostly concerned with systems in equilibrium and reversible changes and not what actually does happen, or how fast, away from equilibrium.
Which reactions do occur and how fast is the subject of chemical kinetics , another branch of physical chemistry. A key idea in chemical kinetics is that for reactants to react and form products , most chemical species must go through transition states which are higher in energy than either the reactants or the products and serve as a barrier to reaction. [ 6 ] In general, the higher the barrier, the slower the reaction. A second is that most chemical reactions occur as a sequence of elementary reactions , [ 7 ] each with its own transition state. Key questions in kinetics include how the rate of reaction depends on temperature and on the concentrations of reactants and catalysts in the reaction mixture, as well as how catalysts and reaction conditions can be engineered to optimize the reaction rate.
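The temperature dependence sketched above is commonly summarized by the Arrhenius equation, k = A·exp(−Ea/RT). A minimal sketch with illustrative values (the pre-exponential factor A and barrier Ea below are made-up example numbers, not data from this article):

```python
import math

R = 8.314  # molar gas constant, J/(mol*K)

def arrhenius_rate(A: float, Ea: float, T: float) -> float:
    """Rate constant k = A * exp(-Ea / (R * T)).
    A: pre-exponential factor (1/s), Ea: activation energy (J/mol), T: kelvin."""
    return A * math.exp(-Ea / (R * T))

# With a barrier of ~53 kJ/mol, the rate roughly doubles per 10 K increase
# near room temperature, the familiar rule of thumb.
for T in (298.0, 308.0, 318.0):
    print(f"T={T:.0f} K  k={arrhenius_rate(A=1e13, Ea=53e3, T=T):.3e} 1/s")
```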
The fact that how fast reactions occur can often be specified with just a few concentrations and a temperature, instead of needing to know all the positions and speeds of every molecule in a mixture, is a special case of another key concept in physical chemistry, which is that to the extent an engineer needs to know, everything going on in a mixture of very large numbers (perhaps of the order of the Avogadro constant , 6 × 10^23 ) of particles can often be described by just a few variables like pressure, temperature, and concentration. The precise reasons for this are described in statistical mechanics , [ 8 ] a specialty within physical chemistry which is also shared with physics. Statistical mechanics also provides ways to predict the properties we see in everyday life from molecular properties without relying on empirical correlations based on chemical similarities. [ 5 ]
The term "physical chemistry" was coined by Mikhail Lomonosov in 1752, when he presented a lecture course entitled "A Course in True Physical Chemistry" ( Russian : Курс истинной физической химии ) before the students of Petersburg University . [ 9 ] In the preamble to these lectures he gives the definition: "Physical chemistry is the science that must explain under provisions of physical experiments the reason for what is happening in complex bodies through chemical operations".
Modern physical chemistry originated in the 1860s to 1880s with work on chemical thermodynamics , electrolytes in solutions, chemical kinetics and other subjects. One milestone was the publication in 1876 by Josiah Willard Gibbs of his paper, On the Equilibrium of Heterogeneous Substances . This paper introduced several of the cornerstones of physical chemistry, such as Gibbs energy , chemical potentials , and Gibbs' phase rule . [ 10 ]
The first scientific journal specifically in the field of physical chemistry was the German journal, Zeitschrift für Physikalische Chemie , founded in 1887 by Wilhelm Ostwald and Jacobus Henricus van 't Hoff . Together with Svante August Arrhenius , [ 11 ] these were the leading figures in physical chemistry in the late 19th century and early 20th century. All three were awarded the Nobel Prize in Chemistry between 1901 and 1909.
Developments in the following decades include the application of statistical mechanics to chemical systems and work on colloids and surface chemistry , where Irving Langmuir made many contributions. Another important step was the development of quantum mechanics into quantum chemistry from the 1930s, where Linus Pauling was one of the leading names. Theoretical developments have gone hand in hand with developments in experimental methods, where the use of different forms of spectroscopy , such as infrared spectroscopy , microwave spectroscopy , electron paramagnetic resonance and nuclear magnetic resonance spectroscopy , is probably the most important 20th century development.
Further development in physical chemistry may be attributed to discoveries in nuclear chemistry , especially in isotope separation (before and during World War II), more recent discoveries in astrochemistry , [ 12 ] as well as the development of calculation algorithms in the field of "additive physicochemical properties" (practically all physicochemical properties, such as boiling point, critical point, surface tension, vapor pressure, etc.—more than 20 in all—can be precisely calculated from chemical structure alone, even if the chemical molecule remains unsynthesized), [ citation needed ] and herein lies the practical importance of contemporary physical chemistry.
See Group contribution method , Lydersen method , Joback method , Benson group increment theory , quantitative structure–activity relationship
Some journals that deal with physical chemistry include:
Historical journals that covered both chemistry and physics include Annales de chimie et de physique (started in 1789, published under the name given here from 1815 to 1914). | https://en.wikipedia.org/wiki/Physical_chemistry |
A physical coefficient is an important number that characterizes some physical property of a technical or scientific object under specified conditions. [ 1 ] A coefficient may also carry a scientific reference, such as a dependence on force.
To find the coefficient of a chemical compound in a reaction, you must balance the elements involved. For example, water:
H₂O.
It happens that hydrogen (H) and oxygen (O) both occur as diatomic molecules, H₂ and O₂. To form water, one of the O atoms breaks off from the O₂ molecule and reacts with an H₂ molecule to form H₂O. But there is one oxygen atom left, and it reacts with another H₂ molecule. Because two H₂ molecules are consumed and two water molecules are formed, we put the coefficient 2 in front of H₂O:
2 H₂O.
The total reaction is thus 2 H₂ + O₂ → 2 H₂O. [ 2 ]
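A balanced equation can be checked mechanically by counting each element on both sides; a minimal sketch of that bookkeeping:

```python
from collections import Counter

def atom_totals(side):
    """Sum atom counts over (coefficient, composition) pairs, where each
    composition maps an element symbol to atoms per molecule."""
    totals = Counter()
    for coefficient, composition in side:
        for element, n in composition.items():
            totals[element] += coefficient * n
    return totals

# 2 H2 + O2 -> 2 H2O
reactants = [(2, {"H": 2}), (1, {"O": 2})]
products = [(2, {"H": 2, "O": 1})]

assert atom_totals(reactants) == atom_totals(products)  # 4 H and 2 O each side
print(atom_totals(reactants))  # Counter({'H': 4, 'O': 2})
```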
| https://en.wikipedia.org/wiki/Physical_coefficient |
Physical computing involves interactive systems that can sense and respond to the world around them. [ clarification needed ] While this definition is broad enough to encompass systems such as smart automotive traffic control systems or factory automation processes, it is not commonly used to describe them. In a broader sense, physical computing is a creative framework for understanding human beings' relationship to the digital world. In practical use, the term most often describes handmade art, design or DIY hobby projects that use sensors and microcontrollers to translate analog input to a software system , and/or control electro-mechanical devices such as motors , servos , lighting or other hardware.
Physical computing intersects the range of activities often referred to in academia and industry as electrical engineering, mechatronics, robotics, computer science, and especially embedded development.
Physical computing is used in a wide variety of domains and applications.
The advantage of physicality in education and playfulness has been reflected in diverse informal learning environments. The Exploratorium , a pioneer in inquiry based learning , developed some of the earliest interactive exhibitry involving computers, and continues to include more and more examples of physical computing and tangible interfaces as associated technologies progress.
In the art world, projects that implement physical computing include the work of Scott Snibbe , Daniel Rozin , Rafael Lozano-Hemmer , Jonah Brucker-Cohen , and Camille Utterback .
Physical computing practices also exist in the product and interaction design sphere, where hand-built embedded systems are sometimes used to rapidly prototype new digital product concepts in a cost-efficient way. Firms such as IDEO and Teague are known to approach product design in this way.
Commercial implementations range from consumer devices such as the Sony Eyetoy or games such as Dance Dance Revolution to more esoteric and pragmatic uses including machine vision utilized in the automation of quality inspection along a factory assembly line . Exergaming , such as Nintendo's Wii Fit , can be considered a form of physical computing. Other implementations of physical computing include voice recognition , which senses and interprets sound waves via microphones or other soundwave sensing devices, and computer vision , which applies algorithms to a rich stream of video data typically sensed by some form of camera. Haptic interfaces are also an example of physical computing, though in this case the computer is generating the physical stimulus as opposed to sensing it. Both motion capture and gesture recognition are fields that rely on computer vision.
Physical computing can also describe the fabrication and use of custom sensors or collectors for scientific experiments, though the term is rarely used to describe them as such. An example of physical computing modeling is the Illustris project , which attempts to precisely simulate the evolution of the universe from the Big Bang to the present day, 13.8 billion years later. [ 1 ] [ 2 ]
Prototyping plays an important role in physical computing. Tools like Wiring , Arduino and Fritzing , as well as I-CubeX , help designers and artists quickly prototype their interactive concepts; a minimal host-side sketch follows.
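As an example of the "analog input to software" path, the following minimal Python sketch reads sensor values on the computer side. It assumes the third-party pyserial package and a hypothetical microcontroller that prints one analog reading per line over USB serial; the port name is illustrative:

```python
import serial  # third-party package: pyserial

# Hypothetical port name; on Windows this might be "COM3" instead.
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as port:
    for _ in range(10):  # read ten samples
        line = port.readline().decode("ascii", errors="ignore").strip()
        if line:  # skip empty reads caused by timeouts
            value = int(line)  # e.g. a 0-1023 analog-to-digital reading
            print(f"sensor value: {value}")
```

| https://en.wikipedia.org/wiki/Physical_computing |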
Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model , or simply cosmology , provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin , structure, evolution , and ultimate fate . [ 1 ] Cosmology as a science originated with the Copernican principle , which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics , which first allowed those physical laws to be understood.
Physical cosmology, as it is now understood, began in 1915 with the development of Albert Einstein 's general theory of relativity , followed by major observational discoveries in the 1920s: first, Edwin Hubble discovered that the universe contains a huge number of external galaxies beyond the Milky Way ; then, work by Vesto Slipher and others showed that the universe is expanding . These advances made it possible to speculate about the origin of the universe, and allowed the establishment of the Big Bang theory, by Georges Lemaître , as the leading cosmological model. A few researchers still advocate a handful of alternative cosmologies ; [ 2 ] however, most cosmologists agree that the Big Bang theory best explains the observations.
Dramatic advances in observational cosmology since the 1990s, including the cosmic microwave background , distant supernovae and galaxy redshift surveys , have led to the development of a standard model of cosmology . This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations. [ 3 ]
Cosmology draws heavily on the work of many disparate areas of research in theoretical and applied physics . Areas relevant to cosmology include particle physics experiments and theory , theoretical and observational astrophysics , general relativity, quantum mechanics , and plasma physics .
Modern cosmology developed along tandem tracks of theory and observation. In 1916, Albert Einstein published his theory of general relativity , which provided a unified description of gravity as a geometric property of space and time. [ 4 ] At the time, Einstein believed in a static universe , but found that his original formulation of the theory did not permit it. [ 5 ] This is because masses distributed throughout the universe gravitationally attract, and move toward each other over time. [ 6 ] However, he realized that his equations permitted the introduction of a constant term which could counteract the attractive force of gravity on the cosmic scale. Einstein published his first paper on relativistic cosmology in 1917, in which he added this cosmological constant to his field equations in order to force them to model a static universe. [ 7 ] The Einstein model describes a static universe; space is finite and unbounded (analogous to the surface of a sphere, which has a finite area but no edges). However, this so-called Einstein model is unstable to small perturbations—it will eventually start to expand or contract. [ 5 ] It was later realized that Einstein's model was just one of a larger set of possibilities, all of which were consistent with general relativity and the cosmological principle . The cosmological solutions of general relativity were found by Alexander Friedmann in the early 1920s. [ 8 ] His equations describe the Friedmann–Lemaître–Robertson–Walker universe, which may expand or contract, and whose geometry may be open, flat, or closed.
In the 1910s, Vesto Slipher (and later Carl Wilhelm Wirtz ) interpreted the red shift of spiral nebulae as a Doppler shift that indicated they were receding from Earth. [ 12 ] [ 13 ] However, it is difficult to determine the distance to astronomical objects. One way is to compare the physical size of an object to its angular size , but a physical size must be assumed in order to do this. Another method is to measure the brightness of an object and assume an intrinsic luminosity , from which the distance may be determined using the inverse-square law . Due to the difficulty of using these methods, they did not realize that the nebulae were actually galaxies outside our own Milky Way , nor did they speculate about the cosmological implications. In 1927, the Belgian Roman Catholic priest Georges Lemaître independently derived the Friedmann–Lemaître–Robertson–Walker equations and proposed, on the basis of the recession of spiral nebulae, that the universe began with the "explosion" of a "primeval atom " [ 14 ] —which was later called the Big Bang. In 1929, Edwin Hubble provided an observational basis for Lemaître's theory. Hubble showed that the spiral nebulae were galaxies by determining their distances using measurements of the brightness of Cepheid variable stars. He discovered a relationship between the redshift of a galaxy and its distance. He interpreted this as evidence that the galaxies are receding from Earth in every direction at speeds proportional to their distance from Earth. [ 15 ] This fact is now known as Hubble's law , though the numerical factor Hubble found relating recessional velocity and distance was off by a factor of ten, due to not knowing about the types of Cepheid variables.
Given the cosmological principle, Hubble's law suggested that the universe was expanding. Two primary explanations were proposed for the expansion. One was Lemaître's Big Bang theory, advocated and developed by George Gamow . The other explanation was Fred Hoyle 's steady state model in which new matter is created as the galaxies move away from each other. In this model, the universe is roughly the same at any point in time. [ 16 ] [ 17 ]
For a number of years, support for these theories was evenly divided. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. The discovery of the cosmic microwave background in 1965 lent strong support to the Big Bang model, [ 17 ] and since the precise measurements of the cosmic microwave background by the Cosmic Background Explorer in the early 1990s, few cosmologists have seriously proposed other theories of the origin and evolution of the cosmos. One consequence of this is that in standard general relativity, the universe began with a singularity , as demonstrated by Roger Penrose and Stephen Hawking in the 1960s. [ 18 ]
An alternative view to extend the Big Bang model, suggesting the universe had no beginning or singularity and the age of the universe is infinite, has been presented. [ 19 ] [ 20 ] [ 21 ]
In September 2023, astrophysicists questioned the overall current view of the universe , in the form of the Standard Model of Cosmology , based on the latest James Webb Space Telescope studies. [ 22 ]
The lightest chemical elements , primarily hydrogen and helium , were created during the Big Bang through the process of nucleosynthesis . [ 23 ] In a sequence of stellar nucleosynthesis reactions, smaller atomic nuclei are then combined into larger atomic nuclei, ultimately forming stable iron group elements such as iron and nickel , which have the highest nuclear binding energies . [ 24 ] The net process results in energy being released later, that is, subsequent to the Big Bang. [ 25 ] Such reactions of nuclear particles can lead to sudden energy releases from cataclysmic variable stars such as novae . Gravitational collapse of matter into black holes also powers the most energetic processes, generally seen in the nuclear regions of galaxies, forming quasars and active galaxies .
Cosmologists cannot explain all cosmic phenomena exactly, such as those related to the accelerating expansion of the universe , using conventional forms of energy . Instead, cosmologists propose a new form of energy called dark energy that permeates all space. [ 26 ] One hypothesis is that dark energy is just the vacuum energy , a component of empty space that is associated with the virtual particles that exist due to the uncertainty principle . [ 27 ]
There is no clear way to define the total energy in the universe using the most widely accepted theory of gravity, general relativity. Therefore, it remains controversial whether the total energy is conserved in an expanding universe. For instance, each photon that travels through intergalactic space loses energy due to the redshift effect. This energy is not transferred to any other system, and so seems to be permanently lost. On the other hand, some cosmologists insist that energy is conserved in some sense; this follows the law of conservation of energy . [ 28 ]
Different forms of energy may dominate the cosmos— relativistic particles which are referred to as radiation , or non-relativistic particles referred to as matter. Relativistic particles are particles whose rest mass is zero or negligible compared to their kinetic energy , and so move at the speed of light or very close to it; non-relativistic particles have much higher rest mass than their energy and so move much slower than the speed of light.
As the universe expands, both matter and radiation become diluted. However, the energy densities of radiation and matter dilute at different rates. As a particular volume expands, mass-energy density is changed only by the increase in volume, but the energy density of radiation is changed both by the increase in volume and by the increase in the wavelength of the photons that make it up. Thus the energy of radiation becomes a smaller part of the universe's total energy than that of matter as it expands. The very early universe is said to have been 'radiation dominated' and radiation controlled the deceleration of expansion. Later, as the average energy per photon becomes roughly 10 eV and lower, matter dictates the rate of deceleration and the universe is said to be 'matter dominated'. The intermediate case is not treated well analytically . As the expansion of the universe continues, matter dilutes even further and the cosmological constant becomes dominant, leading to an acceleration in the universe's expansion.
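In terms of the scale factor a, this argument gives the standard scaling relations (a sketch of the reasoning above, not new results):

```latex
\rho_{\text{matter}} \propto a^{-3}, \qquad
\rho_{\text{radiation}} \propto a^{-4}, \qquad
\frac{\rho_{\text{radiation}}}{\rho_{\text{matter}}} \propto \frac{1}{a}
```

The extra factor of a⁻¹ for radiation comes from the stretching of each photon's wavelength, so radiation domination inevitably gives way to matter domination as a grows.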
The history of the universe is a central issue in cosmology. The history of the universe is divided into different periods called epochs, according to the dominant forces and processes in each period. The standard cosmological model is known as the Lambda-CDM model .
Within the standard cosmological model , the equations of motion governing the universe as a whole are derived from general relativity with a small, positive cosmological constant. [ 29 ] The solution is an expanding universe; due to this expansion, the radiation and matter in the universe cool and become diluted. At first, the expansion is slowed down by gravitation attracting the radiation and matter in the universe. However, as these become diluted, the cosmological constant becomes more dominant and the expansion of the universe starts to accelerate rather than decelerate. In our universe this happened billions of years ago. [ 30 ]
During the earliest moments of the universe, the average energy density was very high, making knowledge of particle physics critical to understanding this environment. Hence, scattering processes and decay of unstable elementary particles are important for cosmological models of this period.
As a rule of thumb, a scattering or a decay process is cosmologically important in a certain epoch if the time scale describing that process is smaller than, or comparable to, the time scale of the expansion of the universe. [ clarification needed ] The time scale that describes the expansion of the universe is 1/H , with H being the Hubble parameter , which varies with time. The expansion timescale 1/H is roughly equal to the age of the universe at each point in time.
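As a quick numerical check of this rule of thumb, a minimal sketch converting a present-day Hubble parameter of roughly 68 km/s/Mpc into a timescale:

```python
KM_PER_MPC = 3.086e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in one gigayear

H0 = 68.0                        # km/s/Mpc, approximate present-day value
H0_per_second = H0 / KM_PER_MPC  # convert to units of 1/s
hubble_time_gyr = 1.0 / H0_per_second / SECONDS_PER_GYR

print(f"1/H0 ≈ {hubble_time_gyr:.1f} Gyr")  # ~14.4 Gyr, near the 13.8 Gyr age
```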
Observations suggest that the universe began around 13.8 billion years ago. [ 31 ] Since then, the evolution of the universe has passed through three phases. The very early universe, which is still poorly understood, was the split second in which the universe was so hot that particles had energies higher than those currently accessible in particle accelerators on Earth. Therefore, while the basic features of this epoch have been worked out in the Big Bang theory, the details are largely based on educated guesses.
Following this, in the early universe, the evolution of the universe proceeded according to known high energy physics . This is when the first protons, electrons and neutrons formed, then nuclei and finally atoms. With the formation of neutral hydrogen, the cosmic microwave background was emitted. Finally, the epoch of structure formation began, when matter started to aggregate into the first stars and quasars , and ultimately galaxies, clusters of galaxies and superclusters formed. The future of the universe is not yet firmly known, but according to the ΛCDM model it will continue expanding forever.
Below, some of the most active areas of inquiry in cosmology are described, in roughly chronological order. This does not include all of the Big Bang cosmology, which is presented in Timeline of the Big Bang .
The early, hot universe appears to be well explained by the Big Bang from roughly 10 −33 seconds onwards, but there are several problems . One is that there is no compelling reason, using current particle physics, for the universe to be flat , homogeneous, and isotropic (see the cosmological principle ) . Moreover, grand unified theories of particle physics suggest that there should be magnetic monopoles in the universe, which have not been found. These problems are resolved by a brief period of cosmic inflation , which drives the universe to flatness , smooths out anisotropies and inhomogeneities to the observed level, and exponentially dilutes the monopoles. [ 32 ] The physical model behind cosmic inflation is extremely simple, but it has not yet been confirmed by particle physics, and there are difficult problems reconciling inflation and quantum field theory . [ vague ] Some cosmologists think that string theory and brane cosmology will provide an alternative to inflation. [ 33 ]
Another major problem in cosmology is what caused the universe to contain far more matter than antimatter . Cosmologists can observationally deduce that the universe is not split into regions of matter and antimatter. If it were, there would be X-rays and gamma rays produced as a result of annihilation , but this is not observed. Therefore, some process in the early universe must have created a small excess of matter over antimatter, and this (currently not understood) process is called baryogenesis . Three required conditions for baryogenesis were derived by Andrei Sakharov in 1967; they include a violation of the particle physics symmetry between matter and antimatter called CP-symmetry . [ 34 ] However, particle accelerators measure too small a violation of CP-symmetry to account for the baryon asymmetry. Cosmologists and particle physicists look for additional violations of the CP-symmetry in the early universe that might account for the baryon asymmetry . [ 35 ]
Both the problems of baryogenesis and cosmic inflation are very closely related to particle physics, and their resolution might come from high energy theory and experiment , rather than through observations of the universe. [ speculation? ]
Big Bang nucleosynthesis is the theory of the formation of the elements in the early universe. It finished when the universe was about three minutes old and its temperature dropped below that at which nuclear fusion could occur. Big Bang nucleosynthesis had a brief period during which it could operate, so only the very lightest elements were produced. Starting from hydrogen ions ( protons ), it principally produced deuterium , helium-4 , and lithium . Other elements were produced in only trace abundances. The basic theory of nucleosynthesis was developed in 1948 by George Gamow, Ralph Asher Alpher , and Robert Herman . [ 36 ] It was used for many years as a probe of physics at the time of the Big Bang, as the theory of Big Bang nucleosynthesis connects the abundances of primordial light elements with the features of the early universe. [ 23 ] Specifically, it can be used to test the equivalence principle , [ 37 ] to probe dark matter , and test neutrino physics. [ 38 ] Some cosmologists have proposed that Big Bang nucleosynthesis suggests there is a fourth "sterile" species of neutrino. [ 39 ]
The ΛCDM ( Lambda cold dark matter ) or Lambda-CDM model is a parametrization of the Big Bang cosmological model in which the universe contains a cosmological constant, denoted by Lambda ( Greek Λ ), associated with dark energy, and cold dark matter (abbreviated CDM ). It is frequently referred to as the standard model of Big Bang cosmology. [ 40 ] [ 41 ]
The cosmic microwave background is radiation left over from decoupling after the epoch of recombination when neutral atoms first formed. At this point, radiation produced in the Big Bang stopped Thomson scattering from charged ions. The radiation, first observed in 1965 by Arno Penzias and Robert Woodrow Wilson , has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 10^5 . Cosmological perturbation theory , which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by the recent satellite experiments ( COBE and WMAP ) [ 42 ] and many ground and balloon-based experiments (such as Degree Angular Scale Interferometer , Cosmic Background Imager , and Boomerang ). [ 43 ] One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics. The results of measurements made by WMAP, for example, have placed limits on the neutrino masses. [ 44 ]
Newer experiments, such as QUIET and the Atacama Cosmology Telescope , are trying to measure the polarization of the cosmic microwave background. [ 45 ] These measurements are expected to provide further confirmation of the theory as well as information about cosmic inflation, and the so-called secondary anisotropies, [ 46 ] such as the Sunyaev-Zel'dovich effect and Sachs-Wolfe effect , which are caused by interaction between galaxies and clusters with the cosmic microwave background. [ 47 ] [ 48 ]
On 17 March 2014, astronomers of the BICEP2 Collaboration announced the apparent detection of B -mode polarization of the CMB, considered to be evidence of primordial gravitational waves that are predicted by the theory of inflation to occur during the earliest phase of the Big Bang. [ 9 ] [ 10 ] [ 11 ] [ 49 ] However, later that year the Planck collaboration provided a more accurate measurement of cosmic dust , concluding that the B-mode signal from dust is the same strength as that reported from BICEP2. [ 50 ] [ 51 ] On 30 January 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to interstellar dust in the Milky Way. [ 52 ]
Understanding the formation and evolution of the largest and earliest structures (i.e., quasars, galaxies, clusters and superclusters ) is one of the largest efforts in cosmology. Cosmologists study a model of hierarchical structure formation in which structures form from the bottom up, with smaller objects forming first, while the largest objects, such as superclusters, are still assembling. [ 53 ] One way to study structure in the universe is to survey the visible galaxies, in order to construct a three-dimensional picture of the galaxies in the universe and measure the matter power spectrum . This is the approach of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey . [ 54 ] [ 55 ]
Another tool for understanding structure formation is simulations, which cosmologists use to study the gravitational aggregation of matter in the universe, as it clusters into filaments , superclusters and voids . Most simulations contain only non-baryonic cold dark matter , which should suffice to understand the universe on the largest scales, as there is much more dark matter in the universe than visible, baryonic matter. More advanced simulations are starting to include baryons and study the formation of individual galaxies. Cosmologists study these simulations to see if they agree with the galaxy surveys, and to understand any discrepancy. [ 56 ]
Other, complementary observations to measure the distribution of matter in the distant universe and to probe reionization include:
These will help cosmologists settle the question of when and how structure formed in the universe.
Evidence from Big Bang nucleosynthesis , the cosmic microwave background , structure formation, and galaxy rotation curves suggests that about 23% of the mass of the universe consists of non-baryonic dark matter, whereas only 4% consists of visible, baryonic matter . The gravitational effects of dark matter are well understood, as it behaves like a cold, non-radiative fluid that forms haloes around galaxies. Dark matter has never been detected in the laboratory, and the particle physics nature of dark matter remains completely unknown. Without observational constraints, there are a number of candidates, such as a stable supersymmetric particle, a weakly interacting massive particle , a gravitationally-interacting massive particle, an axion , and a massive compact halo object . Alternatives to the dark matter hypothesis include a modification of gravity at small accelerations ( MOND ) or an effect from brane cosmology. TeVeS is a version of MOND that can explain gravitational lensing. [ 60 ]
If the universe is flat , there must be an additional component making up 73% (in addition to the 23% dark matter and 4% baryons) of the energy density of the universe. This is called dark energy. In order not to interfere with Big Bang nucleosynthesis and the cosmic microwave background, it must not cluster in haloes like baryons and dark matter. There is strong observational evidence for dark energy, as the total energy density of the universe is known through constraints on the flatness of the universe, but the amount of clustering matter is tightly measured, and is much less than this. The case for dark energy was strengthened in 1999, when measurements demonstrated that the expansion of the universe has begun to gradually accelerate. [ 61 ]
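Written as density parameters, the budget quoted above is a simple sum (using the rounded percentages given in this article):

```latex
\Omega_{\text{total}}
  = \Omega_{b} + \Omega_{\text{dm}} + \Omega_{\Lambda}
  \approx 0.04 + 0.23 + 0.73
  = 1.00
```

where Ω_total = 1 corresponds to a spatially flat universe.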
Apart from its density and its clustering properties, nothing is known about dark energy. Quantum field theory predicts a cosmological constant (CC) much like dark energy, but 120 orders of magnitude larger than that observed. [ 62 ] Steven Weinberg and a number of string theorists (see string landscape ) have invoked the 'weak anthropic principle ': i.e. the reason that physicists observe a universe with such a small cosmological constant is that no physicists (or any life) could exist in a universe with a larger cosmological constant. Many cosmologists find this an unsatisfying explanation: perhaps because while the weak anthropic principle is self-evident (given that living observers exist, there must be at least one universe with a cosmological constant (CC) which allows for life to exist) it does not attempt to explain the context of that universe. [ 63 ] For example, the weak anthropic principle alone does not distinguish between:
Other possible explanations for dark energy include quintessence [ 64 ] or a modification of gravity on the largest scales. [ 65 ] The effect on cosmology of the dark energy that these models describe is given by the dark energy's equation of state , which varies depending upon the theory. The nature of dark energy is one of the most challenging problems in cosmology.
A better understanding of dark energy is likely to solve the problem of the ultimate fate of the universe . In the current cosmological epoch, the accelerated expansion due to dark energy is preventing structures larger than superclusters from forming. It is not known whether the acceleration will continue indefinitely, perhaps even increasing until a big rip , or whether it will eventually reverse, lead to a Big Freeze , or follow some other scenario. [ 66 ]
Gravitational waves are ripples in the curvature of spacetime that propagate as waves at the speed of light, generated in certain gravitational interactions that propagate outward from their source. Gravitational-wave astronomy is an emerging branch of observational astronomy which aims to use gravitational waves to collect observational data about sources of detectable gravitational waves such as binary star systems composed of white dwarfs , neutron stars , and black holes ; and events such as supernovae , and the formation of the early universe shortly after the Big Bang. [ 67 ]
In 2016, the LIGO Scientific Collaboration and Virgo Collaboration teams announced that they had made the first observation of gravitational waves , originating from a pair of merging black holes using the Advanced LIGO detectors. [ 68 ] [ 69 ] [ 70 ] On 15 June 2016, a second detection of gravitational waves from coalescing black holes was announced. [ 71 ] Besides LIGO, many other gravitational-wave observatories (detectors) are under construction. [ 72 ]
Cosmologists also study: | https://en.wikipedia.org/wiki/Physical_cosmology |
Physical crystallography before X-rays describes how physical crystallography developed as a science up to the discovery of X-rays by Wilhelm Conrad Röntgen in 1895. In the period before X-rays, crystallography can be divided into three broad areas: geometric crystallography culminating in the discovery of the 230 space groups in 1891–4, chemical crystallography and physical crystallography.
Physical crystallography is concerned with the physical properties of crystals, such as their optical , electrical, and magnetic properties. The effect of electromagnetic radiation on crystals is covered in the following sections: double refraction , rotary polarization , conical refraction , absorption and pleochroism , luminescence, fluorescence and phosphorescence , reflection from opaque materials , and infrared optics . The effect of temperature change on crystals is covered in: thermal expansion , thermal conduction , thermoelectricity , and pyroelectricity . The effect of electricity and magnetism on crystals is covered in: electrical conduction , magnetic properties , and dielectric properties . The effect of mechanical force on crystals is covered in: photoelasticity , elastic properties , and piezoelectricity .
The study of crystals in the time before X-rays was focused more on their geometry and mathematical analysis than their physical properties. [ 1 ] Unlike geometrical crystallography, the history of physical crystallography has no central story, but is a collection of developments in different areas.
During the 19th century crystallography was progressively transformed into an empirical and mathematical science by the adoption of symmetry concepts. [ 2 ] In 1832 Franz Ernst Neumann used symmetry considerations when studying double refraction . [ 3 ] Woldemar Voigt , who was a student of Neumann, in 1885 formalized Neumann's principle as "if a crystal is invariant with respect to certain symmetry operations, any of its physical properties must also be invariant with respect to the same symmetry operations". [ 4 ] [ 5 ] Neumann's principle is sometimes referred to as the Neumann–Minnigerode–Curie principle based on later work by Bernhard Minnigerode [ 6 ] (another student of Neumann) and Pierre Curie . [ 7 ] Curie's principle "the symmetries of the causes are to be found in the effects" is a generalization of Neumann's principle. [ 8 ] At the end of the 19th century Voigt introduced tensor calculus to model the physical properties of anisotropic crystals. [ 9 ]
Double refraction occurs when a ray of light incident upon a birefringent material is split by polarization into two rays taking slightly different paths. The double refraction and rhomboidal cleavage of crystals of calcite , or Iceland spar , were first recorded in 1669 by Rasmus Bartholin . [ 10 ] In 1690 Christiaan Huygens analyzed double refraction in his book Traité de la lumière . [ 11 ] Huygens reasoned that the cleavage rhombohedron resulted from the stacking of spherical particles [ 12 ] and that the peculiarities of the transmission of light can be traced to the particular asymmetry of the crystal. [ 13 ]
In 1810 Étienne-Louis Malus determined that natural light, too, when reflected through a certain angle, behaves like one of the rays exiting a double-refracting crystal. [ 14 ] Malus called this phenomenon polarization . [ 15 ] In 1812 Jean-Baptiste Biot defined optically positive and negative crystals for the first time. [ 16 ] In 1819 David Brewster found that all crystals could be classified as isotropic, uniaxial or biaxial. [ 17 ] Augustin-Jean Fresnel was a significant researcher in the whole field of crystal optics , and published a detailed paper on double refraction in 1827 in which he described the phenomenon in terms of polarization, understanding light as a wave with field components in transverse polarization. [ 18 ] Crystal optics was an active research area during the 19th century [ 19 ] and comprehensive accounts of the field were published by Lazarus Fletcher (1891), [ 20 ] Theodor Liebisch (1891) [ 21 ] and Friedrich Pockels (1906). [ 22 ]
In 1824 Eilhard Mitscherlich observed that the angle between the cleavage faces of calcite changed with the temperature of the crystal. Mitscherlich concluded that, on heating, calcite contracts (has a negative coefficient of thermal expansion ) in a direction perpendicular to the trigonal axis while expanding (positive coefficient) along that axis. This implies that there is a cone of directions along which there is no thermal expansion. [ 23 ] In 1864 Hippolyte Fizeau used an optical interference method to make measurements on many crystals. [ 24 ] The measurements of the change of interfacial angle and the expansion of cut plates and bars were applied to crystals of all symmetries. [ 25 ]
Crystals with less than cubic symmetry are anisotropic and will generally have different expansion coefficients in different directions. If the crystal symmetry is monoclinic or triclinic, even the angles between the axes are subject to thermal changes. In these cases the coefficient of thermal expansion is a tensor . If the temperature T of a crystal is raised by an amount $\Delta T$, a deformation takes place that is described by the strain tensor $u_{ij} = \alpha_{ij}\,\Delta T$. The quantities $\alpha_{ij}$ are the coefficients of thermal expansion. Since $u_{ij}$ is a symmetrical polar tensor of second rank and T is a scalar, $\alpha_{ij}$ is a symmetric tensor of second rank. [ 26 ] The contemporary usage of the term tensor was introduced by Woldemar Voigt in 1898. [ 27 ]
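A minimal numerical sketch of this relation, using illustrative coefficients of the calcite type described above (negative perpendicular to the trigonal axis, positive along it; not measured values):

```python
import numpy as np

# Illustrative thermal expansion tensor (1/K) written in its principal axes
alpha = np.diag([-5.6e-6, -5.6e-6, 25e-6])

dT = 40.0        # temperature rise in K
u = alpha * dT   # strain tensor u_ij = alpha_ij * dT
print(u)         # diagonal strains: contraction in-plane, expansion along the axis
```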
Joseph Fourier was an early researcher in thermal conduction , publishing Théorie analytique de la chaleur in 1822. [ 28 ] The first experiments on thermal conduction in crystals were carried out by Jean-Marie Duhamel in 1832. [ 29 ]
Henri Hureau de Sénarmont conducted experiments to determine whether heat moves through crystals with directional dependence. [ 30 ] He found that, for non-cubic crystals, the isothermal envelope surrounding a point source of heat in a crystal plate had an elliptical shape whose exact form depended on the orientation of the crystal. [ 31 ] Sénarmont's results qualitatively established that thermal conductivity is directionally dependent ( thermal anisotropy ), with characteristic directions related to crystallographic axes. In 1848 Duhamel provided an analysis of Sénarmont's findings. [ 32 ]
George Gabriel Stokes and William Thomson provided mathematical theories to explain Sénarmont's observations. [ 33 ] Stokes acknowledged the connection between the phenomena and the symmetry of the crystal, and showed that the number of constants of heat conductivity reduces from nine to six in the case of two planes of symmetry. [ 34 ] The matrix of thermal conductivity components resulting from Stokes's derivation constituted a tensor . [ 30 ] Experiments by Franz Stenger in 1884 [ 35 ] examined the theories put forward by Stokes and Thomson and disproved some of their theoretical speculations. [ 36 ]
Thomas Johann Seebeck discovered the thermoelectric effect in 1821, although it has been claimed that Alessandro Volta should be given priority. [ 37 ] In 1844 Wilhelm Gottlieb Hankel investigated thermoelectricity in cobalt and iron sulfide crystals. Hankel showed that when certain external faces were developed the crystals were thermoelectrically positive relative to copper, whereas with other facial forms they were negative. [ 38 ] In 1850 Jöns Svanberg used bismuth and antimony crystals to demonstrate a directional variation of the thermoelectric effect. [ 39 ] In 1854 William Thomson put forward a mechanical theory of thermoelectric currents in crystalline solids. [ 40 ] In 1889 Theodor Liebisch analyzed the dependence of the thermoelectric force on the crystallographic direction in anisotropic crystals. [ 41 ]
The first observations on the variation of electrical conductivity with direction in a crystal ( anisotropy ) were made by Henri Hureau de Sénarmont in 1850 on 36 different substances. The results showed a correlation between the axes of symmetry and the directions of maximum or minimum conductivity. [ 42 ] In 1855 Carlo Matteucci performed experiments on bismuth. [ 43 ] In 1888, Helge Bäckström performed electrical conduction measurements on hematite , another crystal of rhombohedral symmetry. [ 44 ]
Electrical conductivity in a crystal is now defined as a second-rank symmetric tensor relating two vectors: $J_i = \sigma_{ij} E_j$, where $J_i$ is the current density, $\sigma_{ij}$ is the electrical conductivity tensor, and $E_j$ is the electric field intensity. [ 45 ]
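A short sketch of this relation with made-up principal-axis values, showing that in an anisotropic crystal the current density J is in general not parallel to the applied field E:

```python
import numpy as np

# Illustrative conductivity tensor (S/m) written in its principal axes
sigma = np.diag([1.0e3, 1.0e3, 4.0e3])

E = np.array([1.0, 0.0, 1.0])  # field at 45 degrees to the unique axis (V/m)
J = sigma @ E                  # J_i = sigma_ij * E_j
print(J)                       # [1000. 0. 4000.] - J is tilted toward the easy axis
```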
Until the 19th century crystals were regarded either as magnetic or nonmagnetic. Magnetic crystals are now called ferromagnetic to distinguish them from the several other kinds which have since been discovered. Siméon Denis Poisson (1826) put forward a theory of magnetism as applied to crystals and predicted the behaviour of crystals in a magnetic field, [ 46 ] which was verified by Julius Plücker in 1847. Plücker studied various natural crystals, such as quartz, and related the reaction of the crystal to a magnetic field to its symmetry. All these crystals were repelled from a strong field, unlike ferromagnetic crystals; they were therefore called diamagnetic . [ 47 ] In 1850 a number of investigations were carried out by Plücker and August Beer using torsion balances to measure the small forces involved in these observations. Not only were some crystals repelled from a strong field but others were slightly attracted; [ 48 ] these were called paramagnetic . Between 1850 and 1856 John Tyndall studied diamagnetism in crystals. [ 49 ]
By the end of the 19th century the three types of crystal, ferromagnetic, diamagnetic and paramagnetic, were well established and successful theories had related diamagnetic and paramagnetic crystals to their crystal symmetry. Ferromagnetic properties were dealt with by Pierre Weiss (1896) who explained the hysteresis by assuming that the atoms have permanent magnetic poles which are normally in random positions, but arrange themselves in parallel under the influence of a magnetic field. [ 50 ] On removing the field the mutual effect of the parallel dipoles tends to maintain the magnetized state. He further postulated that there were domains within which all the atomic dipoles were similarly orientated and that the N-S axis could be differently orientated in neighbouring domains. [ 51 ]
A dielectric is an electrical insulator that can be polarised by an applied electric field . In 1851 the first experiments on the behaviour of crystals in an electric field were carried out by Hermann Knoblauch in a manner similar to that used for the study of magnetic properties. [ 52 ] The conductivity of the crystals, both over the surface and through the body of the crystal, made these experiments unreliable. [ 53 ] In 1876 Elihu Root avoided some of these difficulties by employing a rapidly alternating field between parallel plates. [ 54 ] In 1893 Friedrich Pockels gave an account of the abnormally large piezoelectric constants of Rochelle salt . [ 55 ] A brief history on the theories of dielectrics in the 19th century has been written. [ 56 ]
In 1811 François Arago , who favoured the corpuscular theory of light, discovered the rotation of the plane of polarization of light travelling through quartz . [ 57 ] In 1812 Jean-Baptiste Biot , who favoured the wave theory of light, enunciated the laws of rotary polarization and their application to the analysis of various substances. [ 16 ] Biot discovered that while some crystals rotate the light to the right others rotate it to the left, and determined that the rotation is proportional to the thickness of substance traversed and to the wavelength of the light. [ 58 ]
In 1821 John Herschel pointed out the relation between the direction of rotation and the development of faces on quartz crystals. [ 59 ] Suspecting that rotatory polarization is an effect of a lack of symmetry, Herschel established that quartz crystals often present faces placed in such a way that those belonging to certain crystals are mirror images of the corresponding faces of other crystals. He explained the connection between this arrangement and the respective rotation of light to the right and to the left. [ 60 ] In 1822 Augustin-Jean Fresnel explained the rotation by postulating oppositely circularly polarized beams travelling with different velocities along the optic axis. [ 61 ] In 1831 George Biddell Airy gave an explanation of the formation of the spirals which bear his name. [ 62 ] In 1846 Michael Faraday discovered that the plane of polarization may also be rotated when light passes through an isotropic medium when it is in a magnetic field. [ 63 ] The corresponding Kerr effect can be observed on reflecting plane-polarized light from a polished ferromagnetic mirror when in a magnetized state.
In 1848 Louis Pasteur gave the general relation between crystal morphology and rotatory polarization. [ 64 ] Pasteur solved the mystery of polarized light acting differently with chemically identical crystals and solutions. Pasteur discovered the phenomenon of molecular asymmetry , that is that molecules could be chiral and exist as a pair of enantiomers . Pasteur's method was to physically separate the crystals of a racemic mixture of sodium ammonium tartrate into right- and left-handed crystals, and then dissolve them to make two separate solutions which rotated polarized light in opposite directions. [ 65 ]
In 1855 Christian August Hermann Marbach discovered that crystals of sodium chlorate, sodium bromate, sodium ammonium sulfate and sodium amyl acetate have the property of rotating the polarization plane. [ 66 ] In 1857 Alfred Des Cloizeaux advanced a general theory of rotatory polarization whilst studying cinnabar and strychnine sulphate. [ 67 ] In 1864 Josef Stefan introduced the banded spectrum in the study of rotatory polarization. [ 68 ] Theories of magnetic optics in ferromagnetic crystals were published in 1892 by D. A. Goldhammer, [ 69 ] and in 1893 by Paul Drude . [ 70 ] [ 71 ]
Conical refraction is an optical phenomenon in which a ray of light, passing through a biaxial crystal along certain directions, is refracted into a hollow cone of light. There are two possible conical refractions, one internal and one external.
In 1821–1822 Augustin-Jean Fresnel developed a theory of double refraction in both uniaxial and biaxial crystals. [ 72 ] Fresnel derived the equation for the wavevector surface in 1823, and André-Marie Ampère rederived it in 1828. [ 73 ] Many others investigated the wavevector surface of the biaxial crystal, but they all missed its physical implications.
William Rowan Hamilton , in his work on Hamiltonian optics , discovered the wavevector surface has four conoidal points and four tangent conics. [ 74 ] This implies that, under certain conditions, a ray of light could be refracted into a cone of light within the crystal. [ 75 ] He termed this phenomenon "conical refraction" and predicted two distinct types: internal and external, corresponding respectively to the conoidal points and tangent conics. Hamilton announced his discovery on 22 October 1832. He then asked Humphrey Lloyd to prove his theory experimentally. Lloyd first observed conical refraction on 14 December 1832 with a specimen of aragonite , and published his results in early 1833. [ 76 ] In 1833 James MacCullagh claimed that Hamilton's work was a special case of a theorem he had published in 1830. [ 77 ] Hamilton also exchanged letters with George Biddell Airy who was skeptical that conical refraction could be observed experimentally but became convinced after Lloyd's report. [ 78 ]
Hamilton and Lloyd's discovery was a significant victory for the wave theory of light and solidified Fresnel 's theory of double refraction. [ 79 ] The discovery of conical refraction is an example of a mathematical prediction being subsequently verified by experiment. [ 80 ]
Later theoretical work on conical refraction was published in 1860 by Robert Bellamy Clifton [ 81 ] and in 1874 by Jules Antoine Lissajous , [ 82 ] and experimental work in 1888 by Theodor Liebisch [ 83 ] and in 1889 by Albrecht Schrauf . [ 84 ] [ 71 ]
Photoelasticity describes changes in the optical properties of a material under mechanical deformation . The photoelastic phenomenon in transparent, non-crystalline materials (gels and glasses) was first discovered by David Brewster in 1815. [ 85 ] Brewster then detected the effect in crystals [ 86 ] and showed that uniaxial crystals could be made biaxial. [ 71 ] In 1822 Augustin-Jean Fresnel experimentally confirmed that the photoelastic effect was a stress-induced birefringence . [ 87 ]
Franz Ernst Neumann investigated double refraction in stressed transparent bodies. In 1841 Neumann published his elastic equations, which describe, in differential form, the changes which polarized light experiences when travelling through a stressed body. [ 88 ] The Neumann equations are the basis of all subsequent photoelasticity research. [ 89 ]
The photoelastic effect was analyzed by Friedrich Pockels , who also discovered the Pockels electro-optic effect (the production of birefringence on the application of an electric field ). In 1889/90 Pockels produced a phenomenological theory of both effects for all crystal classes. [ 90 ]
In 1809 Louis Cordier discovered the phenomenon of pleochroism while investigating a new mineral that he named dichroïte. Dichroïte ( cordierite ) crystals showed different colors when viewed along different axes. [ 91 ] From 1817 to 1819 David Brewster made a systematic study of light absorption and pleochroism in various minerals and showed that, in uniaxial crystals, the absorption is smallest in the direction of, and greatest at right angles to, the optical axis. [ 92 ] In 1820 John Herschel studied the absorption of light in biaxial crystals and explained the interference rings first observed by David Brewster . [ 93 ] In 1838 Jacques Babinet discovered that the greatest absorption in a crystal generally coincided with the direction of greatest refractive index. [ 94 ] In 1845 Wilhelm Haidinger published a general account of pleochroism in crystals. [ 95 ] In 1854 Henri Hureau de Sénarmont showed that transparent crystals stained by a dye during crystal growth became pleochroic. [ 96 ] [ 97 ]
In 1877 Paul Glan performed photometric observations on absorption. [ 98 ] In 1880 Hugo Laspeyres pointed out the existence of absorption axes (directions of least, intermediate, and greatest absorption). He investigated certain biaxial crystals and found that the absorption axes, although subject to the symmetry of the crystal, did not necessarily coincide with the principal directions of the indicatrix . [ 99 ] In 1888 Henri Becquerel made qualitative and quantitative observations. [ 100 ] Woldemar Voigt (1885) and Paul Drude (1890) presented theories of the absorption of light in crystals. [ 101 ] In 1906 Friedrich Pockels published his Lehrbuch der Kristalloptik [ 22 ] which gave an overview of the subject. [ 71 ]
Luminescence is the non-thermal emission of visible light by a substance; an example is the emission of visible light by minerals in response to irradiation by ultraviolet light. The term luminescence was first used by Eilhard Wiedemann in 1888; [ 102 ] he stated that luminescence was separate from thermal radiation, and he distinguished six different forms of luminescence according to their excitation, [ 103 ] for example photoluminescence , electroluminescence , etc. [ 104 ]
Fluorescence is luminescence which occurs during the irradiation of a substance by electromagnetic radiation; fluorescent materials generally cease to glow nearly immediately when the radiation source stops. [ 105 ] The term fluorescence was coined by George Stokes in 1852, and was derived from the behavior of fluorite when exposed to ultraviolet light. [ 106 ]
Phosphorescence is long-lived luminescence; phosphorescent materials continue to emit light for some time after the radiation stops. In 1857 Edmond Becquerel invented the phosphoroscope , and in a detailed study of phosphorescence and fluorescence, showed that the duration of phosphorescence varies by substance, and that phosphorescence in solids is due to the presence of finely dispersed foreign substances. Becquerel suggested that fluorescence is simply phosphorescence of a very short duration. [ 107 ] The most prominent phosphorescent material for 130 years was ZnS doped with Cu⁺ (or later Co²⁺) ions. The material was discovered in 1866 by Théodore Sidot who succeeded in growing tiny ZnS crystals by a sublimation method. [ 108 ]
Crystalloluminescence is the emission of light during crystal growth from solution. The first observation was that of potassium sulfate which was reported by a number of researchers in the eighteenth century; other substances reported in the early literature which exhibit crystalloluminescence include strontium nitrate, cobalt sulfate, potassium hydrogen sulfate, sodium sulfate, and arsenious acid. [ 109 ] In 1918 Harry Weiser summarised the research on crystalloluminescence up to that date. [ 110 ] Neither the spectral distribution nor the excitation mechanisms of crystalloluminescence are understood. [ 111 ]
Triboluminescence is the generation of light when certain materials, for example quartz , are rubbed; [ 112 ] fractoluminescence is the emission of light from the fracture of a crystal. The first recorded observation is attributed to Francis Bacon when he recorded in his 1620 Novum Organum that sugar sparkles when broken or scraped in the dark. [ 113 ] The scientist Robert Boyle also reported on some of his work on triboluminescence in 1664. [ 114 ]
In 1677 Henry Oldenburg described the luminescence of fluorite, CaF₂, on heating. [ 115 ] In 1830 Thomas Pearsall observed that colourless fluorite could be coloured by discharging sparks from a Leyden jar held against it. [ 116 ] In 1881 luminescence excited by cathode rays was described by William Crookes . [ 117 ] In 1885 Edmond Becquerel found that when crystals were bombarded by cathode rays they became coloured and also emitted light. [ 118 ] In 1894 Eugen Goldstein showed that ultraviolet light has the same effect as cathode rays. [ 119 ] [ 120 ]
The study of the optical properties of opaque substances has been closely linked with the development of suitable microscopes. [ 121 ] The first instrument adapted to reflected light was the Lieberkühn reflector attributed to Johann Nathanael Lieberkühn . [ 122 ] The use of polished and etched surfaces for this type of study was introduced by Jöns Jacob Berzelius in 1813. [ 122 ] A theory of the light reflected from metals was put forward by Augustin-Louis Cauchy in 1848. [ 123 ] In 1858 Henry Clifton Sorby established the technique of cutting minerals and crystals into thin sections for examination under the polarizing microscope. [ 124 ] In 1864 Sorby studied the microscopical structure of minerals from meteorites . [ 125 ] In 1888 Paul Drude published work on reflection from antimony sulfide. [ 126 ]
Heinrich Rubens measured the dependence of the refractive index of quartz on wavelength, and found absorption in particular infrared wavelength ranges. By 1896 Rubens saw these bands as a potential filter that would allow him to separate out an almost monochromatic beam from the broad range of infrared radiation that his sources produced. [ 127 ] In 1897 Rubens and his student Ernest Fox Nichols studied the reststrahlen (residual rays) [ 128 ] obtained when infrared rays of appropriate wavelength are reflected from the surfaces of crystals. [ 129 ]
Pyroelectricity is the generation of a temporary voltage in a crystal when subjected to a temperature change. [ 130 ] The appearance of electrostatic charges upon a change of temperature has been observed since ancient times, in particular with tourmaline and was described, among others, by Steno , Linnaeus , Aepinus and René Just Haüy . Aepinus published an account of his observations in 1756. [ 131 ] Haüy made detailed investigations of pyroelectricity; [ 132 ] he detected pyroelectricity in calamine and showed that electricity in tourmaline was strongest at the poles of the crystal and became imperceptible at the middle. Haüy published a book on electricity and magnetism in 1787. [ 133 ] Haüy later showed that hemihedral crystals are electrified by temperature change while holohedral (symmetric) crystals are not.
Research into pyroelectricity became more quantitative in the 19th century. [ 134 ] In 1824 David Brewster gave the effect the name it has today. [ 135 ] In 1840 Gabriel Delafosse , Haüy's student, theorized that only molecules which are not symmetrical can be polarized electrically. [ 136 ] Both William Thomson in 1878 [ 137 ] and Woldemar Voigt in 1897 [ 138 ] helped develop a theory for the processes behind pyroelectricity.
A detailed history of pyroelectricity has been written by Sidney Lang; [ 139 ] shorter histories have also been published. [ 140 ]
Some minerals, for example mica , are highly elastic, springing back to their original shape after being bent. Others, for example talc , may be readily bent but do not return to their original form when released. The initial theories of the elasticity of solid bodies were developed in the 1820s. Augustin-Louis Cauchy and Siméon Denis Poisson published theories of the mutual action of a regular arrangement of particles for a non-cubic body in 1823 [ 141 ] and 1829 respectively. [ 142 ] In 1827 Claude-Louis Navier published a theory for an isotropic body. [ 143 ] Also during the 1820s Friedrich Mohs introduced his eponymous scale of hardness . [ 144 ] In 1834 Franz Ernst Neumann published a paper on the elasticity of homohedral crystals. [ 145 ]
In 1828 Cauchy generalised the problem and showed that 36 independent constants were required to describe elasticity in crystals. [ 146 ] George Green (1837) introduced the limitation that the force between any two elements of a crystal, however small, must lie along the line joining their centres. [ 147 ] This reduced the number of constants from 36 to 21. William Thomson (1857) showed that Green's assumption was unnecessary and that the thermodynamic requirements of a reversible process require only 21 constants, without any special assumptions. [ 148 ] In 1874 Woldemar Voigt measured the elasticity of rock salt [ 149 ] and G. Baumgarten measured the elasticity of calcite . [ 150 ] In 1887 Wilhelm Röntgen and J. Schneider measured the cubic compressibility of sodium and potassium chlorides. [ 151 ] In 1877 Lambros Koromilas measured the elasticity of gypsum and mica by twisting mineral bars; [ 152 ] in 1881 H. Klang carried out similar experiments with fluorites. [ 153 ]
In the period 1874–1888 Voigt was the leading researcher on the elasticity of crystals. Voigt showed that the number of elasticity constants reduces as more symmetry is introduced into the crystal. For a triclinic crystal, which is the most general case, 21 elasticity constants are required. For a monoclinic crystal there are 13 elasticity constants, for a rhombic crystal 9, for a hexagonal crystal 7, for a tetragonal crystal 6, and finally for a cubic crystal there are only 3 (see the sketch below). [ 154 ] A summary of developments in the field was published by W. A. Wooster. [ 155 ]
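The sketch below encodes Voigt's counts as given above and, as an example, assembles the 6×6 stiffness matrix of a cubic crystal (in the Voigt notation that grew out of this work) from its three constants; the rock-salt values are approximate:

```python
import numpy as np

# Independent elastic constants per crystal system, as stated above
ELASTIC_CONSTANTS = {
    "triclinic": 21, "monoclinic": 13, "rhombic": 9,
    "hexagonal": 7, "tetragonal": 6, "cubic": 3,
}

def cubic_stiffness(c11, c12, c44):
    """6x6 stiffness matrix (Voigt notation) of a cubic crystal."""
    C = np.zeros((6, 6))
    C[:3, :3] = c12                    # coupling between normal strains
    np.fill_diagonal(C[:3, :3], c11)   # normal stress-strain terms
    C[3, 3] = C[4, 4] = C[5, 5] = c44  # shear terms
    return C

# Rock salt (NaCl), approximate room-temperature constants in GPa
print(cubic_stiffness(49.1, 12.8, 12.7))
```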
In 1880 Pierre and Jacques Curie discovered piezoelectricity (an electric charge that accumulates in response to applied mechanical stress) in certain crystals, including quartz , tourmaline , cane sugar and sodium chlorate . [ 156 ] [ 157 ] The Curies, however, did not predict the converse piezoelectric effect (the internal generation of a mechanical strain resulting from an applied electric field ). The converse effect was deduced by Gabriel Lippmann in 1881. [ 158 ] The Curies immediately confirmed the existence of the effect, [ 159 ] and went on to obtain quantitative proof of the complete reversibility of electro-elasto-mechanical deformations in piezoelectric crystals. [ 160 ]
In 1890 Woldemar Voigt published a phenomenological theory [ 161 ] of the piezoelectric effect based on the symmetry of crystals without centrosymmetry . [ 162 ]
Before the 20th century crystallography was not a well-established academic discipline. There were no academic positions specifically in crystallography. Workers in the field normally carried out their crystallographic research as an ancillary to other employment(s), or had independent means. The leading workers in the field of physical crystallography were employed as follows:
In the nineteenth century there were informal schools of physical crystallography researchers in France (Arago, E. Becquerel, Biot, Fresnel, Haüy, Sénarmont), [ 192 ] Germany (Drude, Groth, Liebisch, Mitscherlich, Mohs, Neumann, Pockels, Voigt) [ 193 ] and the British Isles (Airy, Brewster, Hamilton, Stokes, Thomson). [ 194 ]
Until the founding of Zeitschrift für Krystallographie und Mineralogie by Paul Groth in 1877 there was no leading journal for the publication of crystallographic papers. The majority of crystallographic research was published in the journals of national scientific societies, or in mineralogical journals. [ 195 ] The inauguration of Groth's journal marked the emergence of crystallography as a mature science independent of geology. [ 196 ] | https://en.wikipedia.org/wiki/Physical_crystallography_before_X-rays
Physical mapping is a technique used in molecular biology to determine the order of, and the physical distance in DNA base pairs between, DNA markers . [ 1 ] It is one of the gene mapping techniques which can determine the sequence of DNA base pairs with high accuracy. Genetic mapping , another approach to gene mapping, can provide the markers needed for physical mapping. However, because genetic mapping deduces relative gene positions from recombination frequencies , it is less accurate than physical mapping.
Physical mapping uses DNA fragments and DNA markers to assemble larger DNA pieces. With the overlapping regions of the fragments, [ 2 ] researchers can deduce the positions of the DNA bases. There are different techniques to visualize the gene location, including somatic cell hybridization , radiation hybridization and in situ hybridization. [ 3 ]
The different approaches to physical mapping are suited to analyzing genomes of different sizes and to achieving different levels of accuracy. Low-resolution and high-resolution mapping are the two broad classes, particularly for the investigation of chromosomes . [ 4 ] The three basic varieties of physical mapping are fluorescent in situ hybridization (FISH), restriction site mapping and sequencing by clones. [ 5 ]
The goal of physical mapping, as a common mechanism under genomic analysis, is to obtain a complete genome sequence in order to deduce any association between the target DNA sequence and phenotypic traits . [ 6 ] If the actual positions of genes which control certain phenotypes are known, it is possible to resolve genetic diseases by providing advice on prevention and developing new treatments. [ 5 ]
Low-resolution physical mapping is typically capable of resolving DNA ranging from one base pair to several megabases. In this category, most mapping methods involve generating a somatic cell hybrid panel, which makes it possible to map a human DNA sequence of interest to a specific chromosome using animal cells, such as those of mice and hamsters. [ 4 ] The hybrid cell panel is produced by collecting hybrid cell lines containing human chromosomes, identified by polymerase chain reaction (PCR) screening with primers specific to the human sequence of interest as the hybridization probe . The human chromosome carrying that sequence will be present in every cell line that screens positive.
There are different approaches to low-resolution physical mapping, including chromosome-mediated gene transfer and irradiation fusion gene transfer, both of which generate the hybrid cell panel. Chromosome-mediated gene transfer is a process that coprecipitates human chromosome fragments with calcium phosphate onto the recipient cell line, leading to stable transformants whose chromosomes retain human chromosome fragments ranging in size from 1 to 50 megabase pairs. [ 4 ] Irradiation fusion gene transfer produces radiation hybrids which contain the human sequence of interest and a random set of other human chromosome fragments. Markers from fragments of human chromosome in radiation hybrids give cross-reactivity patterns, which are further analyzed to generate a radiation hybrid map by ordering the markers and breakpoints. [ 5 ] This provides evidence on whether the markers are located on the same human chromosome fragment, and hence on the order of the gene sequence.
High-resolution physical mapping can resolve from hundreds of kilobases down to a single nucleotide of DNA. [ 4 ] A major technique for mapping such large DNA regions is high-resolution FISH mapping , which can be achieved by the hybridization of probes to extended interphase chromosomes or artificially extended chromatin . Since their hierarchic structure is less condensed compared with prometaphase and metaphase chromosomes, the standard in situ hybridization targets, a high resolution of physical mapping can be produced. [ 5 ]
FISH mapping using interphase chromosomes is a conventional in situ method to map DNA sequences from 50 to 500 kilobases, mainly syntenic DNA clones. However, naturally extended chromosomes may be folded back, producing alternative physical map orders. As a result, statistical analysis is necessary to generate an accurate map order from interphase chromosomes. [ 4 ]
If artificially stretched chromatin is used instead, mapping resolutions can exceed 700 kilobases. To produce extended chromosomes on a slide, direct visual hybridization (DIRVISH) is often carried out, in which cells are lysed with detergent so that DNA released into the solution flows to the other end of the slide. An example of high-resolution FISH mapping using stretched chromatin is extended chromatin fiber (ECF) FISH. The method deduces the order of the desired regions on the DNA sequence by analyzing the partial overlaps and gaps between yeast artificial chromosomes (YACs). [ 4 ] Eventually, the linear sequence of the DNA regions of interest can be determined. Note that if metaphase chromosomes are used in FISH mapping, the resulting resolution is very poor, and such experiments are classified as low-resolution rather than high-resolution mapping. [ 5 ]
Restriction mapping is a top-down strategy that divides a chromosome target into finer regions. [ 7 ] Restriction enzymes are used to digest a chromosome and produce an ordered set of DNA fragments. It involves genomic fragments of the target rather than cloned fragments in a library. [ 8 ] The fragments are hybridized to probes chosen at random from the genomic library for detection purposes. The lengths of the fragments are measured by electrophoresis , which can be used to deduce their distance along the map according to the restriction sites , the markers of a physical map. [ 8 ] The process involves combinatorial algorithms. [ 9 ]
During the process, a chromosome is obtained from a hybrid cell and cut at rare restriction sites to produce large fragments. The fragments are separated by size and undergo hybridization, forming the macrorestriction map and different contiguous blocks (i.e. contigs ). To ensure the fragments are linked, linking clones that span the same rare cutting sites of the large fragments can be used.
After producing the low-resolution map, the fragments can be cut into smaller sections by restriction nucleases for further analysis to produce a map with higher resolution. Pulsed-field gel (PFG) fractionation can be used for the separation and purification of the fragments generated from a small genome.
Through different digestion approaches, different types of DNA fragments are produced. The variation in types of fragments might affect the calculation result.
This technique digests the target DNA with two restriction enzymes separately and then with the two enzymes in combination. [ 10 ] It assumes that complete digestion occurs at each restriction site. The lengths of the DNA fragments are measured and used to order the fragments by computation. This approach is easier to handle experimentally, but the combinatorial problem required for mapping is harder, as sketched below.
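A minimal brute-force sketch of this combinatorial task (the "double digest problem"), assuming idealized, error-free fragment lengths; the function and toy data are illustrative, not from the cited sources:

```python
from itertools import permutations

def double_digest(A, B, AB):
    """Search for orderings of the single-enzyme fragment lists A and B
    whose merged cut sites reproduce the combined-digest multiset AB.
    Brute force, hence exponential time."""
    def cut_sites(order):
        sites, total = set(), 0
        for length in order[:-1]:
            total += length
            sites.add(total)
        return sites

    target = sorted(AB)
    for a in set(permutations(A)):
        for b in set(permutations(B)):
            sites = sorted(cut_sites(a) | cut_sites(b) | {sum(A)})
            frags = [j - i for i, j in zip([0] + sites, sites)]
            if sorted(frags) == target:
                return a, b   # one consistent pair of orderings
    return None

# Toy 10 kb molecule: enzyme A alone, enzyme B alone, both together
print(double_digest(A=[4, 6], B=[3, 7], AB=[3, 1, 6]))
```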
This technique uses one restriction enzyme to digest the desired DNA in separate experiments with different durations of exposure, [ 10 ] so that the extent of digestion differs between fragments. DNA methylation can be used to prevent digestion from being completed at some cutting sites. This method requires more experimental care, but its mathematical problem can be solved by a straightforward exponential-time algorithm, as in the sketch below.
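A sketch of the classic backtracking algorithm for this "partial digest problem" (worst case exponential, as noted above), assuming an idealized, complete multiset of pairwise distances between cut sites; names and toy data are illustrative:

```python
def partial_digest(L):
    """Reconstruct cut-site positions from the multiset L of all
    pairwise distances produced by partial digestion (backtracking)."""
    L = list(L)
    width = max(L)
    L.remove(width)
    solutions = []

    def distances(y, X):
        return [abs(y - x) for x in X]

    def remove_all(ds, L):
        rest = list(L)
        for d in ds:
            if d not in rest:
                return None     # candidate site inconsistent with L
            rest.remove(d)
        return rest

    def place(L, X):
        if not L:
            solutions.append(sorted(X))
            return True
        y = max(L)
        for cand in {y, width - y}:  # the largest distance must touch 0 or width
            rest = remove_all(distances(cand, X), L)
            if rest is not None and place(rest, X | {cand}):
                return True
        return False

    place(L, {0, width})
    return solutions[0] if solutions else None

# Distances generated by the cut sites {0, 2, 4, 7, 10}
print(partial_digest([2, 2, 3, 3, 4, 5, 6, 7, 8, 10]))
# -> [0, 3, 6, 8, 10], the mirror image: solutions are unique only up to reflection
```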
Using clones to generate a physical map is a bottom-up approach with fairly high resolution. [ 8 ] It uses the existing cloned fragments in genomic libraries to form contigs. By cloning the partially digested fragments generated by bacterial transformation, immortal clones with overlapping regions of the genome are produced, examined by fingerprinting methods, and stored in the libraries. [ 11 ] During the sequencing process, the clones are randomly selected and placed on a set of microtitre plates. They are then fingerprinted by different methods. To ensure there is a minimum set of clones forming one contig for a genome (i.e. a tiling path), the library used will have fivefold to tenfold redundancy. However, such techniques might leave unknown gaps in the map produced, or eventually reach saturation in clones.
Physical mapping is a technique used to complete the sequencing of a genome. Ongoing projects that determine DNA base pair sequences, notably the Human Genome Project , give knowledge of the order of nucleotides and allow further investigation to answer genetic questions, particularly the association between a target sequence and the development of traits. The individual DNA sequences isolated and mapped by physical mapping can provide information on the transcription and translation processes during the development of organisms, hence identifying the specific function of a gene and the associated traits produced. [ 6 ] As a result of understanding the expression and regulation of the genes, potential new treatments can be developed to alter protein expression patterns in specific tissues . Moreover, if the location and sequence of disease genes are identified, medical advice can be given to potential patients who are carriers of the disease gene, with reference to the knowledge of the gene function and products. [ 5 ] | https://en.wikipedia.org/wiki/Physical_mapping
The subject of physical mathematics is concerned with mathematics that is motivated by physics and is considered by some to be a subfield of mathematical physics .
Physically motivated mathematics existed within a tradition of mathematical analysis of nature that goes back to the ancient Greeks. A good example is Archimedes' Method of Mechanical Theorems , where the principle of the balance is used to find results in pure geometry . This tradition, elaborated further by Islamic and Byzantine scholars, was reintroduced to the West in the 12th century and during the Renaissance . It became known as "mixed mathematics" and was a major contributor to the emergence of modern mathematical physics in the 17th century . [ 1 ]
The details of physical units and their manipulation were addressed by Alexander Macfarlane in Physical Arithmetic in 1885. [ 2 ] The science of kinematics created a need for mathematical representation of motion and has found expression with complex numbers , quaternions , and linear algebra .
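As an illustration of the quaternion representation of motion mentioned above, this minimal sketch rotates a vector with the Hamilton product q v q* (a standard construction, not tied to any particular source):

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, axis, angle):
    """Rotate vector v about a unit axis by angle, via q v q*."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    q = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    q_conj = q * np.array([1, -1, -1, -1])
    return quat_mul(quat_mul(q, np.concatenate(([0.0], v))), q_conj)[1:]

print(rotate(np.array([1.0, 0.0, 0.0]), [0, 0, 1], np.pi / 2))  # ~[0, 1, 0]
```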
At the University of Cambridge the Mathematical Tripos tested students on their knowledge of "mixed mathematics". [ 3 ] "... [N]ew books which appeared in the mid-eighteenth century offered a systematic introduction to the fundamental operations of the fluxional calculus and showed how it could be applied to a wide range of mathematical and physical problems. ... The strongly problem-oriented presentation in the treatises ... made it much easier for university students to master the fluxional calculus and its applications [and] helped define a new field of mixed mathematical studies..."
An adventurous expression of physical mathematics is found in Maxwell 's A Treatise on Electricity and Magnetism , which used partial differential equations . The text aspired to describe phenomena in four dimensions, but the foundation for this physical world, Minkowski space , trailed by forty years.
String theorist Greg Moore said this about physical mathematics in his vision talk at Strings 2014. [ 4 ]
"The use of the term “Physical Mathematics” in contrast to the more traditional “ Mathematical Physics ” by myself and others is not meant to detract from the venerable subject of Mathematical Physics but rather to delineate a smaller subfield characterized by questions and goals that are often motivated, on the physics side, by quantum gravity , string theory , and supersymmetry , (and more recently by the notion of topological phases in condensed matter physics ), and, on the mathematics side, often involve deep relations to infinite-dimensional Lie algebras (and groups), topology , geometry , and even analytic number theory , in addition to the more traditional relations of physics to algebra, group theory , and analysis ."
| https://en.wikipedia.org/wiki/Physical_mathematics
Physical metallurgy is one of the two main branches of the scientific approach to metallurgy , which considers in a systematic way the physical properties of metals and alloys . It is basically the fundamentals and applications of the theory of phase transformations in metals and alloys. [ 1 ] While chemical metallurgy involves the domain of reduction/oxidation of metals, physical metallurgy deals mainly with mechanical and magnetic/electric/thermal properties of metals – as described by solid-state physics .
Timeline: [ 2 ] | https://en.wikipedia.org/wiki/Physical_metallurgy |
In natural language and physical science , a physical object or material object (or simply an object or body ) is a contiguous collection of matter , within a defined boundary (or surface ), that exists in space and time . It is usually contrasted with abstract objects and mental objects . [ 1 ] [ 2 ]
Also in common usage, an object is not constrained to consist of the same collection of matter : atoms or parts of an object may change over time. An object is usually meant to be defined by the simplest representation of the boundary consistent with the observations. However, the laws of physics apply directly only to objects that consist of the same collection of matter.
In physics , an object is an identifiable collection of matter , which may be constrained by an identifiable boundary, and may move as a unit by translation or rotation, in 3-dimensional space .
Each object has a unique identity, independent of any other properties. Two objects may be identical, in all properties except position, but still remain distinguishable. In most cases the boundaries of two objects may not overlap at any point in time. The property of identity allows objects to be counted.
Examples of models of physical bodies include, but are not limited to, a particle and several interacting smaller bodies ( particulate or otherwise). Discrete objects stand in contrast to continuous media .
The common conception of physical objects includes that they have extension in the physical world , although there do exist theories of quantum physics and cosmology which arguably challenge [ how? ] this. In modern physics, "extension" is understood in terms of spacetime : roughly speaking, it means that for a given moment of time the body has some location in space (although not necessarily amounting to the abstraction of a point in space and time ). A physical body as a whole is assumed to have such quantitative properties as mass , momentum , electric charge , other conserved quantities , and possibly other quantities.
An object with known composition and described in an adequate physical theory is an example of physical system .
An object is known by the application of senses . The properties of an object are inferred by learning and reasoning based on the information perceived. Abstractly, an object is a construction of our mind consistent with the information provided by our senses, using Occam's razor .
In common usage, an object is the material inside its boundary, in three-dimensional space. The boundary of an object is a contiguous surface which may be used to determine what is inside and what is outside the object. An object is a single piece of material whose extent is determined by a description based on the properties of the material. An imaginary sphere of granite within a larger block of granite would not be considered an identifiable object, in common usage. A fossilized skull encased in a rock may be considered an object because it is possible to determine the extent of the skull based on the properties of the material.
For a rigid body , the boundary of an object may change over time by continuous translation and rotation . For a deformable body the boundary may also be continuously deformed over time in other ways.
An object has an identity . In general two objects with identical properties, other than position at an instance in time, may be distinguished as two objects and may not occupy the same space at the same time (excluding component objects). An object's identity may be tracked using the continuity of the change in its boundary over time. The identity of objects allows objects to be arranged in sets and counted .
The material in an object may change over time. For example, a rock may wear away or have pieces broken off it. The object will be regarded as the same object after the addition or removal of material, if the system may be more simply described with the continued existence of the object, than in any other way. The addition or removal of material may discontinuously change the boundary of the object. The continuation of the object's identity is then based on the description of the system by continued identity being simpler than without continued identity.
For example, a particular car might have all its wheels changed and still be regarded as the same car.
The identity of an object may not split. If an object is broken into two pieces at most one of the pieces has the same identity. An object's identity may also be destroyed if the simplest description of the system at a point in time changes from identifying the object to not identifying it. Also an object's identity is created at the first point in time that the simplest model of the system consistent with perception identifies it.
An object may be composed of components. A component is an object completely within the boundary of a containing object.
A living thing may be an object, and is distinguished from non-living things by the designation of the latter as inanimate objects . Inanimate objects generally lack the capacity or desire to undertake actions, although humans in some cultures may tend to attribute such characteristics to non-living things. [ 3 ]
In classical mechanics a physical body is a collection of matter having properties including mass , velocity , momentum and energy . The matter exists in a volume of three-dimensional space . This space is its extension .
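A minimal sketch, with made-up numbers, of how these aggregate properties follow from modelling a body as a collection of point masses:

```python
import numpy as np

m = np.array([2.0, 1.0, 3.0])                     # masses (kg)
r = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 0.]])  # positions (m)
v = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 2.]])  # velocities (m/s)

M = m.sum()                                   # total mass
com = (m[:, None] * r).sum(axis=0) / M        # centre of mass
p = (m[:, None] * v).sum(axis=0)              # total momentum
E_kin = 0.5 * (m * (v**2).sum(axis=1)).sum()  # total kinetic energy

print(M, com, p, E_kin)
```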
Interactions between objects are partly described by orientation and external shape.
In continuum mechanics an object may be described as a collection of sub-objects, down to an infinitesimal division, which interact with each other by forces that may be described internally by pressure and mechanical stress .
In quantum mechanics an object is a particle or collection of particles. Until measured, a particle does not have a physical position. A particle is defined by a probability distribution of finding the particle at a particular position. There is a limit to the accuracy with which the position and velocity may be measured . A particle or collection of particles is described by a quantum state .
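A short numerical sketch (illustrative parameters, natural units with ħ = 1) of that measurement limit: a Gaussian wave packet saturates the Heisenberg bound Δx·Δp = ħ/2:

```python
import numpy as np

hbar = 1.0
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 1.7  # position spread of the packet
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# spread in position from the probability density |psi|^2
px = np.abs(psi)**2 * dx
sx = np.sqrt(np.sum(x**2 * px) - np.sum(x * px)**2)

# spread in momentum from the Fourier transform of psi
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
pp = np.abs(np.fft.fft(psi))**2
pp /= pp.sum()
sp = np.sqrt(np.sum(p**2 * pp) - np.sum(p * pp)**2)

print(sx * sp, hbar / 2)  # both ~0.5: the packet saturates the bound
```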
These ideas vary from the common usage understanding of what an object is.
In particle physics , there is a debate as to whether some elementary particles are not bodies, but are points without extension in physical space within spacetime , or are always extended in at least one dimension of space as in string theory or M theory .
In some branches of psychology , depending on school of thought , a physical object has physical properties , as compared to mental objects . In ( reductionistic ) behaviorism , objects and their properties are the (only) meaningful objects of study. While in modern-day behavioral psychotherapy, it is still only the means for goal-oriented behavior modifications, in Body Psychotherapy it is not a means only anymore, but its felt sense is a goal of its own. In cognitive psychology , physical bodies as they occur in biology are studied in order to understand the mind , which may not be a physical body, as in functionalist schools of thought.
A physical body is an enduring object that exists throughout a particular trajectory of space and orientation over a particular duration of time , and which is located in the world of physical space (i.e., as studied by physics ). This contrasts with abstract objects such as mathematical objects which do not exist at any particular time or place.
Examples are a cloud , a human body , a banana , a billiard ball, a table, or a proton . This is contrasted with abstract objects such as mental objects , which exist in the mental world , and mathematical objects . Other examples that are not physical bodies are emotions , the concept of " justice ", a feeling of hatred, or the number "3". In some philosophies, like the idealism of George Berkeley , a physical body is a mental object , but still has extension in the space of a visual field. | https://en.wikipedia.org/wiki/Physical_object |
Physical oncology (PO) is defined as the study of the role of mechanical signals in a cancerous tumor . The mechanical signals can be forces or pressures ("pull", "push" and "shear" designating forces or pressures that pull, push or act tangentially). More generally, one speaks of a " stress field " and a " stress tensor ". [ 1 ] [ 2 ] [ 3 ]
A cancerous tumor (or "solid tumor" in the jargon of oncologists, to differentiate it from hematological malignancies) is an organ consisting of two tissues: in the center the cancerous tissue proper, and around it the extracellular matrix (ECM), sometimes called stroma, chorion or connective tissue. The concept of connective tissue is interesting because it defines a tissue that runs through the entire organism (except the brain) and is a preferred transmitter of mechanical signals. But for the cancer organ – isolated from this connective system – the term ECM is preferred.
The cancerous tissue is derived from a normal tissue of the body: breast cancer arises from a cancerous transformation of normal mammary glandular tissue. It looks more or less like the original tissue, and is said to be more or less differentiated. A poorly differentiated tumor has a microscopic appearance far from that of the normal tissue; it carries a poor prognosis, will produce more metastases, and will be more difficult to treat.
Only cancers derived from "epithelia" are considered here, that is, from the tissue that covers the organs at their interfaces with air, liquids, or the outside world. Epithelial cells are contiguous and polarized. More than 90% of cancers (breast, prostate, colon/rectum, bronchi, pancreas, etc.) arise from these epithelia after a long process of cancerization.
ECM is a mixture of cells (immune, fibroblasts , etc.) dispersed in proteins, most of them collagen . It surrounds the tumor.
It is analogous to connective tissue and basal membrane , which is a local condensation, located below normal epithelia. This connective tissue allows oxygen and nutrients to diffuse to the epithelia, which are not vascularized.
In the tumor ECM, once the tumor exceeds about one mm³, a network of blood vessels rapidly forms – the "neovascularization" (induced by " neoangiogenesis ") – around the tumor, which allows oxygen and nutrients to diffuse into the cancerous tissue itself, which is not vascularized. [ 4 ]
The cancerous tissue itself is derived from the cancerous transformation of an epithelium.
Cancerization is a multi-year process. The appearance of cancer is signified by the crossing of the basement membrane into the underlying connective tissue by one or more cancer cells.
Several teams, in the USA in particular, had maintained an expertise in the study of non-biological signals in oncology (Donald Ingber, Mina Bissell then Valerie Weaver, Rakesh J Jain among others). [ 5 ] [ 6 ] [ 7 ]
But the absolute dominance of genetics and molecular biology since the middle of the 20th century had marginalized this approach until its revival at the beginning of the 21st century. This renewal incorporates the immense gains of genetics and molecular biology into the mechanobiological approach. PO, on the other hand, validates its results thanks to these achievements but does not use their concepts.
[Flattened table: "Biology / Mechanics comparison – some differences between biological and physical signals". Only fragments of the entries survive extraction: "(20% of cell volume)", "Compressible", "Almost instantaneous", "Bidirectional".]
The use of mechanical signals is therefore also the support of mechanobiology, whose objective is very different from that of PO. Indeed, as the table above indicates, the study of mechanotransduction , which is the support of mechanobiology, uses a mechanical "input", but the signal collected at the "output" is biological. As a result, many of the articles published in mechanobiology end with the phrase "we have defined a target to find a therapeutic molecule", which precludes any therapeutic approach by the mechanical signals themselves.
But this shift from the physical sciences to the biological sciences is problematic in the absence of any bridge between these two sciences – one quantitative (physics, based on mathematical language), the other qualitative (based on the laws of genetics and molecular biology).
PO aims to study the effect of a mechanical input on a mechanical output. We will see that this output can be synthesized in tissue architecture.
The diagnosis of cancer is made by examining a fragment of the tumor (biopsy) under the microscope. The tissue phenotype – here, cancerous tissue – is the sum of the cellular and tissue phenotypes. The phenotype of the cell is taken to be the translation of the genotype (and of the environment: epigenetics) expressed in a given cell: thus, a liver cell does not look at all like a pancreas cell because it does not express the same genes (yet all are present in the genome of all cells).
These characteristics are summarized by: differentiation, cell division (mitosis), apoptosis (or "cell suicide") and cell death. The doctor in charge of the diagnosis under the microscope (the pathologist) describes the biopsy based on these criteria. The tissue phenotype is centered on architecture: normal tissue is Euclidean (hexagons, trapezoids, circles, etc.), familiar to our brains; cancerous tissue is fractal, less familiar. It can be summed up in a coefficient of fractality that is strongly correlated with the prognosis and with the components of the cellular phenotype. Thus, a high coefficient of fractality is correlated with a poorly differentiated tumor, many mitoses, little apoptosis and a poor prognosis.
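Such a coefficient of fractality can be estimated, for instance, by box counting on a binarized image; the sketch below (an illustration, not the specific method used by the pathologists discussed here) recovers a dimension of about 2 for a filled Euclidean shape:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a binary image:
    count occupied boxes N(s) at box sizes s and fit log N(s) ~ -D log s."""
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check on a filled square: dimension ~2
mask = np.zeros((256, 256), dtype=bool)
mask[32:224, 32:224] = True
print(box_counting_dimension(mask))
```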
And here we have to mention Mina Bissell: "in oncology the tissue phenotype is dominant over the cellular genotype". [ 6 ] [ 8 ]
PO was made possible by apparently minor technical changes that allowed in vitro and then in vivo models to come closer to the reality of the cancerous tumor in the patient.
For a very long time, two-dimensional (2D) cell cultures were used in glass and then plastic dishes. The cultured cells thus adhered to a bottom made of very rigid material; rigidity is measured by the Young's modulus, which is very high for these supports.
Young's modulus, or modulus of elasticity, is the constant that relates tensile/compressive stress to the resulting strain at the onset of deformation of an isotropic elastic material.
It is expressed in pascals (Pa), the unit of pressure.
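A minimal illustration of the definition (stress = E × strain in one dimension), using the order-of-magnitude tissue moduli quoted later in this article; the strain value is arbitrary:

```python
# Young's moduli in Pa (0.8 kPa, 1.1 kPa and ~2 kPa, as quoted below)
E = {"cancerous tissue": 0.8e3, "normal tissue": 1.1e3, "ECM": 2.0e3}

strain = 0.05  # 5% deformation, an arbitrary example value
for tissue, modulus in E.items():
    print(f"{tissue}: stress = {modulus * strain:.1f} Pa")  # softer -> less stress
```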
Then three-dimensional (3D) cultures appeared, in which dividing cells form multicellular spheres surrounded by a gel-like culture medium whose Young's modulus is close to that of living tissue and can be varied, for example by changing the amount of collagen surrounding the 3D culture. Organoids and spheroids are variants of this type of culture. [ 9 ]
At the same time, animal models also evolved towards greater similarity with clinical reality. The human tumor xenograft is today the standard, and the orthotopic transplant – for example, human cancer of the pancreas grafted into the mouse pancreas – is one of the best experimental models. [ 10 ]
The link between the clinic and the experiment becomes more realistic since these 3D cultures make it possible to use the culture medium surrounding the growing tumor tissue as a "virtual ECM", which can be varied, for example, to increase the pressure around the cultured tumor.
Similarly, xenograft can constitute a cancer organ with both tissues even if the ECM is of animal origin.
What is hard in cancer
It is the ECM. So, when a doctor or a patient feels "a hard lump in the breast" and it is a cancer, what is hard is the ECM, while the tumor itself is softer than the normal breast tissue. This has been demonstrated in vitro and more recently ex vivo, and will soon be demonstrated in vivo. [ citation needed ]
The role of stress on the growth of a spheroid in vitro had already been shown (G. Helminger, already cited), but the 2005 experiment of Matthew Paszek (last author of the article: Valerie Weaver) gave a new dimension to this use of mechanical signals in vitro by showing the passage from a normal architecture of a breast acinus – the elementary unit of the mammary gland – to a cancerous architecture under the influence of a single mechanical variable, here the surface tension caused by an increasing concentration of collagen in the culture medium surrounding the tumor.
The transition from one architecture to the other is clearly visible, progressive, and reversible if the constraint is relaxed. Changes in the concentration of biological markers of cancerization ( catenins , integrins , etc.), together with the disappearance of the central cavity, highlight the shift in tissue phenotype.
In addition, this experiment opens the way to the reversibility of cancer, the royal way of treatment, which is intended to replace conventional destructive approaches.
Another experiment is equally spectacular. According to Gautham Venugopalan (ASCB, 2012), malignant breast cells cultured in vitro in 3D form a "disorganized" (that is, fractal) mass; but after a few minutes of compression, they form a Euclidean acinus.
Other authors have extended this work to different models with different mechanical signals. F Montel et al., in particular, have demonstrated, on spheroids of human cancer origin, a very significant increase in apoptosis in response to stress.
These 3D cultures have also revealed the organization of collagen fibers within the ECM and beyond it, allowing remote transmission of mechanical signals and a "tensor dialogue" between the tumor, the ECM and the normal environment. [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ]
These experiments share a limitation, however: they apply physical variables (surface tension, osmotic pressure, etc.) that cannot be used in vivo.
M Plodinec et al. extended this work using breast biopsies kept alive ex vivo and probed with an atomic force microscope (AFM) to measure the Young's moduli of the different tissue components of normal breast, benign tumors and malignant tumors.
This team confirmed results already widely explored on isolated cells and 2D cultures: cancerous tissues have a Young's modulus of around 0.8 kPa, normal tissues around 1.1 kPa, and the ECM a modulus greater than 2 kPa.
This difference (cancerous tissue is softer than its normal counterpart) holds across all of oncology, for all cancers, and from the dysplastic cell to the tumor and metastatic cells. [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ]
All measurements, cellular and tissue, converge toward the same conclusion: the modulus of the cancerous tissue is inversely correlated with the "dangerousness" of the cancer. The softer the tumor, the more undifferentiated it is, the more it will metastasize, and the less it will respond to current treatments.
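As a toy illustration of how the moduli quoted above separate tissue classes, consider the sketch below; the threshold values are midpoints chosen for this example, not published cut-offs from the cited studies.

```python
# Classify AFM stiffness readings against the approximate moduli quoted
# above: cancerous ~0.8 kPa, normal ~1.1 kPa, ECM >2 kPa.
# Threshold values are illustrative midpoints, not published cut-offs.

def classify_tissue(young_modulus_kpa: float) -> str:
    """Map a Young's modulus in kPa to the nearest tissue class."""
    if young_modulus_kpa < 0.95:   # below the cancer/normal midpoint
        return "cancerous tissue (~0.8 kPa)"
    if young_modulus_kpa < 2.0:    # between normal tissue and ECM
        return "normal tissue (~1.1 kPa)"
    return "ECM (>2 kPa)"

for modulus in (0.75, 1.05, 2.40):
    print(f"{modulus:.2f} kPa -> {classify_tissue(modulus)}")
```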
In PO for therapeutic purposes, we find only the article by R Brossel et al. (2016), [ 21 ] which shows the possibility of acting on a tumor grafted subcutaneously in a rodent by applying a stress.
The approach is validated in this proof of concept: there is a significant difference between the treated group and the control groups. The tumor volume measured in vivo was very significantly decreased (p = 0.015) in the treated group compared with the three control groups (with particles but no gradient; with gradient but no particles; with neither gradient nor particles). There was also a significant difference in favor of the treated group when measuring the surface of the living tumor ex vivo, on digitized histological sections (p = 0.001).
[Results table: tumor surface area (mm²), treated group versus the three control groups of mice: particles only; gradient only; neither particles nor gradient.]
This stress field imposed on the ECM is superimposed on the one already present in the tumor tissue. Note the difference from in vitro: there, neither confinement by the ECM nor anchoring by the integrins exists, whereas in vivo these ensure the physical continuity between the ECM and the tumor tissue and thus allow mechanical signals to propagate at a distance.
In this perspective, the "stress field" becomes the therapeutic agent.
This stress is exerted via ferric, and therefore magnetizable, nanoparticles located around the tumor (not inside it) and subjected, from outside the animal, to a magnetic field gradient generated by fixed magnets. The nanoparticles then act as "bioactuators", transforming part of the magnetic energy into mechanical energy.
To this work can be linked the European project "Imaging Force of Cancer", which, as its name indicates, aims to measure, voxel by voxel, the stresses at work within the tumor tissue. This program focuses on the breast, the primary liver and the brain.
The project is based on MRI elastography, the reference method for in vivo, in situ and non-perturbative measurement of strain, that is, of the very small elastic deformations caused in the tissue, which give access to the stress. It should therefore make it possible to construct the stress tensor of the tumor tissue in vivo, in situ, without significant intratumoral disturbance, the obligatory starting point for any hope of modifying it. [ 22 ] [ 23 ] [ 24 ] [ 25 ]
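In the linear elastic regime assumed by elastography, strain gives access to stress through Hooke's law (a generic relation, not a detail of the cited project):

$$\sigma_{ij} \;=\; \sum_{k,l} C_{ijkl}\,\varepsilon_{kl}\,,$$

which for an isotropic tissue reduces to $\sigma_{ij} = \lambda\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\,\varepsilon_{ij}$, with $\lambda$ and $\mu$ the Lamé coefficients; estimating these moduli voxel by voxel is what turns a measured strain map into a stress map.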
There is also an in vivo experiment demonstrating an increase in the signals coming from the integrins, induced by an increase in the rigidity of the matrix. [ 26 ]
Cell patterning made it possible to show that cellular architecture depends on the tensions generated by the support, which vary with the support's rigidity. This led to hypotheses about the transmission of mechanical signals between the "outside", here the support (glass, then plastic, then gel), and the cytoskeleton (CSK) and the nucleus.
Within the CSK of each cell, equilibrium is established between contractile microfilaments and compression-resistant microtubules; it also involves adhesion to the ECM, through an interplay of pressures and tensions that cancel out at equilibrium. The energy is supplied by actin.
Micropatterning has clearly shown these phenomena on the scale of a cell fixed on a support.
Increased stiffness of the ECM: spreading of the cell on the support, which stands in for the ECM, is necessary for cell division (and thus for growth).
Decreased rigidity of the ECM: when the ECM deforms, cell traction causes growth arrest and differentiation or apoptosis of the cell.
The soft material that transmits the mechanical signals is therefore pre-stressed, which allows forces to be transmitted through the body with a quantitative distribution according to scale: the skeleton, a macroscopic structure, transmits much greater forces than an isolated organ.
At the scale of an organ's tissue, the entire mechanical signal transmission network (integrins, cadherins, focal adhesions, all the intercellular and ECM/cell junctions, the membrane, the CSK, etc.) also supports energy production. Indeed, mitochondria are an integral part of this network, and semi-solid (non-liquid) phase biochemistry is an important part of tissue metabolism.
Here again we find a principle of treatment by mechanical signals. [ 27 ] [ 28 ] [ 29 ]
Circulating tumor cells (CTCs) can be isolated, and their rigidity can be measured quite easily. Numerous articles have verified what was already known for cells in 2D culture: the Young's modulus of CTCs is very strongly correlated with the severity of the cancer in all its parameters: differentiation, metastatic potential, prognostic and predictive correlation.
These correlations hold for metaplastic, dysplastic, in situ and cancerous cells. [ 17 ]
These CTCs must first cross the ECM, enter the blood or lymphatic vessels, and then leave the circulation to settle in a tissue and form a metastasis. Many articles have recently described this "journey" and the many physical events that punctuate it. [ 30 ] [ 31 ] [ 32 ]
The tumor accumulates mechanical energy during its growth. In an article by Stylianopoulos, the authors use a simple technique to reveal tumor stresses: laser cutting of an ex vivo tumor releases the accumulated stresses, which appear as bulges that can be measured and related to the underlying stress. At the center of the tumor, the radial and circumferential stresses are compressive; at the periphery, the radial stress is compressive and the circumferential stress is tensile along the outer boundary of the tumor. [ 33 ]
Tumor growth causes stress on the healthy tissues around it. [ 34 ]
The extracellular matrix (ECM) and the cells in contact with it exert mutual tensions.
The cells of the tumor tissue exert tensions between themselves.
This results in a change in fluid flow in the tumor with an increase in intratumoral interstitial pressure.
The internal tension present in the excised tumor can be called "residual stress": when the tumor is cut, a clear expansion of its volume reveals this residual tension. [ 33 ] [ 35 ]
Another track was opened by J Fredberg, in two dimensions:
As intercellular adhesion stress increases, the histological architecture changes and a solid-to-liquid phase transition occurs.
The mechanical energy of cellular cohesion in the tumor tissue is attributable, in large part, to intercellular junctions, and can be expressed as a line tension with two components.
Popularized by Pierre-Gilles de Gennes, the term "soft matter" refers to the study of materials between solid and liquid; at ambient temperature, that of biology, the thermal energy (kT) is of the same order of magnitude as the interaction energies between the various components. Because of this entropy/enthalpy balance, these biological systems can reorganize in radically different ways under the influence of small external variations.
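As an order of magnitude (a standard value, not taken from the article), at body temperature $T \approx 310\,\mathrm{K}$:

$$k_B T \;\approx\; 1.38\times10^{-23}\,\mathrm{J/K} \times 310\,\mathrm{K} \;\approx\; 4.3\times10^{-21}\,\mathrm{J} \;\approx\; 4.3\,\mathrm{pN\cdot nm}\,,$$

which is indeed comparable to the piconewton-nanometre scale of individual noncovalent interactions in tissue.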
The physics of PO is a soft matter physics.
The stress tensor is the generalization of the concept of a stress field. It summarizes in one mathematical expression all the pressures at work in a volume. Here the volume is that of the tumor: a solid sphere, the tumor tissue, predominantly viscoelastic, embedded in a hollow sphere, the ECM, predominantly elastic.
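Written out in Cartesian coordinates (a standard definition, added here for illustration), the tensor gathers the nine pressure-like components acting on a volume element:

$$\boldsymbol{\sigma} \;=\; \begin{pmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz}\\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz}\\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{pmatrix}, \qquad \sigma_{ij}=\sigma_{ji}\,,$$

with the diagonal terms the normal (compressive or tensile) stresses and the off-diagonal terms the shear stresses.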
The mechanical signals travel through the organs, without any break in continuity. At the tissue level, it is the connective tissue or the ECM that ensures this continuity. At the cellular level, it is the continuity between the connective tissue, the cell membrane, the CSK and the nucleus that ensures this transmission. [ 36 ] [ 37 ]
A hitherto dominant approach is "bottom-up": understanding the biological mechanisms (mechanoreceptors, actin and other components of the CSK, intracellular signaling, gene effectors, etc.) should lead to an understanding of the phenomena at the scale above, here the mesoscopic, tissue scale. [ 38 ]
There are success stories of this approach, when a faulty gene with a mutation can be identified and a drug can act on the product of the mutation: a receptor or an enzyme.
This "one-to-one and first-degree equation" allowed Chronic Myeloid Leukemia to be controlled by imatinib. The defective BCR-ABL gene makes it possible to manufacture an abnormal version of an enzyme of the Tyrosine Kinase type present in leukemic cells. Imatinib inhibits this enzyme and the manufacture of these cells. [ 39 ] [ 40 ]
These few exceptions led to the belief that this reasoning could be applied to cancers as a whole.
But the "equation" of cancer is much more complex. And the massive failure of "targeted therapies" to cure cancer is the illustration. These targeted therapies have cured only 50% of HER2 positive breast cancers treated with adjuvant therapy after local cancer treatment. That is 3% of breast cancers. That's all. Moreover, their participation in the "chronicization" of the breast and prostate - even some colon or rectum - is very minor compared to chemo / hormonotherapy, much better used today.
The other approach, "top down", takes into account the emergence of unpredictable phenomena through the reductionist approach. Thus, the experimental evidence showing that carcinogenesis is a process bound to emerge a break geometry of the tissue architecture requires abandoning the genetic level or above genetics to get into systems biology and put the matter to cell/tissue level.
In fact, cell phenotypes are emergent phenomena resulting from nonlinear intercellular interactions and from interaction with the environment, that is, the ECM. This is usually described in a phase space where attractors dot the landscape as points of stability or instability. [ 41 ]
Cancer is fractal in all its components and at different scales: microscopic, mesoscopic and macroscopic.
This geometry is recent and still little integrated in our mental representations. [ 42 ]
The first observation was the fractal nature of microcalcifications linked to breast cancer on a mammogram. [ 43 ]
The fractality of cancer has since been demonstrated on different structures of the cancer organ (neoangiogenesis, tumor growth zone, tumor tissue, etc.) and at the microscopic scale: cell nucleus, cell surface. [ 44 ] [ 45 ] [ 46 ]
A synergy between immunotherapy and the use of mechanical signals is highly likely, as shown by two recent papers describing the control of PD-L1 expression and of immunocompetent cells by extracellular matrix stiffness. [ 47 ] [ 48 ]
Fractality is a means that evolution has found to minimize the energy used to distribute resources. Recall that cancer uses a source of energy different from that of other tissues, with a lower yield. [ 49 ] [ 50 ] [ 51 ] [ 52 ] [ 53 ]
What does a cancer patient die of?
There are several possibilities: infectious complications related to the immunodepression caused by the disease and its treatments; the failure of a vital organ, such as lungs invaded by so many metastases that breathing becomes impossible; thrombotic complications such as pulmonary embolism; or an end of life precipitated by increasing doses of analgesic treatments. But behind all these causes lies the diversion of energy by the cancer, which behaves like a parasite killing its host. In some notably local cancers, such as pancreatic cancer, this is particularly visible: the patient dies of cachexia, that is, of profound malnutrition.
The brilliant intuitions of D'Arcy Thompson are now accepted by all: the shape that organs (including cancers) and organisms take depends on variations in time and space of the mechanical properties of the tissues. But he described, without making any assumptions about why and how.
J Wolff described the histological variations of bone according to the load it bears.
This is well known to thoracic surgeons: a vein removed to bypass a coronary artery, once grafted in the arterial position, changes its histology and becomes an artery under this new pressure regime. [ 54 ] [ 55 ] [ 56 ]
The same conclusion can be drawn from studies on the transformation of bone and cartilage tissue under different pressure regimes.
From the 1950s onward, the genetic paradigm took hold: cancers arise from one (or a few) mutated cell(s), and progression results from the sequential accumulation of random mutations that free the tumor from all homeostatic controls.
The discovery of oncogenes, suppressor genes and stability (caretaker) genes forms a coherent and reliable framework for tracking the birth and progression of cancer.
But contradictory experimental facts are not lacking: not all carcinogens are mutagens (hormones, for example); the target of carcinogens may be the ECM and not the cell; an ECM exposed to a carcinogen and then brought into contact with a non-cancerous tissue will cause cancer of that tissue, but not vice versa; and a cancerous tissue in close contact with a normal ECM can become normal tissue again. [ 57 ] [ 58 ]
Other authors have shown that a cancerous tissue can be returned to a normal architecture when it is taken over by an embryonic environment and then by somatic tissue. [ 59 ] [ 60 ] [ 61 ] [ 62 ]
These last examples plead for the reality of the possible reversion of the cancerous to the non-cancerous.
Finally, more cancers are due to infectious "causes" than to genetic "causes".
Any theory of carcinogenesis must explain cancerization from its onset: dysplasia, carcinoma in situ, then the crossing of the basement membrane, the growth of the primary tumor, and the appearance of metastases.
Let us quote DW Smithers (1962): "cancer is no more a disease of the cells than a traffic jam is a disease of cars".
We therefore need a global approach that takes into account both mechanical and biological signals in this long process leading from dysplasia to metastases.
This new branch of biology has consequences beyond oncology, in embryology, tissue engineering, etc. [ 63 ] [ 64 ] [ 65 ] [ 66 ]
It is high time for physical oncology to become visible: visible because it can now be integrated into imaging able to measure mechanical signals, and visible in the scientific field as a full component of carcinogenesis. | https://en.wikipedia.org/wiki/Physical_oncology |
In physics , physical optics , or wave optics , is the branch of optics that studies interference , diffraction , polarization , and other phenomena for which the ray approximation of geometric optics is not valid. This usage tends not to include effects such as quantum noise in optical communication , which is studied in the sub-branch of coherence theory .
Physical optics is also the name of an approximation commonly used in optics, electrical engineering and applied physics. In this context, it is an intermediate method between geometric optics, which ignores wave effects, and full-wave electromagnetism, which is an exact theory. The word "physical" means that it is more physical than geometric or ray optics, not that it is an exact physical theory. [ 1 ] : 11–13
This approximation consists of using ray optics to estimate the field on a surface and then integrating that field over the surface to calculate the transmitted or scattered field. This resembles the Born approximation , in that the details of the problem are treated as a perturbation .
In optics, it is a standard way of estimating diffraction effects. In radio , this approximation is used to estimate some effects that resemble optical effects. It models several interference, diffraction and polarization effects but not the dependence of diffraction on polarization. Since this is a high-frequency approximation, it is often more accurate in optics than for radio.
In optics, it typically consists of integrating the ray-estimated field over a lens, mirror or aperture to calculate the transmitted or scattered field.
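As a minimal sketch of this recipe (a hypothetical scalar, one-dimensional example, not from the article): take a slit of width `a` illuminated by a plane wave, let ray optics set the field to 1 inside the aperture and 0 outside, and integrate that field with the far-zone phase factor. The numerical integral reproduces the known single-slit sinc pattern.

```python
import numpy as np

# Scalar physical-optics sketch: a plane wave illuminates a slit of
# width a. Ray optics estimates the aperture field as 1 inside the slit
# and 0 outside; the diffracted far field is the integral of that field
# with the Fraunhofer phase factor. Wavelength and width are assumptions.

wavelength = 500e-9          # 500 nm illumination (assumed)
k = 2 * np.pi / wavelength   # wavenumber
a = 10e-6                    # slit width: 10 micrometres (assumed)

def far_field(theta, n_points=2001):
    """Far-zone diffracted amplitude at angle theta (radians)."""
    x = np.linspace(-a / 2, a / 2, n_points)         # points across the slit
    integrand = np.exp(-1j * k * x * np.sin(theta))  # phase from each point
    return np.trapz(integrand, x)                    # the PO surface integral

# Compare the numerical PO integral with the closed-form sinc pattern.
for deg in (0.0, 1.0, 2.0, 5.0):
    th = np.radians(deg)
    num = np.abs(far_field(th)) / a                  # normalised amplitude
    ana = np.abs(np.sinc(a * np.sin(th) / wavelength))  # sin(pi u)/(pi u)
    print(f"theta={deg:4.1f} deg  PO integral={num:.4f}  sinc={ana:.4f}")
```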
In radar scattering, it usually means taking the current at each point on the front (i.e., the geometrically illuminated part) of a scatterer to be the current that would be found on a tangent plane of similar material, while the current on the shadowed parts is taken as zero. The approximate scattered field is then obtained by an integral over these approximate currents. This is useful for bodies with large, smooth, convex shapes and for lossy (low-reflection) surfaces.
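For the special case of a perfectly conducting scatterer, this tangent-plane prescription reduces to the standard physical-optics current (a textbook result, stated here for illustration):

$$\mathbf{J}_s \;\approx\; \begin{cases} 2\,\hat{n} \times \mathbf{H}_i & \text{on the illuminated part,}\\ \mathbf{0} & \text{in the shadow,} \end{cases}$$

where $\hat{n}$ is the outward surface normal and $\mathbf{H}_i$ the incident magnetic field; the scattered field follows by integrating this current with the free-space Green's function.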
The ray-optics field or current is generally not accurate near edges or shadow boundaries, unless supplemented by diffraction and creeping wave calculations.
The standard theory of physical optics has some defects in the evaluation of scattered fields, leading to decreased accuracy away from the specular direction. [ 2 ] [ 3 ] An improved theory introduced in 2004 gives exact solutions to problems involving wave diffraction by conducting scatterers. [ 2 ] | https://en.wikipedia.org/wiki/Physical_optics |