The photoacoustic effect or optoacoustic effect is the formation of sound waves following light absorption in a material sample. To obtain this effect, the light intensity must vary, either periodically (modulated light) or as a single flash (pulsed light).[1][2] The photoacoustic effect is quantified by measuring the formed sound (pressure changes) with appropriate detectors, such as microphones or piezoelectric sensors. The time variation of the electric output (current or voltage) from these detectors is the photoacoustic signal. These measurements are useful for determining certain properties of the studied sample. For example, in photoacoustic spectroscopy, the photoacoustic signal is used to obtain the actual absorption of light in either opaque or transparent objects. It is useful for substances in extremely low concentrations, because very strong pulses of light from a laser can be used to increase sensitivity and very narrow wavelengths can be used for specificity. Furthermore, photoacoustic measurements serve as a valuable research tool in the study of the heat evolved in photochemical reactions (see: photochemistry), particularly in the study of photosynthesis.
Most generally, electromagnetic radiation of any kind can give rise to a photoacoustic effect. This includes the whole range of electromagnetic frequencies, from gamma radiation and X-rays to microwave and radio . Still, much of the reported research and applications, utilizing the photoacoustic effect, is concerned with the near ultraviolet / visible and infrared spectral regions.
The discovery of the photoacoustic effect dates back to 1880, when Alexander Graham Bell was experimenting with long-distance sound transmission. Through his invention, called the "photophone", he transmitted vocal signals by reflecting sunlight from a moving mirror to a selenium solar cell receiver.[3] As a byproduct of this investigation, he observed that sound waves were produced directly from a solid sample when it was exposed to a beam of sunlight that was rapidly interrupted with a rotating slotted wheel.[4] He noticed that the resulting acoustic signal depended on the type of material and correctly reasoned that the effect was caused by the absorbed light energy, which subsequently heats the sample. Later Bell showed that materials exposed to the non-visible (ultraviolet and infrared) portions of the solar spectrum can also produce sounds, and he invented a device, which he called the "spectrophone", to apply this effect to the spectral identification of materials.[5] Bell himself, and later John Tyndall and Wilhelm Röntgen, extended these experiments, demonstrating the same effect in liquids and gases.[6][7] However, the results were too crude, being dependent on detection by ear, and the technique was soon abandoned. Application of the photoacoustic effect had to wait for the development of sensitive sensors and intense light sources. In 1938 Mark Leonidovitch Veingerov revived interest in the photoacoustic effect, using it to measure very small carbon dioxide concentrations in nitrogen gas (as low as 0.2% by volume).[8] Since then research and applications have grown faster and wider, achieving severalfold greater detection sensitivity.
While the heating effect of the absorbed radiation was considered to be the prime cause of the photoacoustic effect, it was shown in 1978 that gas evolution resulting from a photochemical reaction can also cause a photoacoustic effect.[9] Independently, the apparently anomalous behaviour of the photoacoustic signal from a plant leaf, which could not be explained solely by the heating effect of the exciting light, led to the recognition that photosynthetic oxygen evolution is normally a major contributor to the photoacoustic signal in this case.[10]
Although much of the literature on the subject is concerned with just one mechanism, there are actually several different mechanisms that produce the photoacoustic effect. The primary universal mechanism is photothermal, based on the heating effect of the light and the consequent expansion of the light-absorbing material. In detail, the photothermal mechanism proceeds in stages: the absorbed light is converted to heat through nonradiative de-excitation; the heating produces temperature changes, and with them thermal expansion, in the absorbing region; and the expansion launches the pressure changes that are detected as the photoacoustic signal.
The main physical picture in this case envisions the original temperature pulsations as origins of propagating temperature waves ("thermal waves"),[11] which travel in the condensed phase, ultimately reaching the surrounding gaseous phase. The resulting temperature pulsations in the gaseous phase are the prime cause of the pressure changes there. The amplitude of the traveling thermal wave decreases strongly (exponentially) along its propagation direction, but if its propagation distance in the condensed phase is not too long, its amplitude near the gaseous phase is sufficient to create detectable pressure changes.[1][2][12] This property of the thermal wave confers unique features on the detection of light absorption by the photoacoustic method. The temperature and pressure changes involved are minute compared with the everyday scale: the typical order of magnitude for the temperature changes, using ordinary light intensities, is micro- to millidegrees, and for the resulting pressure changes nano- to microbars.
The photothermal mechanism manifests itself, besides the photoacoustic effect, also by other physical changes, notably emission of infra-red radiation and changes in the refraction index . Correspondingly, it may be detected by various other means, described by terms such as "photothermal radiometry", [ 13 ] "thermal lens" [ 14 ] and "thermal beam deflection" (popularly also known as " mirage " effect, see Photothermal spectroscopy ). These methods parallel the photoacoustic detection. However, each method has its special range of application.
While the photothermal mechanism is universal, there could exist additional other mechanisms, superimposed on the photothermal mechanism, which may contribute significantly to the photoacoustic signal. These mechanisms are generally related to photophysical processes and photochemical reactions following light absorption: (1) change in the material balance of the sample or the gaseous phase around the sample; [ 9 ] (2) change in the molecular organization, which results in molecular volume changes. [ 15 ] [ 16 ] Most prominent examples for these two kinds of mechanisms are in photosynthesis. [ 10 ] [ 15 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ]
The first mechanism above is mostly conspicuous in a photosynthesizing plant leaf . There, the light induced oxygen evolution causes pressure changes in the air phase, resulting in a photoacoustic signal, which is comparable in magnitude to that caused by the photothermal mechanism. [ 10 ] [ 18 ] This mechanism was tentatively named "photobaric". The second mechanism shows up in photosynthetically active sub-cell complexes in suspension (e.g. photosynthetic reaction centers ). There, the electric field which is formed in the reaction center, following the light induced electron transfer process, causes a micro electrostriction effect with a change in the molecular volume. This, in turn, induces a pressure wave which propagates in the macroscopic medium. [ 15 ] [ 20 ] Another case for this mechanism is Bacteriorhodopsin proton pump . Here the light induced change in the molecular volume is caused by conformational changes that occur in this protein following light absorption. [ 15 ] [ 21 ]
In applying the photoacoustic effect there exist various modes of measurement. Gaseous samples, or condensed-phase samples where the pressure is measured in the surrounding gaseous phase, are usually probed with a microphone. The useful time scale in this case is in the millisecond to sub-second range. Most often in this case, the exciting light is continuously chopped or modulated at a certain frequency (mostly in the range between ca. 10 and 10,000 Hz) and the modulated photoacoustic signal is analyzed with a lock-in amplifier for its amplitude and phase, or for its in-phase and quadrature components. When the pressure is measured within the condensed phase of the probed specimen, one utilizes piezoelectric sensors inserted into or coupled to the specimen itself. In this case the time scale ranges from less than a nanosecond to many microseconds.[1][2][22][23] The photoacoustic signal obtained from the various pressure sensors depends on the physical properties of the system, the mechanism that creates the photoacoustic signal, the light-absorbing material, the dynamics of the excited-state relaxation, and the modulation frequency or pulse profile of the radiation, as well as on the sensor properties. This calls for appropriate procedures to (i) separate the signals due to different mechanisms and (ii) obtain, from the time dependence of the resulting photoacoustic signal, the time dependence of the heat evolution (in the case of the photothermal mechanism), of the oxygen evolution (in the case of the photobaric mechanism in photosynthesis), or of the volume changes.[1][2][12][22][23]
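The lock-in detection described above can be sketched in software: the digitized microphone signal is multiplied by in-phase and quadrature references at the modulation frequency and low-pass filtered (here by a simple mean). All numerical values below are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Software lock-in detection of a modulated photoacoustic signal (sketch).
fs = 100_000          # sampling rate (Hz) -- illustrative
f_mod = 1_000         # chopper/modulation frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)

# Synthetic microphone output: a weak PA component buried in noise.
amplitude, phase = 2e-3, 0.6  # volts, radians (assumed ground truth)
rng = np.random.default_rng(0)
signal = (amplitude * np.cos(2 * np.pi * f_mod * t + phase)
          + 0.01 * rng.standard_normal(t.size))

# Multiply by references at f_mod, then average (acts as a low-pass filter).
i_comp = np.mean(signal * 2 * np.cos(2 * np.pi * f_mod * t))    # in-phase
q_comp = np.mean(signal * -2 * np.sin(2 * np.pi * f_mod * t))   # quadrature

r = np.hypot(i_comp, q_comp)        # recovered amplitude
theta = np.arctan2(q_comp, i_comp)  # recovered phase
```

Averaging over many modulation cycles is what gives the lock-in its noise rejection: all frequency components except the one at `f_mod` average toward zero.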
Considering the photothermal mechanism alone, the photoacoustic signal is useful in measuring the light absorption spectrum, particularly for transparent samples where the light absorption is very small. In this case the ordinary method of absorption spectroscopy, based on the difference in the intensities of a light beam before and after its passage through the sample, is not practical. In photoacoustic spectroscopy there is no such limitation: the signal is directly related to the light absorption and the light intensity. Dividing the signal spectrum by the light intensity spectrum can give a relative percent-absorption spectrum, which can be calibrated to yield absolute values. This is very useful for detecting very small concentrations of various materials.[24] Photoacoustic spectroscopy is also useful for the opposite case of opaque samples, where the absorption is essentially complete. In an arrangement where a sensor is placed in a gaseous phase above the sample and the light impinges on the sample from above, the photoacoustic signal results from an absorption zone close to the surface. A typical parameter that governs the signal in this case is the "thermal diffusion length", which depends on the material and the modulation frequency and is ordinarily on the order of several micrometers.[1][12] The signal is related to the light absorbed within the small distance of the thermal diffusion length, allowing the determination of the absorption spectrum.[1][12][25] This also makes it possible to analyze a surface separately from the bulk.[26][27] By varying the modulation frequency and wavelength of the probing radiation one essentially varies the probed depth, which opens the possibility of depth profiling[27] and photoacoustic imaging, which discloses inhomogeneities within the sample. This analysis also includes the possibility of determining thermal properties from the photoacoustic signal.[1]
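The normalization step — dividing the signal spectrum by the light intensity spectrum — can be sketched as follows. The wavelengths, readings, and the calibration factor are made-up illustrations, not measured data.

```python
import numpy as np

# Normalizing a photoacoustic spectrum by the source intensity (sketch).
wavelengths = np.array([450, 500, 550, 600, 650])   # nm (illustrative)
pa_signal = np.array([0.8, 1.5, 2.4, 1.2, 0.5])     # raw lock-in amplitude (a.u.)
lamp_power = np.array([1.0, 1.2, 1.5, 1.5, 1.0])    # measured intensity (a.u.)

# Relative absorption spectrum, independent of the lamp's spectral shape.
relative_absorption = pa_signal / lamp_power

# Calibration against a reference absorber of known absorption at one
# wavelength (assumption here: 1.0 a.u. corresponds to 2% absorption).
calibration = 0.02 / 1.0
percent_absorption = 100 * relative_absorption * calibration
```

Without the division, a peak in the raw signal could simply reflect a peak in the lamp's output rather than a true absorption feature of the sample.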
Recently, the photoacoustic approach has been utilized to quantitatively measure macromolecules such as proteins. The photoacoustic immunoassay labels and detects target proteins using nanoparticles that can generate strong acoustic signals.[28] Photoacoustics-based protein analysis has also been applied to point-of-care testing.[29]
Another application of the photoacoustic effect is its ability to estimate the chemical energies stored in various steps of a photochemical reaction. Following light absorption photophysical and photochemical conversions occur, which store part of the light energy as chemical energy. Energy storage leads to less heat evolution. The resulting smaller photoacoustic signal thus gives a quantitative estimate of the extent of the energy storage. For transient species this requires the measurement of the signal in the relevant time scale and the capability to extract from the temporal part of the signal the time-dependent heat evolution, by proper deconvolution. [ 19 ] [ 22 ] [ 23 ] There are numerous examples for this application. [ 30 ] A similar application is the study of the conversion of light energy to electrical energy in solar cells. [ 31 ] A special example is the application of the photoacoustic effect in photosynthesis research.
Photosynthesis is a very suitable platform to be investigated by the photoacoustic effect and provides many examples of its various uses. As noted above, the photoacoustic signal from wet photosynthesizing specimens (e.g. microalgae in suspension, seaweed) is principally photothermal. The photoacoustic signal from spongy structures (leaves, lichens) is a combination of photothermal and photobaric (gas evolution or uptake) contributions. The photoacoustic signal from preparations that carry out the primary electron transfer reactions (e.g. reaction centers) is a combination of photothermal and molecular-volume-change contributions. In each case, photoacoustic measurements have provided information related to the mechanism of photosynthesis, as well as indications of the intactness and health of the specimen.
Examples are: (a) the energetics of the primary electron transfer processes, obtained from the energy storage and molecular volume change measured under sub-microsecond flashes; (b) the characteristics of the 4-step oxidation cycle in photosystem II,[19] obtained for leaves by monitoring photoacoustic pulsed signals and their oscillatory behavior under repetitive exciting light flashes; (c) the characteristics of photosystem I and photosystem II of photosynthesis (absorption spectrum, light distribution to the two photosystems) and their interactions. These are obtained by using continuously modulated light of a specific wavelength to excite the photoacoustic signal and measuring the changes in energy storage and oxygen evolution caused by background light at various chosen wavelengths.
In general, photoacoustic measurements of energy storage require a reference sample for comparison: a sample with exactly the same light absorption (at the given excitation wavelength) that completely degrades all the absorbed light into heat within the time resolution of the measurement. Fortunately, photosynthetic systems are self-calibrating, providing such a reference in one sample, as follows. One compares two signals: one obtained with the probing modulated/pulsed light alone, and the other when a steady non-modulated light (referred to as background light), strong enough to drive photosynthesis into saturation, is added.[32][33][34] The added steady light does not produce any photoacoustic effect by itself, but it changes the photoacoustic response to the modulated/pulsed probing light. The resulting signal serves as a reference for all other measurements in the absence of the background light. The photothermal part of the reference signal is maximal, since at photosynthetic saturation no energy is stored, while the contribution of the other mechanisms tends to zero at saturation. Thus the reference signal is proportional to the total absorbed light energy.
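The arithmetic behind this self-calibration is simple: the saturated signal is proportional to the total absorbed energy, so the fractional signal deficit without background light measures the stored energy. The signal values below are illustrative assumptions.

```python
# Photosynthetic energy storage from photoacoustic signals (sketch).
# s_ref: photothermal signal with saturating background light added
#        (no energy storage, so it reflects all absorbed light as heat).
# s_probe: signal with the probing modulated/pulsed light alone.
s_ref = 1.00    # a.u. (illustrative)
s_probe = 0.72  # a.u. (illustrative)

# Fraction of absorbed light energy stored as chemical energy:
energy_storage = 1 - s_probe / s_ref
```

Here 28% of the absorbed light energy would be inferred as photochemically stored; because both signals come from the same sample under identical absorption, no external reference absorber is needed.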
In order to separate and define the photobaric and photothermal contributions in spongy samples (leaves, lichens) one uses the following properties of the photoacoustic signal: (1) At low frequencies (below roughly 100 Hz) the photobaric part of the photoacoustic signal may be quite large and the total signal decreases under the background light. The photobaric signal is obtained in principle from the difference of signals (the total signal minus the reference signal, after a correction to account for the energy storage). (2) At sufficiently high frequencies, however, the photobaric signal is very much attenuated in comparison with the photothermal component and can be neglected. Also, no photobaric signal can be observed even at low frequencies in a leaf with its inner air space filled with water. This is true also in live algal thalli, suspensions of microalgae and photosynthetic bacteria. This is because the photobaric signal depends on oxygen diffusion from the photosynthetic membranes to the air phase, and is largely attenuated as the diffusion distance in the aqueous medium increases. In all the above instances when no photobaric signal is observed one may determine the energy storage by comparing the photoacoustic signal obtained with the probing light alone, to the reference signal.
The parameters obtained from the above measurements are used in a variety of ways. Energy storage and the intensity of the photobaric signal are related to the efficiency of photosynthesis and can be used to monitor and follow the health of photosynthesizing organisms. They are also used to obtain mechanistic insight into the photosynthetic process: light of different wavelengths allows one to obtain the efficiency spectrum of photosynthesis and the light distribution between the two photosystems of photosynthesis, and to identify different taxa of phytoplankton.[35] The use of pulsed lasers gives thermodynamic and kinetic information on the primary electron transfer steps of photosynthesis.

Source: https://en.wikipedia.org/wiki/Photoacoustic_effect
Photoacoustic microscopy is an imaging method based on the photoacoustic effect and is a subset of photoacoustic tomography . Photoacoustic microscopy takes advantage of the local temperature rise that occurs as a result of light absorption in tissue. Using a nanosecond pulsed laser beam, tissues undergo thermoelastic expansion, resulting in the release of a wide-band acoustic wave that can be detected using a high-frequency ultrasound transducer. [ 1 ] Since ultrasonic scattering in tissue is weaker than optical scattering, photoacoustic microscopy is capable of achieving high-resolution images at greater depths than conventional microscopy methods. Furthermore, photoacoustic microscopy is especially useful in the field of biomedical imaging due to its scalability. By adjusting the optical and acoustic foci, lateral resolution may be optimized for the desired imaging depth. [ 2 ]
The goal of photoacoustic microscopy is to find the local pressure rise $p_0$, which can be used to calculate the absorption coefficient $\mu_a$ according to the formula:

$$p_0 = \Gamma \eta_{th} \mu_a F,$$

where $\eta_{th}$ is the percentage of light converted to heat, $F$ is the local optical fluence (J/cm²), and the dimensionless Grüneisen parameter $\Gamma$ is defined as:

$$\Gamma = \frac{\beta}{\kappa \rho C_V},$$

where $\beta$ is the thermal coefficient of volume expansion (K⁻¹), $\kappa$ is the isothermal compressibility (Pa⁻¹), and $\rho$ is the density (kg/m³).[3]
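A worked instance of these two formulas in SI units: the material constants below are rough textbook values for water near 20 °C, and the fluence and absorption coefficient are made-up illustrative numbers.

```python
# Initial photoacoustic pressure rise p0 = Gamma * eta_th * mu_a * F (sketch).
beta = 2.07e-4     # thermal volume-expansion coefficient (1/K), ~water at 20 C
kappa = 4.59e-10   # isothermal compressibility (1/Pa), ~water
rho = 1000.0       # density (kg/m^3)
c_v = 4184.0       # specific heat (J/(kg*K)), ~water

# Dimensionless Grueneisen parameter; ~0.11 for water at room temperature.
grueneisen = beta / (kappa * rho * c_v)

eta_th = 1.0       # assume all absorbed light converts to heat
mu_a = 50.0        # absorption coefficient (1/m) -- illustrative
fluence = 100.0    # local optical fluence (J/m^2), i.e. 10 mJ/cm^2

p0 = grueneisen * eta_th * mu_a * fluence   # initial pressure rise (Pa)
```

With these numbers the initial pressure rise is a few hundred pascals — consistent with the remark elsewhere in the text that photoacoustic pressure changes are minute on an everyday scale.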
Following the initial pressure rise, a photoacoustic wave propagates at the speed of sound within the medium and can be detected with an ultrasound transducer.
One of the major benefits of photoacoustic microscopy is the simplicity of image reconstruction. A laser pulse excites tissue in the axial direction and the resulting photoacoustic waves are detected by an ultrasound transducer . The transducer then converts the mechanical energy into a voltage signal that can be read by an analog-to-digital converter for post-processing. A one-dimensional image, known as an A-line, is formed as a result of each laser pulse. Hilbert transform of an A-line reveals depth-encoded information. A 3D photoacoustic image can then be formed by combining multiple A-lines produced by 2D raster scanning. [ 3 ]
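The reconstruction pipeline above — Hilbert transform of each A-line for depth-encoded envelope, then stacking A-lines from a 2D raster scan into a volume — can be sketched with synthetic data. The sampling rate, carrier frequency, and absorber depth are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

# Envelope detection of a photoacoustic A-line (sketch with synthetic data).
fs = 500e6                   # ADC sampling rate (Hz) -- illustrative
t = np.arange(1024) / fs

# Synthetic RF A-line: a Gaussian-windowed burst from an absorber whose
# acoustic arrival time is ~0.77 us (sample index ~385).
rf = np.exp(-((t - 0.77e-6) / 50e-9) ** 2) * np.sin(2 * np.pi * 50e6 * t)

# The Hilbert transform yields the analytic signal; its magnitude is the
# depth-encoded envelope used for display.
envelope = np.abs(hilbert(rf))

# A 3D image is a stack of A-lines from 2D raster scanning; here the same
# A-line is tiled just to show the (y, x, depth) layout.
volume = np.stack([np.stack([envelope] * 4)] * 3)   # shape (3, 4, 1024)
```

The envelope peaks at the absorber's arrival time regardless of the RF carrier phase, which is why the magnitude of the analytic signal, rather than the raw RF, is displayed.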
Altering delays of the elements on an ultrasound transducer allows one to focus ultrasound waves similar to passing through an acoustic lens. This delay-and-sum method enables one to find the signal at each focal point. However, the lateral resolution is limited by the presence of side lobes , which appear at polar angles and are dependent on the width of each element. [ 4 ]
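The delay-and-sum idea can be demonstrated with a minimal synthetic example: each element's channel data is read out at the sample corresponding to its propagation delay to the chosen focal point, and the reads are summed. Array geometry and sampling parameters are illustrative assumptions.

```python
import numpy as np

# Delay-and-sum focusing for a linear transducer array (sketch).
c = 1540.0                    # speed of sound (m/s)
fs = 40e6                     # sampling rate (Hz)
pitch = 0.3e-3                # element spacing (m) -- illustrative
n_elem, n_samp = 16, 2048
x_elem = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch

# Synthetic channel data: a point source at (0, 10 mm) recorded per element.
src_x, src_z = 0.0, 10e-3
data = np.zeros((n_elem, n_samp))
for i, xe in enumerate(x_elem):
    dist = np.hypot(src_x - xe, src_z)
    data[i, int(round(dist / c * fs))] = 1.0

def das(focus_x, focus_z):
    """Sum each channel at the sample delayed by its path to the focus."""
    total = 0.0
    for i, xe in enumerate(x_elem):
        delay = np.hypot(focus_x - xe, focus_z) / c
        idx = int(round(delay * fs))
        if idx < n_samp:
            total += data[i, idx]
    return total

on_focus = das(0.0, 10e-3)   # delays align: all 16 channels add coherently
off_focus = das(3e-3, 6e-3)  # delays miss the recorded arrivals
```

When the focal point coincides with the source, the per-element delays match the arrival times and the contributions sum coherently; elsewhere they do not, which is the basis of the focusing (and of the side-lobe artifacts the text mentions).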
In photoacoustic imaging modalities, including photoacoustic microscopy, contrast is based on photon excitation and is thus determined by the optical properties of the tissue. When an electron absorbs a photon, it moves to a higher energy state. Upon returning to a lower energy level , the electron undergoes either radiative or nonradiative relaxation. During radiative relaxation, the electron releases energy in the form of a photon. On the other hand, an electron undergoing nonradiative relaxation releases energy as heat. The heat then induces a pressure rise that propagates as a photoacoustic wave. Due to the fact that almost all molecules are capable of nonradiative relaxation, photoacoustic microscopy has the potential to image a wide range of endogenous and exogenous agents. By contrast, fewer molecules are capable of radiative relaxation, thus limiting fluorescence microscopy techniques such as one-photon and two-photon microscopy . [ 3 ] Current research in photoacoustic microscopy takes advantage of both endogenous and exogenous contrast agents to gain functional information about the body, from blood saturation levels to cancer proliferation rate.
Endogenous contrast agents, molecules naturally occurring within the body, are useful in photoacoustic microscopy due to the fact that they may be imaged non-invasively. Endogenous agents are also non-toxic and do not affect the properties of the tissue being studied. In particular, endogenous absorbers can be classified based on their absorbing wavelengths. [ 2 ]
Within the ultraviolet range (λ = 180 to 400 nm), the primary absorbers in the body are DNA and RNA. By using ultraviolet photoacoustic microscopy, DNA and RNA can be imaged in cell nuclei without fluorescence labeling. Since cancer is associated with DNA replication failure, UV photoacoustic microscopy has the potential to be used for early cancer detection.[5]
Visible light absorbers (λ = 400 to 700 nm) include oxyhemoglobin , deoxyhemoglobin , melanin , and cytochrome c . Visible light photoacoustic microscopy is particularly useful in determining hemoglobin concentration and oxygen saturation due to the difference in absorption profiles of oxyhemoglobin and deoxyhemoglobin. Real-time analysis can then be used to determine blood flow speed and oxygen metabolism rate. [ 3 ] In addition, photoacoustic microscopy is capable of early melanoma detection due to the high concentration of melanin found in skin cancer cells.
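The oxygen-saturation measurement exploits the differing absorption profiles of oxy- and deoxyhemoglobin: amplitudes measured at two wavelengths give two linear equations in the two concentrations. The extinction matrix below contains illustrative placeholder values, not tabulated molar extinction coefficients.

```python
import numpy as np

# Dual-wavelength estimate of hemoglobin oxygen saturation (sketch).
# Rows: two wavelengths; columns: [HbO2, Hb]. Values are placeholders.
E = np.array([[1.40, 0.70],
              [0.90, 1.10]])

# Synthetic "measured" photoacoustic amplitudes, proportional to mu_a,
# generated from an assumed ground-truth composition.
true_c = np.array([0.8, 0.2])     # [HbO2, Hb] concentrations (a.u.)
pa = E @ true_c

# Linear unmixing: solve E @ c = pa for the two concentrations.
c_hat = np.linalg.solve(E, pa)
so2 = c_hat[0] / c_hat.sum()      # oxygen saturation sO2
```

With more than two wavelengths the same unmixing becomes an overdetermined least-squares problem, which improves robustness to noise.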
Near-infrared absorbers (λ = 700 to 1400 nm) include water, lipids, and glucose. Photoacoustic determination of blood glucose levels can be used in treating diabetes, while studying lipid concentrations within blood vessels is important for monitoring the progression of atherosclerosis.[2] It is still feasible to quantify and compare oxyhemoglobin and deoxyhemoglobin concentrations in this wavelength range, accepting lower absorption in exchange for deeper tissue penetration.[6]
Although endogenous contrast agents are noninvasive and simpler to use, they are limited by their inherent behavior and concentration, making it difficult to monitor certain processes if optical absorption is weak. Exogenous agents, on the other hand, can be engineered to bind specifically to certain molecules of interest, and their concentration can be optimized to produce a greater signal and provide more contrast. Through selective binding, exogenous contrast agents can target specific molecules of interest while also enhancing the resulting images.[3]
Organic dyes, such as ICG -PEG and Evans blue , are used to enhance vasculature as well as to improve tumor imaging. In addition, dyes are easily filtered out of the body due to their small size (≤ 3 nm). [ 2 ]
Nanoparticles are currently being researched due to their chemical inactivity and ability to target tumor cells. These properties allow cancer propagation to be monitored and potentially enable intraoperative cancer removal. However, more studies on short-term toxicity are necessary to determine whether nanoparticles are suitable for clinical use.[2] Gold nanoparticles (AuNPs) have shown promise as a contrast agent for image-guided medicine and have been widely used owing to their strong and tunable optical absorption.[7]
Fluorescent proteins have been developed for fluorescence microscopy imaging and are unique in that they can be genetically encoded and therefore do not need to be delivered into the body. Using photoacoustic microscopy, fluorescent proteins can be visualized at depths beyond the limit of typical microscopy methods.[2] Frequency-dependent acoustic attenuation in tissue, which damps higher frequencies, limits the bandwidth of signals from deeper regions of tissue. Fluorescent proteins act as a light source at the target region, bypassing the limitation of optical attenuation. However, the effectiveness of fluorescent proteins is limited by small fluence changes, as the light diffusion equation predicts an increase of less than 5%.[8]
Photoacoustic microscopy achieves greater penetration than conventional microscopy due to ultrasonic detection. As a result, axial resolution is defined acoustically and is determined by the formula:
$$R_{axial} = 0.88 \frac{\nu_A}{\Delta f_A},$$

where $\nu_A$ is the speed of sound in the medium and $\Delta f_A$ is the photoacoustic signal bandwidth. The axial resolution of the system can be improved by using a wider-bandwidth ultrasound transducer, as long as the transducer bandwidth matches that of the photoacoustic signal.
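A quick numerical instance of the axial-resolution formula, using the standard soft-tissue speed of sound and an illustrative bandwidth:

```python
# Acoustically determined axial resolution R = 0.88 * v / df (sketch).
v_sound = 1540.0      # speed of sound in soft tissue (m/s)
bandwidth = 100e6     # photoacoustic signal / transducer bandwidth (Hz) -- illustrative

r_axial = 0.88 * v_sound / bandwidth   # metres
r_axial_um = r_axial * 1e6             # micrometres
```

A 100 MHz bandwidth thus corresponds to an axial resolution of roughly 13.6 μm; doubling the usable bandwidth halves this figure.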
The lateral resolution of photoacoustic microscopy depends on the optical and acoustic foci of the system. Optical-resolution photoacoustic microscopy (OR-PAM) uses a tighter optical focus than acoustic focus, while acoustic-resolution photoacoustic microscopy (AR-PAM) uses a tighter acoustic focus than optical focus. [ 9 ] [ 10 ]
Due to a tighter optical focus, OR-PAM is more useful for imaging in the quasi-ballistic range of depths up to 1 mm. [ 9 ] The lateral resolution of OR-PAM is determined by the formula:
$$R_{lateral,O} = 0.51 \frac{\lambda_O}{NA_O},$$

where $\lambda_O$ is the optical wavelength and $NA_O$ is the numerical aperture of the optical objective lens.[2] The lateral resolution of OR-PAM can be improved by using a shorter laser wavelength and tighter focusing of the laser spot. OR-PAM systems can typically achieve a lateral resolution of 0.2 to 10 μm, allowing OR-PAM to be classified as a super-resolution imaging method.
At depths greater than 1 mm and up to 3 mm, acoustic-resolution photoacoustic microscopy (AR-PAM) is more useful due to greater optical scattering. Acoustic scattering is much weaker beyond the optical diffusion limit, making AR-PAM more practical as it provides higher lateral resolution at these depths. The lateral resolution of AR-PAM is determined by the formula:
$$R_{lateral,A} = 0.71 \frac{\lambda_A}{NA_A},$$

where $\lambda_A$ is the central wavelength of the photoacoustic wave and $NA_A$ is the numerical aperture of the ultrasound transducer.[2] Higher lateral resolution can therefore be achieved by increasing the center frequency of the ultrasound transducer and by tighter acoustic focusing. AR-PAM systems can typically achieve a lateral resolution of 15 to 50 μm.
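The two lateral-resolution formulas can be compared numerically. The wavelength, numerical apertures, and transducer frequency below are illustrative assumptions chosen to land inside the typical ranges quoted in the text.

```python
# Lateral resolution: OR-PAM (optical focus) vs AR-PAM (acoustic focus).
# OR-PAM: R = 0.51 * lambda_opt / NA_opt
lambda_opt = 532e-9   # optical wavelength (m) -- illustrative green laser
na_opt = 0.5          # objective numerical aperture -- illustrative
r_or = 0.51 * lambda_opt / na_opt          # ~0.54 um

# AR-PAM: R = 0.71 * lambda_ac / NA_ac
v_sound = 1540.0      # speed of sound in tissue (m/s)
f_center = 50e6       # transducer centre frequency (Hz) -- illustrative
lambda_ac = v_sound / f_center             # acoustic wavelength, ~31 um
na_ac = 0.44          # transducer numerical aperture -- illustrative
r_ar = 0.71 * lambda_ac / na_ac            # ~50 um
```

The roughly hundredfold gap between the two results (sub-micrometre vs tens of micrometres) reflects the ratio of optical to acoustic wavelengths, and explains why OR-PAM is preferred within the quasi-ballistic depth range while AR-PAM takes over deeper.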
By ignoring ballistic light, dark-field confocal photoacoustic microscopy reduces surface signal. This method uses a dark-field pulsed laser and high-NA ultrasonic detection, with the fiber output end coaxially aligned with the focused ultrasound transducer. Filtration of ballistic light relies on the altered shape of the excitation laser beam instead of an opaque disk, as used in conventional dark-field microscopy . The general reconstruction technique is used to convert the photoacoustic signal into one A-line, and B-line images are produced by raster scanning. [ 4 ]
Photoacoustic microscopy has a wide range of applications in the biomedical field. Due to its ability to image a variety of molecules based on optical wavelength, photoacoustic microscopy can be used to gain functional information about the body noninvasively. Blood flow dynamics and oxygen metabolic rates can be measured and correlated to studies of atherosclerosis or tumor proliferation. Exogenous agents can be used to bind to cancerous tissue, enhancing image contrast and aiding in surgical removal. On the same note, photoacoustic microscopy is useful in early cancer diagnosis due to the difference in optical absorption properties compared to healthy tissue.[1]

Source: https://en.wikipedia.org/wiki/Photoacoustic_microscopy
Photoacoustic spectroscopy is the measurement of the effect of absorbed electromagnetic energy (particularly of light ) on matter by means of acoustic detection. The discovery of the photoacoustic effect dates to 1880 when Alexander Graham Bell showed that thin discs emitted sound when exposed to a beam of sunlight that was rapidly interrupted with a rotating slotted disk. The absorbed energy from the light causes local heating , generating a thermal expansion which creates a pressure wave or sound. Later Bell showed that materials exposed to the non-visible portions of the solar spectrum (i.e., the infrared and the ultraviolet ) can also produce sounds.
A photoacoustic spectrum of a sample can be recorded by measuring the sound at different wavelengths of the light. This spectrum can be used to identify the absorbing components of the sample. The photoacoustic effect can be used to study solids , liquids and gases . [ 1 ]
Photoacoustic spectroscopy has become a powerful technique to study concentrations of gases at the part per billion or even part per trillion levels. [ 2 ] Modern photoacoustic detectors still rely on the same principles as Bell's apparatus; however, to increase the sensitivity , several modifications have been made.
Instead of sunlight, intense lasers are used to illuminate the sample, since the intensity of the generated sound is proportional to the light intensity; this technique is referred to as laser photoacoustic spectroscopy (LPAS).[2] The ear has been replaced by sensitive microphones. The microphone signals are further amplified and detected using lock-in amplifiers. By enclosing the gaseous sample in a cylindrical chamber, the sound signal is amplified by tuning the modulation frequency to an acoustic resonance of the sample cell.
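The resonant amplification step relies on matching the modulation frequency to a standing-wave mode of the cell. A minimal sketch, using the standard organ-pipe formula for the fundamental longitudinal mode of a cell open at both ends, with illustrative dimensions:

```python
# Fundamental longitudinal resonance of a cylindrical photoacoustic cell
# (sketch; f = c / (2 * L) for a pipe open at both ends).
c_gas = 343.0    # speed of sound in the sample gas (m/s), ~air at 20 C
length = 0.10    # cell length (m) -- illustrative

f_res = c_gas / (2 * length)   # fundamental resonance frequency (Hz)

# Modulating the laser at f_res drives the acoustic mode resonantly,
# amplifying the microphone signal by the cell's quality factor Q.
```

For a 10 cm air-filled cell this gives a resonance near 1.7 kHz, conveniently within the audio band where microphones and lock-in detection work well.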
By using cantilever-enhanced photoacoustic spectroscopy, sensitivity can be improved still further, enabling reliable monitoring of gases at the ppb level.
The following example illustrates the potential of the photoacoustic technique: In the early 1970s, Patel and co-workers [ 3 ] measured the temporal variation of the concentration of nitric oxide in the stratosphere at an altitude of 28 km with a balloon-borne photoacoustic detector. These measurements provided crucial data bearing on the problem of ozone depletion by man-made nitric oxide emission. Some of the early work relied on development of the RG theory by Rosencwaig and Gersho. [ 4 ] [ 5 ]
One of the important capabilities of using FTIR photoacoustic spectroscopy has been the ability to evaluate samples in their in situ state by infrared spectroscopy , which can be used to detect and quantify chemical functional groups and thus chemical substances . This is particularly useful for biological samples that can be evaluated without crushing to powder or subjecting to chemical treatments. Seashells, bone and such samples have been investigated. [ 6 ] [ 7 ] [ 8 ] Using photoacoustic spectroscopy has helped evaluate molecular interactions in bone with osteogenesis imperfecta. [ 9 ]
While most academic research has concentrated on high resolution instruments, some work has gone in the opposite direction. In the last twenty years, very low cost instruments for applications such as leakage detection and for the control of carbon dioxide concentration have been developed and commercialized. Typically, low cost thermal sources are used which are modulated electronically. Diffusion through semi-permeable disks instead of valves for gas exchange, low-cost microphones, and proprietary signal processing with digital signal processors have brought down the costs of these systems. The future of low-cost applications of photoacoustic spectroscopy may be the realization of fully integrated micromachined photoacoustic instruments.
The photoacoustic approach has been utilized to quantitatively measure macromolecules such as proteins. The photoacoustic immunoassay labels and detects target proteins using nanoparticles that can generate strong acoustic signals. [ 10 ] Photoacoustics-based protein analysis has also been applied to point-of-care testing. [ 11 ]
Photoacoustic spectroscopy also has many military applications. One such application is the detection of toxic chemical agents. The sensitivity of photoacoustic spectroscopy makes it an ideal analysis technique for detecting trace chemicals associated with chemical attacks. [ 12 ]
LPAS sensors may be applied in industry, security ( nerve agent and explosives detection), and medicine (breath analysis). [ 13 ] | https://en.wikipedia.org/wiki/Photoacoustic_spectroscopy |
Photoactivatable fluorescent proteins (PAFPs) is a type of fluorescent protein that exhibit fluorescence that can be modified by a light-induced chemical reaction .
The first PAFP, Kaede , was isolated from Trachyphyllia geoffroyi in a cDNA library screen designed to identify new fluorescent proteins. [ 1 ] A green fluorescent protein derived from this screen was serendipitously discovered to be sensitive to ultraviolet light:
We happened to leave one of the protein aliquots on the laboratory bench overnight. The next day, we found that the protein sample on the bench had turned red, whereas the others that were kept in a paper box remained green. Although the sky had been partly cloudy, the red sample had been exposed to sunlight through the south-facing windows. [ 1 ]
Many PAFPs have been engineered from existing fluorescent proteins or identified from large-scale screens in the wake of Kaede's discovery. Many of these undergo green-to-red photoconversion, but other colors are available. Some proteins take part in irreversible photoconversion reactions while other reactions can be reversed using light of a specific wavelength.
Unlike other fluorescent proteins, PAFPs can be used as selective optical markers. An entirely labeled cell can be followed to assess cell division, migration, and morphology. Very small volumes containing PAFPs can be activated with a laser. In these cases, protein trafficking, diffusion, and turnover can be assessed. | https://en.wikipedia.org/wiki/Photoactivatable_fluorescent_protein |
Photoactivatable probes , or caged probes , are cellular players ( proteins , nucleic acids , small molecules ) whose activity can be triggered by a flash of light. They are used in biological research to study processes in cells. The basic principle is to introduce a photoactivatable agent (e.g. a small molecule modified with a light-responsive group, or a protein tagged with an artificial photoreceptor protein ) into cells, tissues or even living animals and to control its activity specifically by illumination. [ 1 ]
Light is a well-suited external trigger for these types of experiments since it is non-invasive and does not influence normal cellular processes (though care has to be taken when using light in the ultraviolet part of the spectrum to avoid DNA damage ). Furthermore, light offers high spatial and temporal control. Usually, the activation stimulus comes from a laser or a UV lamp and can be incorporated into the same microscope used for monitoring the effect. All these advantages have led to the development of a wide variety of photoactivatable probes.
Even though the light-induced activation step is usually irreversible, reversible changes can be induced in a number of photoswitches .
The first reported use of photoprotected analogues for biological studies was the synthesis and application of caged ATP by Joseph F. Hoffman in 1978 [ 2 ] in his study of Na:K pumps . As of 2013, ATP is still the most commonly used caged compound. Hoffman also coined the term 'caged' for this type of modified molecule. The nomenclature has persisted despite being, strictly speaking, a misnomer , since it suggests that the molecule sits in a physical cage (as in a fullerene ). Scientists have since tried to introduce the newer, more accurate term 'photoactivatable probes'. Both nomenclatures are currently in use.
Major discoveries were made in the following years with caged neurotransmitters , such as glutamate , which is used to map functional neuronal circuits in mammalian brain slices . [ 3 ] Small molecules are easier to modify by photocleavable groups, compared to larger constructs such as proteins. Photoactivatable proteins were serendipitously discovered much later (in 2002), by the observation that Kaede protein , when left on the bench exposed to sunlight, changed fluorescence to longer wavelength. [ 4 ]
Proteins which sense and react to light were originally isolated from photoreceptors in algae , corals and other marine organisms. The two most commonly used photoactivatable proteins in scientific research, as of 2013, are photoactivatable fluorescent proteins and retinylidene proteins . Photoactivatable fluorescent proteins change to a longer emission wavelength upon illumination with UV light. In Kaede, this change is brought about by cleavage of the chromophore tripeptide His62-Tyr63-Gly64. [ 5 ] This discovery paved the way for modern super resolution microscopy techniques like PALM or STORM . Retinylidene proteins , such as Channelrhodopsins or Halorhodopsins , are light sensitive cation and chloride channels , which open during illumination with blue and yellow light, respectively. This principle has been successfully employed to control the activity of neurons in living cells and even tissue and gave rise to a whole new research field, optogenetics .
Nucleic acids play important roles as cellular information storage and gene regulation machinery. In efforts to regulate this machinery by light, DNA and RNA have been modified with photocleavable groups at the backbone (in an approach called ‘statistical backbone caging’; the protection groups react mainly with backbone phosphate groups). In the organism, modified nucleic acids are ‘silent’ and only upon irradiation with light can their activity be turned on. [ 6 ] This approach finds use in developmental biology , where the chronology of gene activity is of particular interest. Caged nucleic acids enable researchers to very precisely turn on genes of interest during the development of whole organisms. [ 7 ]
Small molecules are easily modified by chemical synthesis and therefore were among the first to be modified and used in biological studies. A wide variety of caged small molecules exist.
Photochemical reactions can convert a nonemissive reactant into a fluorescent product. [ 8 ] These reactions can be exploited in super-resolution microscopy to allow localization beyond the diffraction limit .
The advantages of activating effectors with light (precise control, fast response, high specificity, no cross-reactions) are particularly interesting in neurotransmitters. Caged dopamine , serotonin , glycine and GABA have been synthesized and their effect on neuronal activity has been extensively studied. [ 9 ]
Not only amino acids , but also ions can be caged. Since calcium is a potent cellular second messenger , caged variants have been synthesized by employing the ion-trapping properties of EDTA . Light-induced cleavage of the EDTA backbone leads to a wave of free calcium inside the cell. [ 10 ]
Another class of molecules used for transmitting signals in the cell is hormones . Caged derivates of estradiol were shown to induce gene expression upon uncaging. Other caged hormones were used to study receptor - ligand interactions. [ 11 ]
Lipids were shown to be involved in signaling . To dissect the roles that lipids have in certain pathways, it is advantageous to be able to increase the concentration of the signaling lipid in a very rapid manner. Therefore, many signaling lipids have been also protected with photoremovable protection groups and their effect on cellular signaling has been studied. Caged PI3P has been shown to induce endosomal fusion. [ 12 ] Caged IP3 helped elucidate the effect of IP3 on action potential [ 13 ] and caged diacylglycerol has been used to determine the influence of fatty acid chain length on PKC dependent signaling. [ 14 ]
When studying protein-lipid interactions , another type of photoactivation has provided many insights. Photolabile groups such as diazirines or benzophenones , which upon UV irradiation generate highly reactive intermediates, can be used to crosslink the lipid of interest to its interacting proteins. This methodology is especially useful for verifying known protein-lipid interactions and discovering new ones. [ 15 ] | https://en.wikipedia.org/wiki/Photoactivatable_probes |
Photoactivated adenylyl cyclase (PAC) is a protein consisting of an adenylyl cyclase enzyme domain directly linked to a BLUF (blue light receptor using FAD) type light sensor domain. When illuminated with blue light, the enzyme domain becomes active and converts ATP to cAMP , an important second messenger in many cells. In the unicellular flagellate Euglena gracilis , PACα and PACβ (euPACs) serve as a photoreceptor complex that senses light for photophobic responses and phototaxis . [ 2 ] Small but potent PACs were identified in the genome of the bacteria Beggiatoa (bPAC) and Oscillatoria acuminata (OaPAC). [ 3 ] [ 1 ] While natural bPAC has some enzymatic activity in the absence of light, variants with no dark activity have been engineered (PACmn). [ 4 ]
As PACs consist of a light sensor and an enzyme in a single protein, they can be expressed in other species and cell types to manipulate cAMP levels with light. When bPAC is expressed in mouse sperm , blue light illumination speeds up the swimming of transgenic sperm cells and aids fertilization . [ 5 ] When expressed in neurons , illumination changes the branching pattern of growing axons . [ 6 ] PAC has been used in mice to clarify the function of neurons in the hypothalamus , which use cAMP signaling to control mating behavior. [ 7 ] Expression of PAC together with K + -specific cyclic-nucleotide-gated ion channels (CNGs) has been used to hyperpolarize neurons at very low light levels, which prevents them from firing action potentials. [ 8 ] [ 9 ]
Photoactivated guanylyl cyclases have been discovered in the aquatic fungi Blastocladiella emersonii [ 10 ] [ 11 ] and Catenaria anguillulae . [ 12 ] Unlike PACs, these light-activated cyclases use retinal as their light sensor and are therefore rhodopsin guanylyl cyclases (RhGC). When expressed in Xenopus oocytes or mammalian neurons , RhGCs generate cGMP in response to green light. [ 12 ] Therefore, they are considered useful optogenetic tools to investigate cGMP signaling. [ 13 ] | https://en.wikipedia.org/wiki/Photoactivated_adenylyl_cyclase |
Photo-activated localization microscopy ( PALM or FPALM ) [ 1 ] [ 2 ] and stochastic optical reconstruction microscopy (STORM) [ 3 ] are widefield (as opposed to point-scanning techniques such as laser scanning confocal microscopy ) fluorescence microscopy imaging methods that allow images to be obtained with a resolution beyond the diffraction limit . The methods were proposed in 2006 in the wake of a general emergence of optical super-resolution microscopy methods, and were featured as Methods of the Year for 2008 by the journal Nature Methods . [ 4 ] The development of PALM as a targeted biophysical imaging method was largely prompted by the discovery of new species and the engineering of mutants of fluorescent proteins displaying controllable photochromism , such as photoactivatable GFP . The concomitant development of STORM, sharing the same fundamental principle, originally made use of paired cyanine dyes.
One molecule of the pair (called activator), when excited near its absorption maximum, serves to reactivate the other molecule (called reporter) to the fluorescent state.
A growing number of dyes are used for PALM, STORM and related techniques, both organic fluorophores and fluorescent proteins. Some are compatible with live cell imaging, others allow faster acquisition or denser labeling. The choice of a particular fluorophore ultimately depends on the application and on its underlying photophysical properties. [ 5 ]
Both techniques have undergone significant technical developments, [ 6 ] in particular allowing multicolor imaging and the extension to three dimensions, with the best current axial resolution of 10 nm in the third dimension obtained using an interferometric approach with two opposing objectives collecting the fluorescence from the sample. [ 7 ]
Conventional fluorescence microscopy is performed by selectively staining the sample with fluorescent molecules, either linked to antibodies as in immunohistochemistry or using fluorescent proteins genetically fused to the genes of interest. Typically, the more concentrated the fluorophores, the better the contrast of the fluorescence image.
A single fluorophore can be visualized under a microscope (or even with the naked eye [ 8 ] ) if the number of photons emitted is sufficiently high and the background low enough. The two-dimensional image of a point source observed under a microscope is an extended spot, corresponding to the Airy disk (a section of the point spread function ) of the imaging system.
The ability to identify as two individual entities two closely spaced fluorophores is limited by the diffraction of light. This is quantified by Abbe ’s criterion, stating that the minimal distance d {\displaystyle d} that allows resolving two point sources is given by
d = λ 2 N A {\displaystyle d={\frac {\lambda }{2NA}}}
where λ {\displaystyle \lambda } is the wavelength of the fluorescent emission and NA is the numerical aperture of the microscope. The theoretical resolution limit at the shortest practical excitation wavelength is around 150 nm in the lateral dimension and approaching 400 nm in the axial dimension (for an objective with a numerical aperture of 1.40 and an excitation wavelength of 400 nm).
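The lateral figure quoted above follows directly from Abbe's criterion; a one-line check:

```python
import math  # not strictly needed here, kept for consistency with later sketches

def abbe_limit(wavelength_nm: float, numerical_aperture: float) -> float:
    """Minimal resolvable distance d = lambda / (2 NA), in nanometres."""
    return wavelength_nm / (2.0 * numerical_aperture)

# 400 nm light through an NA 1.40 objective, as in the example above:
print(round(abbe_limit(400.0, 1.40), 1))  # 142.9, i.e. the ~150 nm lateral limit
```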
However, if the emission from two neighboring fluorescent molecules is made distinguishable, i.e. the photons coming from each of the two can be identified, then it is possible to overcome the diffraction limit. [ 9 ] Once a set of photons from a specific molecule is collected, it forms a diffraction-limited spot in the image plane of the microscope. The center of this spot can be found by fitting the observed emission profile to a known geometrical function, typically a Gaussian function in two dimensions. The error made in localizing the center of a point emitter scales, to a first approximation, as the inverse square root of the number of emitted photons, so if enough photons are collected the localization error can be made much smaller than the width of the original point spread function.
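The inverse-square-root scaling can be checked with a quick Monte Carlo sketch in one dimension: photons are drawn from the PSF around the true emitter position and their centroid is taken as the position estimate. The 100 nm PSF width and photon counts are illustrative values, not from any particular experiment:

```python
import math
import random
import statistics

random.seed(1)
s = 100.0  # PSF standard deviation in nm (illustrative value)

def rms_centroid_error(n_photons: int, trials: int = 2000) -> float:
    """RMS error of the photon centroid as an estimator of the true
    emitter position (taken to be 0)."""
    sq_errors = []
    for _ in range(trials):
        photons = [random.gauss(0.0, s) for _ in range(n_photons)]
        sq_errors.append(statistics.fmean(photons) ** 2)
    return math.sqrt(statistics.fmean(sq_errors))

# Quadrupling the photon count roughly halves the error, as s / sqrt(N) predicts:
for n in (100, 400, 1600):
    print(n, round(rms_centroid_error(n), 2), round(s / math.sqrt(n), 2))
```

With 1600 photons the centroid of a 100 nm wide spot is located to about 2.5 nm, far below the diffraction limit.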
These two steps, the identification and the localization of individual fluorescent molecules in a dense environment where many are present, form the basis of PALM, STORM and their developments.
Although many approaches to molecular identification exist, the light-induced photochromism of selected fluorophores has emerged as the most promising approach for distinguishing neighboring molecules by separating their fluorescent emission in time. By stochastically turning on sparse subsets of fluorophores with light of a specific wavelength, individual molecules can then be excited and imaged according to their spectra. To avoid an accumulation of active fluorophores in the sample, which would eventually degrade the result back to a diffraction-limited image, PALM exploits the spontaneously occurring phenomenon of photobleaching , whereas STORM exploits reversible switching between a fluorescent on-state and a dark off-state of a dye.
In summary, PALM and STORM are based on collecting under a fluorescent microscope a large number of images each containing just a few active isolated fluorophores.
The imaging sequence allows for the many emission cycles necessary to stochastically activate each fluorophore from a non-emissive (or less emissive) state to a bright state, and back to a non-emissive or bleached state. During each cycle, the density of activated molecules is kept low enough that the molecular images of individual fluorophores do not typically overlap.
In each image of the sequence, the position of a fluorophore is calculated with a precision typically better than the diffraction limit - in the typical range of a few to tens of nm - and the resulting positions of the centers of all the localized molecules are used to build up the super-resolution PALM or STORM image.
The localization precision σ {\displaystyle \sigma } can be calculated according to the formula:
σ = ( s i 2 + a 2 12 N ) ⋅ ( 16 9 + 8 π s i 2 b 2 a 2 N 2 ) {\displaystyle \sigma ={\sqrt {\left({\frac {s_{i}^{2}+{\frac {a^{2}}{12}}}{N}}\right)\cdot \left({\frac {16}{9}}+{\frac {8\pi s_{i}^{2}b^{2}}{a^{2}N^{2}}}\right)}}}
where N is the number of collected photons, a is the pixel size of the imaging detector, b 2 {\displaystyle b^{2}} is the average background signal and s i {\displaystyle s_{i}} is the standard deviation of the point spread function. [ 10 ] The requirement of localizing multiple fluorophores simultaneously over an extended area is the reason why these methods are wide-field, employing as a detector a CCD , an EMCCD or a CMOS camera.
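The formula translates directly into code. The parameter values below (130 nm PSF width, 100 nm pixels, 1000 photons, background standard deviation of 10 counts) are purely illustrative:

```python
import math

def localization_precision(n_photons: float, pixel_nm: float,
                           background: float, s_nm: float) -> float:
    """Localization precision sigma (nm) from the formula above:
    N photons, pixel size a, background standard deviation b,
    PSF standard deviation s_i."""
    variance = ((s_nm ** 2 + pixel_nm ** 2 / 12.0) / n_photons) * (
        16.0 / 9.0
        + 8.0 * math.pi * s_nm ** 2 * background ** 2
        / (pixel_nm ** 2 * n_photons ** 2)
    )
    return math.sqrt(variance)

# A diffraction-limited spot (s_i = 130 nm) localized to a few nanometres:
print(round(localization_precision(1000, 100.0, 10.0, 130.0), 1))  # 5.6
```

Note that even at zero background the precision stays finite: the pixelation term a²/12 and the 16/9 prefactor remain.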
The requirement for an enhanced signal-to-noise ratio to maximize localization precision determines the frequent combination of this concept with widefield fluorescent microscopes allowing optical sectioning, such as total internal reflection fluorescence microscopes (TIRF) and light sheet fluorescence microscopes .
The resolution of the final image is limited by the precision of each localization and by the number of localizations, rather than by diffraction. The super-resolution image is therefore a pointillistic representation of the coordinates of all the localized molecules, and it is commonly rendered by representing each molecule in the image plane as a two-dimensional Gaussian with amplitude proportional to the number of photons collected and standard deviation depending on the localization precision.
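Rendering from a localization table can be sketched as follows. This is a deliberately naive pure-Python version with made-up coordinates; real rendering software vectorizes the same Gaussian-splatting idea:

```python
import math

def render(localizations, size=64, pixel_nm=10.0):
    """Build a pointillistic super-resolution image: each localization
    (x, y, photons, sigma), all lengths in nm, is drawn as a 2D Gaussian
    whose amplitude is the photon count and whose width is the precision."""
    image = [[0.0] * size for _ in range(size)]
    for x, y, photons, sigma in localizations:
        for row in range(size):
            for col in range(size):
                dx = col * pixel_nm - x
                dy = row * pixel_nm - y
                image[row][col] += photons * math.exp(
                    -(dx * dx + dy * dy) / (2.0 * sigma * sigma))
    return image

# Two molecules 60 nm apart - unresolvable by diffraction, distinct here:
img = render([(200.0, 300.0, 500, 20.0), (260.0, 300.0, 800, 15.0)])
```

On a 10 nm rendering grid the two emitters appear as separate peaks because their widths reflect the few-tens-of-nanometre localization precision, not the ~250 nm PSF.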
The peculiar photophysical properties of the fluorophores employed in PALM/STORM super resolution imaging pose both constraints and opportunities for multicolor imaging.
Three strategies have emerged so far: excitation of spectrally separated fluorophores using an emission beamsplitter, [ 12 ] use of multiple activators/reporters in STORM mode [ 13 ] [ 14 ] and ratiometric imaging of spectrally close fluorophores. [ 15 ]
Although originally developed as 2D (x,y) imaging methods, PALM and STORM have quickly developed into 3D (x,y,z) capable techniques. To determine the axial position of a single fluorophore in the sample the following approaches are currently being used: modification of the point spread function to introduce z-dependent features in the 2D (x,y) image (the most common approach is to introduce astigmatism in the PSF); multiplane detection , where the axial position is determined by comparing two images of the same PSF defocused one with respect to the other; interferometric determination of the axial position of the emitter using two opposed objectives and multiple detectors; [ 7 ] use of temporal focusing to confine the excitation/activation; use of light sheet excitation/activation to confine to a few hundred nanometers thick layer arbitrarily positioned along the z-plane within the sample.
The requirement for multiple cycles of activation, excitation and de-activation/bleaching would typically imply extended acquisition times, and therefore operation on a fixed sample. Nevertheless, works performing PALM/STORM on live cells were published as early as 2007. [ 16 ]
The ability to perform live super-resolution imaging using these techniques ultimately depends on the technical limitations of collecting enough photons from a single emitter in a very short time. This depends both on the photophysical limitations of the probe as well as on the sensitivity of the detector employed. Relatively slow (seconds to tens of seconds) processes such as modification in the organization of focal adhesions have been investigated by means of PALM, [ 17 ] whereas STORM has allowed imaging of faster processes such as membrane diffusion of clathrin coated pits or mitochondrial fission/fusion processes.
A promising application of live cell PALM is the use of photoactivation to perform high-density single-particle tracking (sptPALM [ 18 ] ), overcoming the traditional limitation of single particle tracking to work with systems displaying a very low concentration of fluorophores.
While traditional PALM and STORM measurements are used to determine the physical structure of a sample, with the intensities of fluorescent events determining the certainty of the localization, these intensities can also be used to map fluorophore interactions with nanophotonic structures. This has been performed on both metallic ( plasmonic ) structures, such as gold nanorods, [ 19 ] [ 20 ] as well as semiconducting structures, such as silicon nanowires. [ 21 ] These approaches can either be used for fluorophores functionalized on the surface of the sample of interest (as for the plasmonic particle studies mentioned here), or randomly adsorbed onto the substrate surrounding the sample, allowing full 2D mapping of fluorophore-nanostructure interactions at all positions relative to the structure. [ 21 ]
These studies have found that, in addition to the standard uncertainty of localization due to the point spread function fitting, self-interference with light scattered by nanoparticles can lead to distortions or displacements of the imaged point spread functions, [ 20 ] [ 21 ] complicating the analysis of such measurements. These may be possible to limit, however, for example by incorporating metasurface masks which control the angular distribution of light permitted into the measurement system. [ 22 ]
PALM and STORM share a common fundamental principle, and numerous developments have tended to make the two techniques even more intertwined. Still, they differ in several technical details and a fundamental point.
On the technical side, PALM is performed on a biological specimen using fluorophores expressed exogenously in the form of genetic fusion constructs to a photoactivatable fluorescent protein. STORM instead uses immunolabeling of endogenous molecules in the sample with antibodies tagged with organic fluorophores.
In both cases the fluorophores are driven between an active ON state and an inactive OFF state by light. In PALM, however, photoactivation and photobleaching confine the life of the fluorophore to a limited interval of time, during which continuous emission without fluorescence intermittency is desirable. In STORM, stochastic photoblinking of the organic fluorophores (typically brighter than fluorescent proteins) was originally exploited to separate neighboring dyes; here, the more robust the blinking, the higher the probability of distinguishing two neighboring fluorophores.
Several research works have explored the potential of PALM to quantify the number of fluorophores (and therefore proteins of interest) present in a sample by counting the activated fluorophores. [ 11 ] [ 23 ] [ 24 ] The approach used to treat the fluorescence dynamics of the label will determine the final appearance of the super-resolution image and the possibility of establishing an unambiguous correspondence between a localization event and a protein in the sample. | https://en.wikipedia.org/wiki/Photoactivated_localization_microscopy |
Photoaffinity labeling is a chemoproteomics technique used to attach "labels" to the active site of a large molecule, especially a protein . The "label" attaches to the molecule loosely and reversibly, and has an inactive site which can be converted using photolysis into a highly reactive form, which causes the label to bind more permanently to the large molecule via a covalent bond. [ 1 ] [ 2 ] The technique was first described in the 1970s. [ 3 ] Molecules that have been used as labels in this process are often analogs of complex molecules, in which certain functional groups are replaced with a photoreactive group, such as an azide , a diazirine or a benzophenone . [ 4 ] [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Photoaffinity_labeling |
Photoanalysis (or photo analysis) refers to the study of pictures to compile various types of data, for example, to measure the size distribution of virtually anything that can be captured by photo. Photoanalysis technology has changed the way mines and mills quantify fragmented material.
Images are a good way to document conditions before, after, and even during blasting activities. The technology is advancing at a high rate, and lenses, storage media memory, light sensitivity and resolution have been improving steadily. Today's digital cameras and camcorders include high-resolution optics, compact size, automatic time and date stamps, good battery life, shutters to freeze motion, and computers to autofocus and eliminate jitter using image stabilization . [ 1 ]
Photoanalysis in mining operations can provide an automated system that forewarns a company of potential problems with materials, leading to economies and reduced damage caused from over-sized materials. It can also help determine the effectiveness of blasts. [ 2 ]
A company can use this technology to monitor materials moving on a conveyor belt in an underground environment, to measure piles left over from a blast, and even measure the amount of material being carried by dump trucks or vessels to a destination.
Photoanalysis is being used on SAG mills worldwide to control the size of rock being crushed. [ 3 ] Companies use this technology to determine the size of particles entering the SAG mill. Oversize material entering the SAG mill makes an operation less efficient, costing companies money in electrical and maintenance costs. Photoanalysis technology can eliminate unwanted material before it enters the mill, keeping rock-crushing costs low. [ 4 ]
Wood chip size can affect the overall quality of a product. With automated photoanalysis systems, companies can remove any unwanted wrong-size particles without stopping their mill process. [ 5 ]
Photoanalysis can affect how efficiently forestry companies operate. In mills worldwide, photoanalysis technology is improving the use of lumber products, reducing the number of trees consumed, and saving companies money through quality-control optimization.
With the current downturn in the North American forestry industry, operators are looking at making their mills more efficient and effective when processing materials. Photoanalysis technology helps identify any weaknesses in the process by continuously monitoring different sections of an operation.
Agricultural companies can, using photoanalysis, monitor conveyor belts of food without contaminating the product by touching it.
The agricultural industry is taking note of photoanalysis technology because it can identify unwanted materials going through the process. For example, if a mouse is on a conveyor of corn, photoanalysis technology can identify the unwanted object and remove it before it contaminates the whole batch.
Photoanalysis technology was created using the Waterloo Image Enhancement Process in the 1980s. After further development of the imaging process with explosives producer DuPont, engineers Tom Palangio and Takis Katsabanis began selling photoanalysis software commercially. They later renamed the process WipFrag, standing for Waterloo Image Process Fragmentation.
Today, photoanalysis technology has evolved into stabilized and portable systems that can automatically capture and analyze results instantly. Thousands of these products are currently being used around the world to measure fragmented material.
Fragmentation analysis is becoming a popular term in the mining, agricultural and forestry industries. With the majority of money in these industries directed towards the proper sizing of materials, companies use fragmentation analysis to evaluate various factors within an operation.
The two main ways a company keeps track of fragmented material are through manual and automated sieving procedures. Manual sieving involves extracting a sample of material to analyze the size distribution. The results can be tabulated within two days. Automated sieving is an advanced way of sieving materials running through a process. Without having to extract the material, photoanalysis can take place, allowing for immediate results with pinpoint accuracy.
Operators are using fragmentation analysis to determine the effectiveness of various blasts . With automated sieving technology, workers can track the success of these blasts and receive instant results. Companies are using these results to determine what blasting method yielded the best results for their specific operation. The common variables associated with blast optimization are the provided Particle Size Distribution (PSD) from a shovel fragmentation system, geology including rock type and fracturing, and energy factor.
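The Particle Size Distribution these systems report is conventionally expressed as a percent-passing curve against a set of sieve sizes. A count-based sketch of that reduction (hypothetical fragment sizes; production systems weight fragments by estimated mass and calibrate against physical sieve data):

```python
def percent_passing(sizes_mm, sieves_mm):
    """Cumulative percentage of particles at or below each sieve size.
    Count-based for simplicity; real PSDs weight by volume/mass."""
    n = len(sizes_mm)
    return {s: 100.0 * sum(1 for x in sizes_mm if x <= s) / n
            for s in sorted(sieves_mm)}

# Hypothetical fragment sizes (mm) measured from one post-blast image:
fragments = [5, 8, 12, 20, 33, 47, 61, 90, 140, 210]
curve = percent_passing(fragments, [25, 50, 100, 250])
print(curve)  # {25: 40.0, 50: 60.0, 100: 80.0, 250: 100.0}
```

Comparing such curves across blasts is how operators judge which blast design produced the most favorable fragmentation.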
By using photoanalysis the fragmented materials can be monitored, offering pinpoint accuracy and allowing mine operators to make adjustments to future blasting procedures. See Optical Granulometry to view the automated sieving process.
Maintenance costs can be significantly reduced if an operation focuses on the fragmentation of the particles passing through their process. Automated sieving systems can detect and help remove any oversize material before it enters the crusher and causes maintenance problems. It also helps determine the effectiveness of the mining process prior to crushing; the sizing of material is always a critical part of operations in the mining, forestry and agricultural industries.
Having an analysis taking place at every major point in an operation allows for the proper tracking of material being processed. Engineers can then determine what part of the process needs improving based solely on the size of material.
Measuring how effective industrial crushers are can help save a company millions of dollars in energy costs annually. Two factors affect a typical crusher: the size of the material fed in, and the speed at which the crusher runs. If the user can find the right balance between these two factors, the materials will be crushed to the right size in the shortest time possible.
Meeting the material standards set by governments and large companies can be hard. A post-crushing analysis ensures that no oversize material gets shipped, eliminating the chance of being fined for not meeting industry specifications. [ 6 ]
In botany , a photoassimilate is one of a number of biological compounds formed by assimilation using light-dependent reactions . This term is most commonly used to refer to the energy-storing monosaccharides produced by photosynthesis in the leaves of plants. [ 1 ]
Only NADPH, ATP and water are made in the "light" reactions. Monosaccharides, and generally more complex sugars, are made in the "dark" reactions. The term "light" reaction can be confusing, as some "dark" reactions require light to be active. [ citation needed ]
Photoassimilate movement through plants from "source to sink" via the xylem and phloem is of biological significance. This movement is mimicked by many infectious particles - namely viroids - to accomplish long-range movement and consequently infection of an entire plant.
| https://en.wikipedia.org/wiki/Photoassimilate |
Photoautotrophs are organisms that can utilize light energy from sunlight and elements (such as carbon ) from inorganic compounds to produce organic materials needed to sustain their own metabolism (i.e. autotrophy ). Such biological activities are known as photosynthesis , and examples of such organisms include plants , algae and cyanobacteria .
Eukaryotic photoautotrophs absorb photonic energy through the photopigment chlorophyll (a porphyrin derivative ) in their endosymbiont chloroplasts , while prokaryotic photoautotrophs use chlorophylls and bacteriochlorophylls present in free-floating cytoplasmic thylakoids . Plants, algae, and cyanobacteria perform oxygenic photosynthesis that produces oxygen as a byproduct , while some bacteria perform anoxygenic photosynthesis .
Chemical and geological evidence indicate that photosynthetic cyanobacteria existed about 2.6 billion years ago and anoxygenic photosynthesis had been taking place since a billion years before that. [ 1 ] Oxygenic photosynthesis was the primary source of free oxygen and led to the Great Oxidation Event roughly 2.4 to 2.1 billion years ago during the Neoarchean - Paleoproterozoic boundary. [ 2 ] Although the end of the Great Oxidation Event was marked by a significant decrease in gross primary productivity that eclipsed extinction events, [ 3 ] the development of aerobic respiration enabled more energetic metabolism of organic molecules, leading to symbiogenesis and the evolution of eukaryotes , and allowing the diversification of complex life on Earth.
Prokaryotic photoautotrophs include Cyanobacteria , Pseudomonadota , Chloroflexota , Acidobacteriota , Chlorobiota , Bacillota , Gemmatimonadota , and Eremiobacterota. [ 4 ]
Cyanobacteria is the only prokaryotic group that performs oxygenic photosynthesis . Anoxygenic photosynthetic bacteria use PSI - and PSII -like photosystems , which are pigment protein complexes for capturing light. [ 5 ] Both of these photosystems use bacteriochlorophyll . There are multiple hypotheses for how oxygenic photosynthesis evolved. The loss hypothesis states that PSI and PSII were present in anoxygenic ancestor cyanobacteria from which the different branches of anoxygenic bacteria evolved. [ 5 ] The fusion hypothesis states that the photosystems merged later through horizontal gene transfer . [ 5 ] The most recent hypothesis suggests that PSI and PSII diverged from an unknown common ancestor with a protein complex that was coded by one gene. These photosystems then specialized into the ones that are found today. [ 4 ]
Eukaryotic photoautotrophs include red algae , haptophytes , stramenopiles , cryptophytes , chlorophytes , and land plants . [ 6 ] These organisms perform photosynthesis through organelles called chloroplasts and are believed to have originated about 2 billion years ago. [ 1 ] Comparing the genes of chloroplasts and cyanobacteria strongly suggests that chloroplasts evolved as a result of endosymbiosis with cyanobacteria that gradually lost the genes required to be free-living. However, it is difficult to determine whether all chloroplasts originated from a single, primary endosymbiotic event, or multiple independent events. [ 1 ] Some brachiopods ( Gigantoproductus ) and bivalves ( Tridacna ) also evolved photoautotrophy. [ 7 ] | https://en.wikipedia.org/wiki/Photoautotroph |
Photobiology is the scientific study of the beneficial and harmful interactions of light (technically, non-ionizing radiation ) in living organisms . [ 1 ] The field includes the study of photophysics, photochemistry, photosynthesis , photomorphogenesis , visual processing , circadian rhythms , photomovement, bioluminescence , and ultraviolet radiation effects. [ 2 ]
The division between ionizing radiation and non-ionizing radiation is typically considered to be a photon energy greater than 10 eV, [ 3 ] which approximately corresponds to both the first ionization energy of oxygen, and the ionization energy of hydrogen at about 14 eV. [ 4 ]
When photons come into contact with molecules, these molecules can absorb the energy in photons and become excited. Then they can react with molecules around them and stimulate " photochemical " and "photophysical" changes of molecular structures. [ 1 ]
This area of photobiology focuses on the physical interactions of light and matter. When a molecule absorbs a photon that matches its energy requirements, a valence electron is promoted from the ground state to an excited state, and the molecule becomes much more reactive. This is an extremely fast process, but one that underlies many other processes. [ 5 ]
This area of photobiology studies the reactivity of a molecule when it absorbs energy from light. It also studies what happens to this energy: it can be given off as heat or as fluorescence, returning the molecule to the ground state.
There are 3 basic laws of photochemistry:
1) First Law of Photochemistry: This law explains that in order for photochemistry to happen, light has to be absorbed.
2) Second Law of Photochemistry: This law explains that only one molecule will be activated by each photon that is absorbed.
3) Bunsen-Roscoe Law of Reciprocity: This law states that the outcome of a photochemical reaction depends on the total energy dose absorbed by the system (the product of light intensity and exposure time), not on how that dose is delivered.
Plant growth and development are highly dependent on light . Photosynthesis is one of the most important biochemical processes for life on Earth, and it is possible only because plants can use the energy of photons to make molecules such as NADPH and ATP , which are then used to fix carbon dioxide into sugars that fuel plant growth and development. [ 7 ] But photosynthesis is not the only plant process driven by light; other processes such as photomorphogenesis and photoperiodism are extremely important for the regulation of vegetative and reproductive growth, as well as for the production of plant secondary metabolites . [ 8 ]
Photosynthesis is defined as a series of biochemical reactions that phototrophic cells perform to transform light energy into chemical energy and store it in the carbon-carbon bonds of carbohydrates . [ 9 ] This process happens inside the chloroplasts of photosynthetic plant cells, where light-absorbing pigments are embedded in the membranes of structures called thylakoids . [ 9 ] There are two main pigments present in the photosystems of higher plants : chlorophyll (a or b) and carotenes . [ 7 ] These pigments are organized to maximize light reception and transfer, and they absorb specific wavelengths to broaden the amount of light that can be captured and used for photo- redox reactions . [ 7 ]
Because of the limited set of pigments in plant photosynthetic cells, only a limited range of wavelengths can be used for photosynthesis. This range is called photosynthetically active radiation (PAR). It is nearly the same as the human visible spectrum, extending from approximately 400 to 700 nm. [ 10 ] PAR is measured in μmol m⁻² s⁻¹ , i.e. the rate at which photosynthetically usable photons, in micromoles, arrive per unit of surface area and time. [ 11 ]
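A PAR photon flux can be related to an energy flux through the photon energy E = hc/λ. The sketch below is illustrative and not from the article: it assumes monochromatic light, whereas real PAR spans 400-700 nm, so the result is a per-wavelength approximation.

```python
# Convert PAR photon flux (PPFD, in umol m^-2 s^-1) to irradiance (W m^-2).
# Assumption: monochromatic light at a single wavelength.

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def ppfd_to_irradiance(ppfd_umol: float, wavelength_nm: float) -> float:
    """Energy flux (W/m^2) carried by ppfd_umol micromoles of photons
    per square metre per second at the given wavelength."""
    photons_per_s = ppfd_umol * 1e-6 * N_A              # photons m^-2 s^-1
    energy_per_photon = H * C / (wavelength_nm * 1e-9)  # joules
    return photons_per_s * energy_per_photon

# Full summer sunlight delivers roughly 2000 umol m^-2 s^-1 of PAR.
print(ppfd_to_irradiance(2000, 550))  # ~435 W/m^2 at mid-PAR green light
```

Note that the same photon flux carries more energy at shorter wavelengths, which is why PAR is reported as a photon count rather than in watts: photosynthesis is driven by photon number, not photon energy.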
Photobiologically Active Radiation (PBAR) is a range of light energy beyond and including PAR . Photobiological Photon Flux (PBF) is the metric used to measure PBAR.
This process refers to the light-mediated development of plant morphology, controlled by five distinct photoreceptors: UVR8, cryptochrome, phototropin, and phytochrome in its red-absorbing (Pr) and far-red-absorbing (Pfr) forms. [ 12 ] Light can control morphogenic processes such as leaf size and shoot elongation.
Different wavelengths of light produce different changes in plants. [ 13 ] Red to far-red light, for example, regulates stem growth and the straightening of seedling shoots emerging from the ground. [ 14 ] Some studies also report that red and far-red light increase the rooting mass of tomatoes [ 15 ] as well as the rooting percentage of grape plants. [ 16 ] Blue and UV light, on the other hand, regulate the germination and elongation of the plant as well as other physiological processes such as stomatal control [ 17 ] and responses to environmental stress. [ 18 ] Finally, green light was long thought to be unavailable to plants because they lack pigments that absorb it; in 2004, however, green light was found to influence stomatal activity, stem elongation of young plants and leaf expansion. [ 19 ]
These compounds are chemicals that plants produce as part of their biochemical processes; they help plants perform certain functions and protect themselves from environmental factors. Some metabolites, such as anthocyanins, flavonoids, and carotenes, can accumulate in plant tissues to protect them from UV radiation and very high light intensity. [ 20 ] | https://en.wikipedia.org/wiki/Photobiology |
A photobioreactor (PBR) refers to any cultivation system designed for growing photoautotrophic organisms using artificial light sources or solar light to facilitate photosynthesis. Photobioreactors are typically used to cultivate microalgae , cyanobacteria , and some mosses . [ 1 ] Photobioreactors can be open systems, such as raceway ponds , which rely upon natural sources of light and carbon dioxide . Closed photobioreactors are flexible systems that can be controlled to the physiological requirements of the cultured organism, resulting in optimal growth rates and purity levels. Photobioreactors are typically used for the cultivation of bioactive compounds for biofuels , pharmaceuticals, and other industrial uses. [ 2 ]
The first approach for the controlled production of phototrophic organisms was a natural open pond or artificial raceway pond . Therein, the culture suspension, which contains all necessary nutrients and carbon dioxide, is pumped around in a cycle, being directly illuminated by sunlight at the liquid's surface. Raceway ponds are still commonly used in industry due to their low operational cost compared with closed photobioreactors. However, they offer insufficient control of reaction conditions, due to their reliance on environmental light and carbon dioxide supply, as well as possible contamination by other microorganisms. Open technologies also lose water through evaporation into the atmosphere. [ 3 ]
The construction of closed photobioreactors avoids system-related water losses and minimises contamination. [ 4 ] Though closed systems consequently have better productivity than open systems, they still need improvement to become suitable for the production of low-price commodities, as cell density remains low due to several limiting factors. [ 5 ] All modern photobioreactors try to balance a thin layer of culture suspension, optimized light application, low pumping energy consumption, capital expenditure and microbial purity. However, light attenuation and increasing carbon dioxide requirements with growth are the two most inevitable changes in phototrophic cultures and severely limit the productivity of photobioreactors. [ 6 ] [ 5 ] The accumulation of photosynthetic oxygen as microalgae grow in photobioreactors is also believed to be a significant limiting factor; however, kinetic models have recently shown that dissolved oxygen levels as high as 400% air saturation are not inhibitory when cell density is high enough to attenuate light at later stages of microalgal cultures. [ 7 ] Many different systems have been tested, but only a few approaches have performed at an industrial scale. [ 8 ]
The simplest approach is the redesign of well-known glass fermenters , which are state of the art in many biotechnological research and production facilities worldwide. The moss reactor, for example, uses a standard glass vessel that is externally supplied with light. The existing head nozzles are used for sensor installation and for gas exchange. [ 9 ] This type is quite common at laboratory scale, but it has never been established at larger scale due to its limited vessel size.
Made from glass or plastic tubes, this photobioreactor type has succeeded at production scale. The tubes are oriented horizontally or vertically and are supplied from a central utilities installation with pumping, sensors, nutrients and carbon dioxide . Tubular photobioreactors are established worldwide from laboratory up to production scale, e.g. for the production of the carotenoid astaxanthin from the green alga Haematococcus pluvialis or of food supplements from the green alga Chlorella vulgaris . These photobioreactors benefit from high purity levels and efficient outputs. Biomass can be produced at a high quality level, and the high biomass concentration at the end of production allows energy-efficient downstream processing. [ 10 ] Given current photobioreactor prices, economically feasible concepts today can only be found in high-value markets, e.g. food supplements or cosmetics. [ 11 ]
The advantages of tubular photobioreactors at production scale also transfer to laboratory scale. Combining the glass vessel mentioned above with a thin tube coil allows relevant biomass production rates at laboratory research scale. Controlled by a complex process control system, the regulation of environmental conditions reaches a high level. [ 12 ]
An alternative approach is shown by a photobioreactor, which is built in a tapered geometry and which carries a helically attached, translucent double hose circuit system. [ 13 ] The result is a layout similar to a Christmas tree. The tubular system is constructed in modules and can theoretically be scaled outdoors up to agricultural scale. A dedicated location is not crucial, similar to other closed systems, and therefore non-arable land is suitable as well. The material choice should prevent biofouling and ensure high final biomass concentrations. The combination of turbulence and the closed concept should allow a clean operation and a high operational availability. [ 14 ]
Another development approach can be seen in constructions based on plastic or glass plates. Plates of different technical design are mounted to form a thin layer of culture suspension, which provides an optimized light supply. In addition, the simpler construction compared to tubular reactors allows the use of less expensive plastic materials. From the pool of different concepts, designs such as meandering-flow or bottom-gassed systems have been realized and have shown good output results. Some unsolved issues are the lifetime stability of the materials and biofilm formation. Applications at industrial scale are limited by the scalability of plate systems. [ 15 ]
In April 2013, a building with an integrated glass plate photobioreactor facade was commissioned at the IBA in Hamburg, Germany. [ 16 ]
This established photobioreactor also has a plate shape. The proprietary geometry of the reactor is characterized in particular by the optimal light input with simultaneous shear-free mixing of the culture.
A variably adjustable CO 2 -air mixture is introduced at the bottom of the photobioreactor through a special membrane as a large number of small bubbles. As the bubbles rise through the specially shaped plate reactor they mix the culture homogeneously, and their long residence time gives a very good CO 2 transfer (degree of utilization) into the culture. The homogeneous mixing also ensures a very good light input from the grow-light LEDs usually installed on both sides of the system, and thus a very high utilization of the light energy.
Since the geometry of the reactor integrates one or more down chambers that transport the culture from the top area back to the bottom, the culture is constantly and homogeneously supplied with the photosynthesis-relevant factors, achieving a high productivity.
The reactor was developed at the Fraunhofer Institute in Germany and is manufactured by Subitec GmbH.
This photobioreactor type consists of a plate-shaped basic geometry with peaks and valleys arranged at regular distances. This geometry distributes incident light over a larger surface, which corresponds to a dilution effect. It also helps solve a basic problem in phototrophic cultivation: most microalgae species react sensitively to high light intensities and reach light saturation at intensities substantially below the maximum daylight intensity of approximately 2000 W/m 2 . At the same time, a larger light quantity can be exploited to improve photoconversion efficiency. Mixing is accomplished by a rotary pump, which causes a cylindrical rotation of the culture broth. In contrast to vertical designs, horizontal reactors contain only thin layers of media with a correspondingly low hydrodynamic pressure. This has a positive impact on the necessary energy input and reduces material costs at the same time.
Market price pressure has driven the development of foil-based photobioreactor types. Inexpensive PVC or PE foils are mounted to form bags or vessels that contain the algae suspension and expose it to light. Foil systems have widened the price range of available photobioreactor types. It has to be kept in mind, however, that these systems have limited sustainability, as the foils must be replaced from time to time. For a full balance, the investment for the required support systems has to be calculated as well. [ 17 ]
Porous substrate bioreactor [ 18 ] (PSBR), being developed at the University of Cologne and also known as the twin-layer system, uses a new principle to separate the algae from a nutrient solution by means of a porous reactor surface on which the microalgae are trapped in biofilms. This new procedure reduces the amount of liquid needed for operation by a factor of up to one hundred compared with current technology, which cultivates algae in suspension. The PSBR procedure thus significantly reduces the energy needed while increasing the portfolio of algae that can be cultivated.
The discussion around microalgae and their potential for carbon dioxide sequestration and biofuel production has put high pressure on the developers and manufacturers of photobioreactors. [ 19 ] Today, none of the mentioned systems can produce phototrophic microalgal biomass at a price that competes with crude oil [ citation needed ] . New approaches are testing, e.g., dripping methods that produce ultra-thin layers for maximal growth using flue gas and waste water. Much research is also being done worldwide on genetically modified and optimized microalgae. | https://en.wikipedia.org/wiki/Photobioreactor |
Photobiotin is a derivative of biotin used as a biochemical tool. It is composed of a biotin group, a linker group, and a photoactivatable aryl azide group.
The photoactivatable group provides nonspecific labeling of proteins, DNA and RNA probes or other molecules. Biotinylation of DNA and RNA with photoactivatable biotin is easier and less expensive than enzymatic methods, since the DNA and RNA do not degrade. Photobiotin is most effectively activated by light at 260-475 nm . | https://en.wikipedia.org/wiki/Photobiotin |
Photoblasticism is a mechanism of seed dormancy . Photoblastic seeds require light in order to germinate . [ 2 ] Once germination starts, the stored nutrients that accumulated during maturation begin to be digested, which supports cell expansion and overall growth. [ 3 ] In light-stimulated germination, phytochrome B (PHYB) is the photoreceptor responsible for the beginning stages of germination. When red light is present, PHYB is converted to its active form and moves from the cytoplasm to the nucleus, where it upregulates the degradation of PIF1. PIF1 (phytochrome-interacting factor 1) negatively regulates germination by increasing the expression of proteins that repress the synthesis of gibberellin (GA), a major hormone in the germination process. [ 4 ] Another factor that promotes germination is HFR1, which accumulates in light in some way and forms inactive heterodimers with PIF1. [ 5 ]
Although the exact mechanism is not known, nitric oxide (NO) plays a role in this pathway as well. NO is thought to repress PIF1 gene expression and to stabilise HFR1 in some way, supporting the start of germination. [ 3 ] Bethke et al. (2006) exposed dormant Arabidopsis seeds to NO gas, and within 4 days 90% of the seeds broke dormancy and germinated. The authors also examined how NO and GA affect the vacuolation of aleurone cells, which allows stored nutrients to be mobilised for digestion. In a NO mutant, vacuolation was inhibited, but when GA was later added the process resumed, suggesting that NO acts upstream of GA in the pathway. NO may also decrease sensitivity to abscisic acid (ABA), a plant hormone largely responsible for seed dormancy. [ 6 ] The balance between GA and ABA is important: when ABA levels are higher than GA levels, seeds remain dormant, and when GA levels are higher, seeds germinate. [ 7 ] GA is known to substitute for the light requirement of germination in positively photoblastic seeds. [ 8 ] | https://en.wikipedia.org/wiki/Photoblasticism |
In optics , photobleaching (sometimes termed fading) is the photochemical alteration of a dye or a fluorophore molecule such that it is permanently unable to fluoresce. This is caused by cleaving of covalent bonds or non-specific reactions between the fluorophore and surrounding molecules. [ 1 ] [ 2 ] Such irreversible modifications in covalent bonds are caused by transition from a singlet state to the triplet state of the fluorophores. The number of excitation cycles to achieve full bleaching varies. In microscopy , photobleaching may complicate the observation of fluorescent molecules, since they will eventually be destroyed by the light exposure necessary to stimulate them into fluorescing. This is especially problematic in time-lapse microscopy .
However, photobleaching may also be used prior to applying the (primarily antibody -linked) fluorescent molecules, in an attempt to quench autofluorescence . This can help improve the signal-to-noise ratio .
Photobleaching may also be exploited to study the motion and/or diffusion of molecules, for example via FRAP , in which the movement of cellular components can be confirmed by observing a recovery of fluorescence at the site of photobleaching, or FLIP , in which multiple rounds of photobleaching are performed so that the spread of fluorescence loss through the cell can be observed.
Loss of activity caused by photobleaching can be controlled by reducing the intensity or time-span of light exposure, by increasing the concentration of fluorophores, by reducing the frequency and thus the photon energy of the input light, or by employing more robust fluorophores that are less prone to bleaching (e.g. cyanine dyes, Alexa Fluors or DyLight Fluors , Atto dyes, Janelia dyes and others). To a reasonable approximation, a given molecule will be destroyed after a constant total exposure (emission intensity × emission time × number of cycles) because, in a constant environment, each absorption-emission cycle has an equal probability of causing photobleaching.
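The constant-exposure approximation above implies that each absorption-emission cycle bleaches a fluorophore with the same small probability p, so the surviving fraction decays exponentially with cumulative dose. This sketch (illustrative values only, not from the article) states the model and checks it with a small Monte Carlo simulation:

```python
import math
import random

def surviving_fraction(n_cycles: float, p_bleach: float) -> float:
    """Expected fraction of fluorophores still emitting after n cycles,
    assuming an equal bleach probability p per cycle (exponential decay)."""
    return math.exp(-p_bleach * n_cycles)

def simulate(n_molecules: int, n_cycles: int, p_bleach: float, seed: int = 0) -> float:
    """Monte Carlo check: fraction of molecules surviving n cycles when
    every cycle carries the same independent bleaching risk."""
    rng = random.Random(seed)
    alive = sum(
        1 for _ in range(n_molecules)
        if all(rng.random() > p_bleach for _ in range(n_cycles))
    )
    return alive / n_molecules

# A dye with p = 1e-5 per cycle survives ~1/p = 100,000 cycles on average;
# after exactly 1/p cycles the expected surviving fraction is exp(-1).
print(surviving_fraction(1e5, 1e-5))  # ~0.368
```

The same expected exposure can come from high intensity for a short time or low intensity for a long time, which is why the text describes the product (intensity × time × cycles) as the controlling quantity.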
Photobleaching is an important parameter to account for in real-time single-molecule fluorescence imaging in biophysics . At the light intensities used in single-molecule fluorescence imaging (0.1-1 kW/cm 2 in typical experimental setups), even the most robust fluorophores continue to emit for only up to 10 seconds before photobleaching in a single step. For some dyes, lifetimes can be prolonged 10-100 fold using oxygen scavenging systems (up to 1000 seconds with optimisation of imaging parameters and signal-to-noise). For example, a combination of protocatechuic acid (PCA) and protocatechuate 3,4-dioxygenase (PCD) is often used as an oxygen scavenging system, extending emission to more than a minute.
Depending on their specific chemistry, molecules can photobleach after absorbing just a few photons, while more robust molecules can undergo many absorption/emission cycles before destruction.
This use of the term "lifetime" is not to be confused with the "lifetime" measured by fluorescence lifetime imaging . | https://en.wikipedia.org/wiki/Photobleaching |
A photocarcinogen is a substance which causes cancer when an organism is exposed to it, then illuminated. Many chemicals that are not carcinogenic can be photocarcinogenic when combined with exposure to light, especially UV . This can easily be understood from a photochemical perspective: The reactivity of a chemical substance itself might be low, but after illumination it transitions to an excited state, which is chemically much more reactive and therefore potentially harmful to biological tissue and DNA. Light can also split photocarcinogens, releasing free radicals , whose unpaired electrons cause them to be extremely reactive.
The type of UV radiation determines the characteristics of photocarcinogenesis. For example, UVA radiation characteristically gives rise to reactive oxygen species (ROS) such as hydrogen peroxide, whereas UVB radiation correlates with cyclobutane pyrimidine dimer (CPD) lesions. [ 1 ] The ROS are produced when endogenous photosensitizers are stimulated by UVA radiation. [ 1 ] DNA absorption of UV radiation primarily leads to CPD and 6-4 lesions; in a CPD lesion, neighboring pyrimidines are fused into a cyclobutane ring. DNA absorption of UV radiation can also lead to TC, CC, and TT lesions, but with much less frequency. The failure of DNA repair mechanisms to fix such lesions notably characterizes photocarcinogenesis. [ 2 ]
In addition, UV radiation often increases the production of cytokines such as interleukin-10 which indirectly hinder antigen presentation in cells. Moreover, UV radiation frequently leads to mutations in the tumor suppressor gene p53 in photocarcinogenesis. [ 3 ]
Determination of photocarcinogenicity can be accomplished using different techniques, including epidemiological studies and in-vivo studies. In one in-vivo technique, hairless mice are exposed to suspected photocarcinogens, and are then exposed to different wavelengths of light, ranging from visible to UV-B. [ 4 ] Tumor incidence is compared to control mice that have not been exposed to the drug or chemical being tested.
Melanin is not a photocarcinogen, because it dissipates the excitation energy as small amounts of heat (see photoprotection ). Oxybenzone (a component of some sunscreens ) is a suspected photocarcinogen owing to its skin-penetrating qualities and its production of free radicals. One medication that has been proven to be photocarcinogenic is psoralen . This drug is used in photodynamic therapy for many inflammatory skin conditions, where the drug is combined with skin exposure to UV light. Epidemiological studies dating back to the 1970s have shown a strong association between psoralen treatment and skin cancer incidence 5 to 15 years afterwards. [ 5 ] A logistic regression study has shown a positive association between citrus consumption, both as fruit and as fruit juice, and the risk of developing melanoma. This increased risk is most pronounced in fair-skinned individuals. The proposed reason for this correlation is the high concentration of the photocarcinogenic compound psoralen in citrus fruits. [ 6 ] | https://en.wikipedia.org/wiki/Photocarcinogen |
In chemistry , photocatalysis is the acceleration of a photoreaction in the presence of a photocatalyst , the excited state of which "repeatedly interacts with the reaction partners forming reaction intermediates and regenerates itself after each cycle of such interactions." [ 1 ] In many cases, the catalyst is a solid that upon irradiation with UV or visible light generates electron-hole pairs, which in turn generate free radicals . Photocatalysts belong to three main groups: heterogeneous , homogeneous , and plasmonic antenna-reactor catalysts. [ 2 ] The choice of catalyst depends on the intended application and the required catalytic reaction.
The earliest mention came in 1911, when the German chemist Alexander Eibner studied the effect of illuminated zinc oxide (ZnO) on the bleaching of the dark blue pigment Prussian blue. [ 3 ] [ 4 ] Around this time, Bruner and Kozak published an article discussing the deterioration of oxalic acid in the presence of uranyl salts under illumination, [ 4 ] [ 5 ] and in 1913 Landau published an article explaining the phenomenon of photocatalysis. Their contributions led to the development of actinometric measurements , which provide the basis for determining photon flux in photochemical reactions. [ 4 ] [ 6 ] After a hiatus, in 1921, Baly et al. used ferric hydroxides and colloidal uranium salts as catalysts for the creation of formaldehyde under visible light. [ 4 ] [ 7 ]
In 1938 Goodeve and Kitchener discovered that TiO 2 , a highly stable and non-toxic oxide, could act in the presence of oxygen as a photosensitizer for bleaching dyes: ultraviolet light absorbed by TiO 2 produced active oxygen species on its surface, bleaching organic chemicals via photooxidation . This was the first observation of the fundamental characteristics of heterogeneous photocatalysis. [ 4 ] [ 8 ]
Research in photocatalysis again paused until 1964, when V.N. Filimonov investigated isopropanol photooxidation on ZnO and TiO 2 , [ 4 ] [ 9 ] while in 1965 Kato and Mashio, Doerffler and Hauffe, and Ikekawa et al. explored the oxidation/photooxidation of CO 2 and organic solvents with irradiated ZnO. [ 4 ] [ 10 ] [ 11 ] [ 12 ] In 1970, Formenti et al. and Tanaka and Blyholder observed the oxidation of various alkenes and the photocatalytic decay of N 2 O, respectively. [ 4 ] [ 13 ] [ 14 ]
A breakthrough occurred in 1972, when Akira Fujishima and Kenichi Honda discovered that electrochemical photolysis of water occurred when a TiO 2 electrode irradiated with ultraviolet light was electrically connected to a platinum electrode. As the ultraviolet light was absorbed by the TiO 2 electrode, electrons flowed from the anode to the platinum cathode where hydrogen gas was produced. This was one of the first instances of hydrogen production from a clean and cost-effective source, as the majority of hydrogen production comes from natural gas reforming and gasification . [ 4 ] [ 15 ] Fujishima's and Honda's findings led to other advances. In 1977, Nozik discovered that the incorporation of a noble metal in the electrochemical photolysis process, such as platinum and gold , among others, could increase photoactivity, and that an external potential was not required. [ 4 ] [ 16 ] Wagner and Somorjai (1980) and Sakata and Kawai (1981) delineated hydrogen production on the surface of strontium titanate (SrTiO 3 ) via photogeneration, and the generation of hydrogen and methane from the illumination of TiO 2 and PtO 2 in ethanol , respectively. [ 4 ] [ 17 ] [ 18 ]
For many decades photocatalysis was not developed for commercial purposes. In 2023, however, multiple patents were granted to a U.S. company, Pure-Light Technologies, Inc., for formulas and processes that allow widespread commercialization for VOC reduction and germicidal action. [ 19 ] Chu et al. (2017) assessed the future of electrochemical photolysis of water, identifying its major challenge as the development of a cost-effective, energy-efficient photoelectrochemical (PEC) tandem cell that would "mimic natural photosynthesis". [ 4 ] [ 20 ]
In heterogeneous catalysis the catalyst is in a different phase from the reactants. Heterogeneous photocatalysis is a discipline which includes a large variety of reactions: mild or total oxidations , dehydrogenation , hydrogen transfer, 18 O 2 – 16 O 2 and deuterium-alkane isotopic exchange, metal deposition, water detoxification, and gaseous pollutant removal.
Most heterogeneous photocatalysts are transition metal oxides and semiconductors . Unlike metals, which have a continuum of electronic states, semiconductors possess a void energy region where no energy levels are available to promote recombination of an electron and hole produced by photoactivation in the solid. The difference in energy between the filled valence band and the empty conduction band in the MO diagram of a semiconductor is the band gap . [ 21 ] When the semiconductor absorbs a photon with energy equal to or greater than the material's band gap , an electron is excited from the valence band to the conduction band, generating an electron hole in the valence band. This electron-hole pair is an exciton . [ 21 ] The excited electron and hole can recombine and release the energy gained from the excitation of the electron as heat. Such exciton recombination is undesirable, and higher recombination rates reduce efficiency. [ 22 ] Efforts to develop functional photocatalysts therefore often emphasize extending exciton lifetime and improving electron-hole separation, using approaches that rely on structural features such as phase hetero-junctions (e.g. anatase - rutile interfaces), noble-metal nanoparticles , silicon nanowires and substitutional cation doping. [ 23 ] The ultimate goal of photocatalyst design is to facilitate reactions of the excited electrons with oxidants to produce reduced products, and/or reactions of the generated holes with reductants to produce oxidized products. Due to the generation of positive holes (h + ) and excited electrons (e − ), oxidation-reduction reactions take place at the surface of semiconductors irradiated with light.
In one mechanism of the oxidative reaction, holes react with the moisture present on the surface and produce a hydroxyl radical. The reaction starts by photo-induced exciton generation in the metal oxide (MO) surface by photon (hv) absorption :
Oxidative reactions due to the photocatalytic effect:
Reductive reactions due to the photocatalytic effect:
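The reaction equations themselves are not reproduced in the text above. The commonly cited scheme, sketched here from standard literature rather than taken from this article, is:

```latex
% Sketch of the commonly cited photocatalytic redox scheme; the original
% equations are not reproduced in the text above.
\begin{align*}
\mathrm{MO} + h\nu &\longrightarrow \mathrm{MO}\,(h^{+} + e^{-})
    && \text{(exciton generation)}\\
h^{+} + \mathrm{H_2O} &\longrightarrow \mathrm{H^{+}} + {}^{\bullet}\mathrm{OH}
    && \text{(oxidative pathway)}\\
e^{-} + \mathrm{O_2} &\longrightarrow {}^{\bullet}\mathrm{O_2^{-}}
    && \text{(reductive pathway)}\\
{}^{\bullet}\mathrm{O_2^{-}} + 2\,\mathrm{H^{+}} + e^{-} &\longrightarrow \mathrm{H_2O_2} && \\
\mathrm{H_2O_2} + e^{-} &\longrightarrow {}^{\bullet}\mathrm{OH} + \mathrm{OH^{-}} &&
\end{align*}
```

Both pathways terminate in hydroxyl radicals, consistent with the discussion that follows.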
Ultimately, both reactions generate hydroxyl radicals. These radicals are strongly oxidative and nonselective, with a redox potential of E 0 = +3.06 V. [ 24 ] This is significantly higher than the oxidation potentials of most common organic compounds, which are typically below E 0 = +2.00 V. [ 25 ] This difference accounts for the non-selective oxidative behavior of these radicals.
TiO 2 , a wide band-gap semiconductor , is a common choice for heterogeneous catalysis. Its inertness to chemical environments and long-term photostability have made TiO 2 an important material in many practical applications. Investigation of TiO 2 in the rutile (band gap 3.0 eV) and anatase (band gap 3.2 eV) phases is common. [ 22 ] The absorption of photons with energy equal to or greater than the band gap of the semiconductor initiates photocatalytic reactions, producing electron-hole (e − /h + ) pairs: [ 22 ]
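Since absorption requires photon energies at or above the band gap, the corresponding absorption-edge wavelengths follow from λ[nm] ≈ 1240 / E g [eV]. A minimal sketch using the band gaps quoted above:

```python
# Convert a band gap (eV) to the longest absorbable wavelength (nm)
# using lambda[nm] ~ 1239.84 / E_g[eV]. Band gaps are those quoted above.
H_C_EV_NM = 1239.84  # h*c in eV*nm

def absorption_edge_nm(band_gap_ev: float) -> float:
    """Longest wavelength (nm) a material with this band gap can absorb."""
    return H_C_EV_NM / band_gap_ev

for phase, gap_ev in [("rutile", 3.0), ("anatase", 3.2)]:
    print(f"{phase}: E_g = {gap_ev} eV -> absorption edge ~ {absorption_edge_nm(gap_ev):.0f} nm")
```

Both edges (~413 nm and ~387 nm) lie in the near-UV, consistent with the requirement of UV illumination for undoped TiO 2 noted later in this article.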
Where the electron is in the conduction band and the hole is in the valence band . The irradiated TiO 2 particle can behave as an electron donor or acceptor for molecules in contact with the semiconductor. It can participate in redox reactions with adsorbed species, as the valence band hole is strongly oxidizing while the conduction band electron is strongly reducing. [ 22 ]
In homogeneous photocatalysis, the reactants and the photocatalysts exist in the same phase . The process by which the atmosphere self-cleans and removes large organic compounds is a gas-phase homogeneous photocatalysis reaction. [ 26 ] The ozone process is often referenced when developing many photocatalysts:
Most homogeneous photocatalytic reactions occur in the aqueous phase, with a transition-metal complex as photocatalyst. The wide use of transition-metal complexes as photocatalysts is in large part due to the large band gap and high stability of these species. [ 27 ] Homogeneous photocatalysts are common in the production of clean hydrogen fuel, with notable use of cobalt and iron complexes . [ 27 ]
Iron complex hydroxy-radical formation using the ozone process is common in the production of hydrogen fuel (similar to Fenton's reagent process done in low pH conditions without photoexcitation ): [ 27 ]
Complex-based photocatalysts are semiconductors, and operate according to the same electronic principles as heterogeneous catalysts. [ 28 ]
A plasmonic antenna-reactor photocatalyst is a photocatalyst that combines a catalyst with an attached antenna that increases the catalyst's ability to absorb light, thereby increasing its efficiency.
A SiO 2 catalyst combined with an Au light absorber accelerated hydrogen sulfide -to-hydrogen reactions. The process is an alternative to the conventional Claus process that operates at 800–1,000 °C (1,470–1,830 °F). [ 29 ]
A Fe catalyst combined with a Cu light absorber can produce hydrogen from ammonia ( NH 3 ) at ambient temperature using visible light. Conventional Cu-Ru production operates at 650–1,000 °C (1,202–1,832 °F). [ 30 ]
Photoactive catalysts such as TiO 2 and ZnO nanorods have been introduced over the last decade. Most suffer from the fact that they can only perform under UV irradiation due to their band structure. Other photocatalysts, including a graphene-ZnO nanocompound, counter this problem. [ 32 ] For several decades, there have been numerous attempts to develop active photocatalysts with broad light absorption capabilities. High-entropy photocatalysts, first introduced in 2020, [ 33 ] are the result of one such effort. They have been utilized for hydrogen production, oxygen production, carbon dioxide conversion, and plastic waste conversion. [ 34 ]
Micro-sized ZnO tetrapodal particles have been added in pilot paper production . [ 31 ] The most common ZnO nanostructures are one-dimensional, such as nanorods , nanotubes , nanofibers and nanowires, but nanoplates, nanosheets, nanospheres and tetrapods are also used. ZnO is strongly oxidative, chemically stable, with enhanced photocatalytic activity, and has a large free-exciton binding energy . It is non-toxic, abundant, biocompatible , biodegradable, environmentally friendly, low cost, and compatible with simple chemical synthesis. ZnO nevertheless faces limits to its widespread use in photocatalysis under solar radiation. Several approaches have been suggested to overcome this limitation, including doping to reduce the band gap and improve charge carrier separation. [ 35 ]
Photocatalytic water splitting separates water into hydrogen and oxygen: [ 36 ]
The most prevalently investigated material, TiO 2 , is inefficient. Mixtures of TiO 2 and nickel oxide (NiO) are more active, as NiO allows significant exploitation of the visible spectrum. [ 37 ] One efficient photocatalyst in the UV range is based on sodium tantalate (NaTaO 3 ) doped with lanthanum and loaded with a nickel oxide cocatalyst . The surface is grooved with nanosteps from doping with lanthanum (3–15 nm range, see nanotechnology ). The NiO particles are present on the edges, with the oxygen evolving from the grooves.
Titanium dioxide takes part in self-cleaning glass . Free radicals [ 38 ] [ 39 ] generated from TiO 2 oxidize organic matter . [ 40 ] [ 41 ] The rough wedge-like TiO 2 surface can be modified with a hydrophobic monolayer of octadecylphosphonic acid (ODP). TiO 2 surfaces that were plasma etched for 10 seconds and subsequently surface-modified with ODP showed a water contact angle greater than 150°. The surface was converted into a superhydrophilic surface (water contact angle = 0°) upon UV illumination, due to rapid decomposition of the octadecylphosphonic acid coating resulting from TiO 2 photocatalysis. Due to TiO 2 's wide band gap, light absorption by the semiconductor material and the resulting superhydrophilic conversion of undoped TiO 2 require ultraviolet radiation (wavelength <390 nm), restricting self-cleaning to outdoor applications. [ 42 ]
TiO 2 can convert CO 2 into gaseous hydrocarbons. [ 49 ] The proposed reaction mechanisms involve the creation of a highly reactive carbon radical from carbon monoxide and carbon dioxide, which then reacts with photogenerated protons to ultimately form methane . Efficiencies of TiO 2 -based photocatalysts are low, although nanostructures such as carbon nanotubes [ 50 ] and metallic nanoparticles [ 51 ] help.
ePaint is a less-toxic alternative to conventional antifouling marine paints that generates hydrogen peroxide.
Photocatalysis of organic reactions by polypyridyl complexes, [ 52 ] porphyrins, [ 53 ] or other dyes [ 54 ] can produce materials inaccessible by classical approaches. Most photocatalytic dye degradation studies have employed TiO 2 . The anatase form of TiO 2 has higher photon absorption characteristics. [ 55 ]
Radical species generated by photocatalysts allow the degradation of organic pollutants into non-toxic compounds at high efficiency. One example is the use of CuO nanosheets to break down azo bonds in food dyes, with 96.99% degradation after only 6 minutes. [ 56 ] Degradation of organic matter is a highly applicable property, particularly in waste processing.
The use of photocatalyst TiO 2 as a support system for filtration membranes shows promise in improving membrane bioreactors in the treatment of wastewater. [ 57 ] Polymer-based membranes have shown reduced fouling and self-cleaning properties in both blended and coated TiO 2 membranes. Photocatalyst-coated membranes show the most promise, as the increased surface exposure of the photocatalyst increases its organic degradation activity. [ 58 ]
Photocatalysts are also highly effective reducers of toxic heavy metals like hexavalent chromium from water systems. Under visible light the reduction of Cr(VI) by a Ce-ZrO 2 sol-gel on a silicon carbide was 97% effective at reducing the heavy metal to trivalent chromium . [ 59 ]
Light2CAT was a project funded by the European Commission from 2012 to 2015. It aimed to develop a modified TiO 2 that can absorb visible light and include this modified TiO 2 into construction concrete. The TiO 2 degrades harmful pollutants such as NOx into NO 3 − . The modified TiO 2 is in use in Copenhagen and Holbæk, Denmark, and Valencia, Spain. This “self-cleaning” concrete led to a 5-20% reduction in NOx over the course of a year. [ 60 ] [ 61 ]
ISO 22197-1:2007 specifies a test method for the measurement of NO 2 removal for materials that contain a photocatalyst or have superficial photocatalytic films. [ 62 ]
Specific FTIR systems are used to characterize photocatalytic activity or passivity, especially with respect to volatile organic compounds , and representative binder matrices. [ 63 ]
Mass spectrometry allows measurement of photocatalytic activity by tracking the decomposition of gaseous pollutants such as nitrogen oxides (NOx) or CO 2 . [ 64 ]
Photocatalyst activity indicator ink ( paii ) is a substance used to identify the presence of an underlying heterogeneous photocatalyst and to measure its activity. Such inks render visible the activity of photocatalytic coatings applied to various "self-cleaning" products. The inks contain a dyestuff that reacts to ultraviolet radiation in the presence of the photocatalytic agent in the coating. They are applied to the coated product (usually by a pen, brush, or drawdown bar) and show a color change or disappearance when exposed to ultraviolet radiation. The use of a paii based on the dye resazurin forms the basis of an ISO standard test for photocatalytic activity. [ 1 ]
A photocatalyst activity indicator ink quickly and easily identifies the presence of an underlying heterogeneous photocatalyst and provides a measure of its activity. A heterogeneous photocatalyst is a material that uses absorbed light energy (usually UV ) to drive desired reactions that would not otherwise proceed under ambient conditions. Commercial photocatalytic products, which include: architectural glass, [ 2 ] [ 3 ] [ 4 ] ceramic tiles, [ 5 ] [ 6 ] roof tiles, [ 7 ] cement, [ 8 ] [ 9 ] paint, [ 10 ] [ 11 ] and fabrics [ 12 ] [ 13 ] [ 14 ] are marketed on their ability to clean their own surfaces (i.e. are self-cleaning ) and the ambient air. Paiis address the industry need for a rapid, simple, inexpensive method to demonstrate and assess the activities of the usually thin, invisible to the eye, photocatalytic coatings present on self-cleaning products. A paii, coated onto the surface of a photocatalyst material under test, works via a photoreductive mechanism, in which light absorbed by the photocatalyst drives the reduction of the dye in the paii, thereby producing a striking color change, [ 15 ] [ 16 ] which can be measured through the use of a simple mobile phone camera + app, in lieu of any sophisticated analytical equipment. [ 17 ] [ 18 ] Uses of paiis include: (i) laboratory, factory and on-site commercial photocatalyst product quality control (ii) marketing, for the rapid and striking demonstration of the efficacy of the usually invisible and otherwise slow-acting photocatalyst coating, (iii) counterfeit detection and (iv) evaluating new photocatalytic materials. The development and applications for such paiis have been reviewed in detail. [ 19 ]
Heterogeneous photocatalysis is the process that underpins the activity of most architectural materials, such as glass, [ 2 ] [ 3 ] [ 4 ] ceramic tiles, [ 5 ] [ 6 ] roof tiles, [ 7 ] concrete, [ 8 ] [ 9 ] paint, [ 10 ] [ 11 ] and fabrics [ 12 ] [ 13 ] [ 14 ] which are promoted as being 'self-cleaning' (or 'air-purifying'). These photocatalytic materials facilitate the oxidative mineralisation of organic and inorganic species by ambient oxygen on their surfaces, rendering the surfaces clean and, usually, hydrophilic . In most commercial photocatalytic products the active layer is a thin, clear, colourless coating of the semiconductor anatase titania , which requires UV light to photogenerate the necessary electrons (e − ) and holes (h + ), in its conduction and valence bands, respectively, to promote the photocatalytic process. [ 20 ] A schematic of the key processes behind the photocatalytic mineralisation of an organic pollutant on the surface of a titania photocatalyst film is illustrated in figure 1 and the overall reaction is summarised by:
Water molecules—adsorbed to the photocatalyst—are also needed to generate the hydroxyl groups on the surface. [ 21 ]
The marketing of photocatalytic products and prevention of counterfeiting is made difficult because the photocatalytic coatings are usually and necessarily invisible to the eye. [ 20 ] One way to achieve a visual demonstration of photocatalysis is to use a dyestuff, like methylene blue , dissolved in water, as the organic species to be mineralised, since, as the photocatalytic process proceeds, the colour of the dye disappears as it is oxidised. [ 22 ] This approach forms the basis of a well-established ISO test for the photocatalytic activity of films. [ 23 ] However, most photocatalyst commercial products use only a thin layer of titania (e.g. ca. 15 nm thick in self-cleaning glass) [ 24 ] and ambient UV levels are often low (e.g. for a sunny day in the UK the UVA irradiance is only ca. 4 mW/cm 2 ). As a consequence, the photocatalytic oxidative bleaching of methylene blue is usually very slow, taking many hours, [ 23 ] and so inappropriate for marketing at least.
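For a sense of scale, the quoted UVA irradiance can be converted to a photon flux. The 365 nm wavelength used below is an assumed representative UVA value, not a figure from the source:

```python
# Estimate UVA photon flux from the ca. 4 mW/cm^2 irradiance quoted above.
# The 365 nm wavelength is an assumed representative UVA value.
E_CHARGE = 1.602176634e-19  # J per eV
H_C_EV_NM = 1239.84         # h*c in eV*nm

def photon_flux_per_cm2(irradiance_w_cm2: float, wavelength_nm: float) -> float:
    """Photons per second per cm^2 at the given irradiance and wavelength."""
    photon_energy_j = (H_C_EV_NM / wavelength_nm) * E_CHARGE
    return irradiance_w_cm2 / photon_energy_j

flux = photon_flux_per_cm2(4e-3, 365.0)  # ~7e15 photons/s/cm^2
```

Even at this substantial flux, bleaching remains slow because only the ~15 nm active layer absorbs and quantum efficiencies are low.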
Photocatalyst activity indicator inks are a recent advance in the visual demonstration of photocatalysis and the assessment of the activity of photocatalyst materials. [ 15 ] [ 25 ] They are inexpensive, easy to use and provide a very quick route to demonstrating the presence of a photocatalytic film, even under low levels of UV light. Unlike the photo-oxidative bleaching of methylene blue, [ 22 ] they use the underlying semiconductor photocatalyst film to photoreduce the dye (D ox in figure 2), in the ink coating, to another (usually colourless) form, (D red in figure 2) whilst simultaneously oxidising an easily oxidised organic species, a sacrificial electron donor (SED), such as glycerol, which is also present in the ink. [ 15 ] [ 25 ] [ 26 ] The kinetics of reduction of the dye in a paii have been studied in great detail. [ 27 ] [ 28 ] Figure 2 illustrates the basic principles of operation of a paii when applied to a product that has a thin photocatalyst film coating.
The ink is applied to the photocatalyst coating, usually using either a felt-tipped pen, air-brush, rubber stamp, paint brush, or a drawdown bar, and then exposed to sunlight or another appropriate light source. The ink identifies the presence of the photocatalyst coating by changing colour upon irradiation of the latter at a rate (usually < 10 min [ 15 ] ) which provides a measure of the film's activity.
For example, it has been established that the rate of change in colour of a paii on commercial self-cleaning glass is directly related to the rate at which the glass is also able to photo-oxidatively mineralise, via reaction (1), the wax-like, natural fatty acid, stearic acid , [ 15 ] [ 29 ] [ 30 ] [ 31 ] found in finger prints. [ 32 ] The rate of the rapid colour change associated with photocatalyst activity indicator inks has also been directly correlated with the photocatalytic oxidation of methylene blue [ 33 ] [ 34 ] and NO x . [ 35 ] [ 36 ] It has also been shown that digital colour analysis of photographs monitoring the colour change of a paii can be used to extract apparent absorbance data which correlates well with UV-vis absorption data for the same sample, without the need for expensive spectrophotometric instrumentation. [ 18 ]
By making the dyes in the ink increasingly difficult to reduce chemically, for example by using: basic blue 66, [ 37 ] resazurin, [ 15 ] and acid violet 7, [ 35 ] respectively, it is possible to make paiis which are effective on photocatalyst coatings which exhibit, respectively: low (most self-cleaning tiles), moderate (self-cleaning glass) or high (self-cleaning paints) activities. Paiis based on the dyes 2,6-dichloroindophenol (DCIP) [ 38 ] and methylene blue [ 28 ] have also been reported.
Paiis can be used as quality control and marketing tools in commerce and as a quick and easy way to assess and/or map the activities of new photocatalytic materials in research. [ 15 ] [ 25 ] [ 39 ] [ 26 ] [ 16 ] In addition, it has also been demonstrated that such inks can be used on highly coloured and black surfaces, provided the oxidised and/or reduced form of the redox dye is luminescent, [ 40 ] and that they can be effectively used to demonstrate the activity of visible light photocatalysts. [ 41 ] [ 42 ] In light of the need for in situ testing of commercial photocatalyst materials, paii labels have been developed that can be applied simply in the field on any surface to be tested, in both a non-reusable [ 43 ] and reusable [ 44 ] form. One noteworthy application of paiis is where a uniform film has been applied to a photocatalytic surface, and the variation in the rate of colour change across the surface has been monitored and used to generate a surface map of the photoactivity. [ 26 ] [ 45 ] By this method the uniformity of the surface activity may be investigated, and any "hotspots" of photoactivity identified. [ 45 ] By varying the composition of a semiconductor photocatalyst surface across the surface itself, a paii photoactivity surface map may be used to determine the optimal composition which yields the greatest photocatalytic response. [ 26 ] [ 46 ]
The rapid colour change of paiis makes them suitable for such applications as:
Photocatalytic water splitting is a process that uses photocatalysis to dissociate water (H 2 O) into hydrogen ( H 2 ) and oxygen ( O 2 ). The inputs are light energy ( photons ), water, and one or more catalysts. The process is inspired by photosynthesis , which converts water and carbon dioxide into oxygen and carbohydrates. Water splitting using solar radiation has not been commercialized. [ 1 ] Photocatalytic water splitting is done by dispersing photocatalyst particles in water or depositing them on a substrate, unlike photoelectrochemical cells , which are assembled into a cell with a photoelectrode. [ 2 ] Hydrogen fuel production using water and light (photocatalytic water splitting ), instead of petroleum, is an important renewable energy strategy.
Two moles of H 2 O are split into one mole of O 2 and two moles of H 2 using light, in the process shown below.
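The overall reaction can be written as follows; the standard Gibbs free energy value is supplied here from common literature for context, not from this text:

```latex
% Overall photocatalytic water-splitting reaction. The Gibbs energy
% (~237 kJ per mole of H2, equivalent to 1.23 eV per transferred electron)
% is a standard literature figure supplied for context.
2\,\mathrm{H_2O} \;\xrightarrow{\;h\nu\;}\; 2\,\mathrm{H_2} + \mathrm{O_2},
\qquad \Delta G^{\circ} \approx +237\ \mathrm{kJ\,mol^{-1}}
```

This positive Gibbs energy is what sets the 1.23 eV minimum photon energy discussed below.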
A photon with an energy greater than 1.23 eV is needed to generate electron–hole pairs , which react with water on the surface of the photocatalyst. The photocatalyst must have a band gap large enough to split water; in practice, losses from the material's internal resistance and the overpotential of the water-splitting reaction increase the required band gap energy to 1.6–2.4 eV to drive water splitting. [ 2 ]
The process of water-splitting is a highly endothermic process (Δ H > 0). Water splitting occurs naturally in photosynthesis when the energy of four photons is absorbed and converted into chemical energy through a complex biochemical pathway (Dolai's or Kok's S-state diagrams ). [ 3 ]
O–H bond homolysis in water requires energy of 6.5–6.9 eV (a UV photon). [ 4 ] [ 5 ] Infrared light technically carries enough energy for the net water-splitting reaction, but not enough to mediate the elementary reactions leading to the various intermediates involved (this is why there is still water on Earth). Nature overcomes this challenge by absorbing four visible photons. In the laboratory, this challenge is typically overcome by coupling the hydrogen production reaction with a sacrificial reductant other than water. [ 6 ]
Materials used in photocatalytic water splitting fulfill the band requirements and typically have dopants and/or co-catalysts added to optimize their performance. A sample semiconductor with the proper band structure is titanium dioxide ( TiO 2 ); it is typically used with a co-catalyst such as platinum (Pt) to increase the rate of H 2 production. [ 7 ] A major problem in photocatalytic water splitting is photocatalyst decomposition and corrosion. [ 7 ]
Photocatalysts must conform to several key principles in order to be considered effective at water splitting. A key principle is that H 2 and O 2 evolution should occur in a stoichiometric 2:1 ratio; significant deviation could be due to a flaw in the experimental setup and/or a side reaction, neither of which indicate a reliable photocatalyst for water splitting. The prime measure of photocatalyst effectiveness is quantum yield (QY), which is:
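The equation itself is not reproduced in the text above. The usual convention, stated here as a sketch of the standard definition rather than a quotation from this article, is:

```latex
% Standard quantum-yield convention for photocatalytic water splitting.
% The factor of two reflects the two electrons consumed per H2 molecule.
\mathrm{QY}\,(\%) \;=\; \frac{\text{number of reacted electrons}}{\text{number of absorbed photons}} \times 100
\;=\; \frac{2 \times \text{number of evolved } \mathrm{H_2} \text{ molecules}}{\text{number of absorbed photons}} \times 100
```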
To assist in comparison, the rate of gas evolution can also be used. A photocatalyst that has a high quantum yield and gives a high rate of gas evolution is a better catalyst.
The other important factor for a photocatalyst is the range of light that is effective for its operation. For example, a photocatalyst that can use visible photons is more desirable than one restricted to UV photons.
The solar-to-hydrogen (STH) efficiency of photocatalytic water splitting, however, has remained very low. An STH efficiency of 9.2% has been reported using an indium gallium nitride photocatalyst. [ 8 ]
NaTaO 3 :La yielded the highest water splitting rate of photocatalysts without using sacrificial reagents. [ 7 ] This ultraviolet -based photocatalyst was reported to show water splitting rates of 9.7 mmol/h and a quantum yield of 56%. The nanostep structure of the material promotes water splitting as edges functioned as H 2 production sites and the grooves functioned as O 2 production sites. Addition of NiO particles as co-catalysts assisted in H 2 production; this step used an impregnation method with an aqueous solution of Ni(NO 3 ) 2 •6 H 2 O and evaporated the solution in the presence of the photocatalyst. NaTaO 3 has a conduction band higher than that of NiO , so photo-generated electrons are more easily transferred to the conduction band of NiO for H 2 evolution. [ 9 ]
K 3 Ta 3 B 2 O 12 is another catalyst solely activated by UV and above light. It does not have the performance or quantum yield of NaTaO 3 :La. However, it can split water without the assistance of co-catalysts and gives a quantum yield of 6.5%, along with a water splitting rate of 1.21 mmol/h. This ability is due to the pillared structure of the photocatalyst, which involves TaO 6 pillars connected by BO 3 triangle units. Loading with NiO did not assist the photocatalyst due to the highly active H 2 evolution sites. [ 10 ]
( Ga .82 Zn .18 )( N .82 O .18 ) had the highest quantum yield in visible light for visible light-based photocatalysts that do not utilize sacrificial reagents as of October 2008. [ 7 ] The photocatalyst featured a quantum yield of 5.9% and a water splitting rate of 0.4 mmol/h. Tuning the catalyst was done by increasing calcination temperatures for the final step in synthesizing the catalyst. Temperatures up to 600 °C helped to reduce the number of defects, while temperatures above 700 °C destroyed the local structure around zinc atoms and were thus undesirable. The treatment ultimately reduced the amount of surface Zn and O defects, which normally function as recombination sites, thus limiting photocatalytic activity. The catalyst was then loaded with Rh 2-y Cr y O 3 at a rate of 2.5 wt% Rh and 2 wt% Cr for better performance. [ 11 ]
Proton reduction catalysts based on earth-abundant elements [ 12 ] [ 13 ] carry out one side of the water-splitting half-reaction.
A mole of octahedral nickel(II) complex, [Ni(bztpen)] 2+ (bztpen = N-benzyl-N,N’,N’-tris(pyridine-2-ylmethyl)ethylenediamine) produced 308,000 moles of hydrogen over 60 hours of electrolysis with an applied potential of -1.25 V vs. standard hydrogen electrode . [ 14 ]
Ru(II) with three 2,2'-bipyridine ligands is a common compound for photosensitization, used for photocatalytic oxidative transformations like water splitting. However, the bipyridine degrades under the strongly oxidative conditions, which causes the concentration of Ru(bpy) 3 2+ to diminish. Measurement of the degradation is difficult with UV-Vis spectroscopy, but MALDI MS can be used instead. [ 15 ]
Cobalt -based photocatalysts have been reported, [ 16 ] including tris( bipyridine ) cobalt(II), compounds of cobalt ligated to certain cyclic polyamines , and some cobaloximes .
In 2014 researchers announced an approach that connected a chromophore to part of a larger organic ring that surrounded a cobalt atom. The process is less efficient than a platinum catalyst, although cobalt is less expensive, potentially reducing costs. The process uses one of two supramolecular assemblies based on Co(II)-templated coordination of Ru(bpy) 3 2+ (bpy = 2,2′-bipyridyl) analogues as photosensitizers and electron donors to a cobaloxime macrocycle . The Co(II) centers of both assemblies are high spin, in contrast to most previously described cobaloximes. Transient absorption optical spectroscopies indicate that charge recombination occurs through multiple ligand states within the photosensitizer modules. [ 17 ] [ 18 ]
Bismuth vanadate (BiVO 4 ) is a visible-light-driven photocatalyst with a bandgap of 2.4 eV. [ 19 ] [ 20 ] BiVO 4 has demonstrated efficiencies of 5.2% for flat thin films [ 21 ] [ 22 ] and 8.2% for core-shell WO 3 @BiVO 4 nanorods with thin absorbers. [ 23 ] [ 24 ] [ 25 ]
Bismuth oxides are characterized by visible light absorption properties, just like vanadates . [ 26 ] [ 27 ]
Tungsten diselenide has photocatalytic properties that might be a key to more efficient electrolysis. [ 28 ]
Systems based on III-V semiconductors , such as InGaP , enable solar-to-hydrogen efficiencies of up to 14%. [ 29 ] Challenges include long-term stability and cost.
2-dimensional semiconductors such as MoS 2 are actively researched as potential photocatalysts. [ 30 ] [ 31 ]
An aluminum‐based metal-organic framework made from 2‐aminoterephthalate can be modified by incorporating Ni 2+ cations into the pores through coordination with the amino groups. [ 32 ]
Organic semiconductor photocatalysts, in particular porous organic polymers (POPs), have attracted attention due to their low cost, low toxicity, and tunable light absorption compared with inorganic counterparts. [ 33 ] [ 34 ] [ 35 ] They display high porosity , low density, diverse composition, facile functionalization, high chemical/thermal stability, as well as high surface areas. [ 36 ] Efficient conversion of hydrophobic polymers into hydrophilic polymer nano-dots (Pdots) increased polymer-water interfacial contact, which significantly improved performance. [ 37 ] [ 38 ] [ 39 ]
Beweries, et al., developed a light-driven "closed cycle of water splitting using ansa-titanocene(III/IV) triflate complexes". [ 40 ]
An indium gallium nitride ( In x Ga 1- x N ) photocatalyst achieved a solar-to-hydrogen efficiency of 9.2% from pure water and concentrated sunlight. The efficiency is due to the synergistic effects of promoting hydrogen–oxygen evolution and inhibiting recombination by operating at an optimal reaction temperature (~70 °C), powered by harvesting previously wasted infrared light . An STH efficiency of about 7% was realized from tap water and seawater, and an efficiency of 6.2% in a larger-scale system with a solar light capacity of 257 watts. [ 41 ]
Solid solutions Cd 1- x Zn x S with different Zn concentrations (0.2 < x < 0.35) have been investigated for the production of hydrogen from aqueous solutions containing SO 3 2− / S 2− as sacrificial reagents under visible light. [ 42 ] Textural, structural and surface catalyst properties were determined by N 2 adsorption isotherms, UV–vis spectroscopy, SEM and XRD, and related to the activity results in hydrogen production from water splitting under visible light. It was reported that the crystallinity and energy band structure of the Cd 1- x Zn x S solid solutions depend on their Zn atomic concentration. The hydrogen production rate increased gradually as the Zn concentration of the photocatalysts increased from 0.2 to 0.3. A subsequent increase in the Zn fraction up to 0.35 reduced production. Variation in photoactivity was analyzed in terms of changes in crystallinity, level of the conduction band and light absorption ability of the Cd 1- x Zn x S solid solutions derived from their Zn atomic concentration.
A photocathode is a surface engineered to convert light ( photons ) into electrons using the photoelectric effect . Photocathodes are important in accelerator physics where they are utilised in a photoinjector to generate high brightness electron beams. Electron beams generated with photocathodes are commonly used for free electron lasers and for ultrafast electron diffraction . Photocathodes are also commonly used as the negatively charged electrode in a light detection device such as a photomultiplier , phototube and image intensifier .
Quantum efficiency is a unitless number that measures the sensitivity of the photocathode to light. It is the ratio of the number of electrons emitted to the number of incident photons. [ 1 ] This property depends on the wavelength of light being used to illuminate the photocathode. For many applications, QE is the most important property as the photocathodes are used solely for converting photons into an electrical signal.
Quantum efficiency may be calculated from photocurrent ( I {\displaystyle I} ), laser power ( P laser {\displaystyle P_{\text{laser}}} ), and either the photon energy ( E photon {\displaystyle E_{\text{photon}}} ) or laser wavelength ( λ laser {\displaystyle \lambda _{\text{laser}}} ) using the following equation. [ 1 ] [ 2 ]
QE = N electron / N photon = ( I · E photon ) / ( P laser · e ) ≈ ( I [amps] · 1240 ) / ( P laser [watts] · λ laser [nm] )
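The relation above can be evaluated numerically; a minimal sketch, in which the function name and example values are illustrative rather than taken from any particular instrument:

```python
# Sketch: quantum efficiency from photocurrent, laser power, and wavelength.
# Constants are CODATA values; the example numbers are hypothetical.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
E = 1.602176634e-19  # elementary charge, C

def quantum_efficiency(current_a, power_w, wavelength_nm):
    """QE = electrons emitted per second / photons incident per second."""
    photon_energy_j = H * C / (wavelength_nm * 1e-9)
    electrons_per_s = current_a / E
    photons_per_s = power_w / photon_energy_j
    return electrons_per_s / photons_per_s

# Example: 1 uA of photocurrent from 1 mW of 520 nm light -> QE of ~0.24%.
qe = quantum_efficiency(1e-6, 1e-3, 520.0)
# The shorthand QE ~= I[A] * 1240 / (P[W] * lambda[nm]) agrees to ~0.02%,
# since hc/e ~= 1240 eV*nm.
qe_approx = 1e-6 * 1240 / (1e-3 * 520.0)
```

The factor 1240 in the shorthand is simply hc expressed in eV·nm, which is why the approximation tracks the exact formula so closely.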
For some applications, the initial momentum distribution of emitted electrons is important and the mean transverse energy (MTE) and thermal emittance are popular metrics for this. The MTE is the variance of the transverse momentum in a direction along the photocathode's surface and is most commonly reported in units of milli-electron volts. [ 3 ]
MTE = ⟨ p ⊥ 2 ⟩ 2 m e {\displaystyle {\text{MTE}}={\frac {\langle p_{\perp }^{2}\rangle }{2m_{e}}}}
In high brightness photoinjectors, the MTE helps to determine the initial emittance of the beam which is the area in phase space occupied by the electrons. [ 4 ] The emittance ( ε {\displaystyle \varepsilon } ) can be calculated from MTE and the laser spot size on the photocathode ( σ x {\displaystyle \sigma _{x}} ) using the following equation.
ε = σ x MTE m e c 2 {\displaystyle \varepsilon =\sigma _{x}{\sqrt {\frac {\text{MTE}}{m_{e}c^{2}}}}}
where m e c 2 {\displaystyle m_{e}c^{2}} is the rest mass of an electron. In commonly used units, this is as follows.
ε [μm] ≈ σ x [μm] · √( MTE [meV] / ( 511 × 10 6 ) )
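The practical-units formula can be sketched in code; the function name and example values below are illustrative:

```python
import math

M_E_C2_MEV = 511e6  # electron rest energy, ~511 keV expressed in meV

def emittance_um(sigma_x_um, mte_mev):
    """Transverse emittance (um) from rms laser spot size (um) and MTE (meV),
    using eps = sigma_x * sqrt(MTE / m_e c^2)."""
    return sigma_x_um * math.sqrt(mte_mev / M_E_C2_MEV)

# Example: a 100 um rms spot and a 25 meV MTE (near the room-temperature
# thermal limit) give an emittance of about 0.022 um.
eps = emittance_um(100.0, 25.0)
```

Because the emittance scales with the square root of MTE, halving the MTE buys only about a 30% reduction in emittance at fixed spot size.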
Because of the scaling of transverse emittance with MTE, it is sometimes useful to write the equation in terms of a new quantity called the thermal emittance. [ 5 ] The thermal emittance is derived from MTE using the following equation.
ε th = MTE m e c 2 {\displaystyle \varepsilon _{\text{th}}={\sqrt {\frac {\text{MTE}}{m_{e}c^{2}}}}}
It is most often expressed in units of μm/mm, giving the growth of emittance (in μm) as the laser spot size grows (measured in mm).
An equivalent definition of MTE is the temperature of electrons emitted in vacuum. [ 6 ] The MTE of electrons emitted from commonly used photocathodes, such as polycrystalline metals, is limited by the excess energy (the difference between the energy of the incident photons and the photocathode's work function) provided to the electrons. To limit MTE, photocathodes are often operated near the photoemission threshold, where the excess energy tends to zero. In this limit, the majority of photoemission comes from the tail of the Fermi distribution. Therefore, MTE is thermally limited to k B T {\displaystyle k_{B}T} , where k B {\displaystyle k_{B}} is the Boltzmann constant and T {\displaystyle T} is the temperature of electrons in the solid. [ 7 ]
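The thermal limit k B T can be put in the same meV units used for MTE; a minimal sketch (function name is illustrative):

```python
K_B = 1.380649e-23    # Boltzmann constant, J/K
EV = 1.602176634e-19  # joules per electron volt

def thermal_mte_mev(temperature_k):
    """Thermal limit on MTE: k_B * T expressed in meV."""
    return K_B * temperature_k / EV * 1e3

# At room temperature (300 K) the thermal limit is roughly 26 meV.
limit = thermal_mte_mev(300.0)
```

This is why cryogenically cooled photocathodes are of interest: at 77 K the same expression gives only about 6.6 meV.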
Due to conservation of transverse momentum and energy in the photoemission process, the MTE of a clean, atomically-ordered, single crystalline photocathode is determined by the material's band structure. An ideal band structure for low MTEs is one that does not allow photoemission from large transverse momentum states. [ 8 ]
Outside of accelerator physics, MTE and thermal emittance play a role in the resolution of proximity-focused imaging devices that use photocathodes. [ 9 ] This is important for applications such as image intensifiers, wavelength converters, and the now obsolete image tubes.
Many photocathodes require excellent vacuum conditions to function and become "poisoned" when exposed to contaminants. Additionally, use in high-current applications slowly damages the materials through ion back-bombardment. These effects are quantified by the lifetime of the photocathode. Cathode death is modeled as a decaying exponential as a function of either time or extracted charge, and the lifetime is the time constant of that exponential. [ 10 ] [ 11 ]
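A lifetime can be extracted from a series of QE measurements under the single-exponential model described above; a hedged sketch, with a hypothetical function name and made-up data:

```python
import math

def lifetime_from_decay(times, qes):
    """Estimate the 1/e lifetime tau from QE measurements, assuming the
    single-exponential decay QE(t) = QE0 * exp(-t / tau).
    Uses a log-linear least-squares fit; tau = -1 / slope."""
    ys = [math.log(q) for q in qes]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    return -1.0 / slope

# Hypothetical QE readings taken every 10 hours of operation:
times_h = [0.0, 10.0, 20.0, 30.0]
qes = [0.050, 0.045, 0.041, 0.037]
tau_h = lifetime_from_decay(times_h, qes)
```

The same fit applies unchanged when lifetime is tracked against extracted charge instead of time; only the x-axis variable changes.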
For many years the photocathode was the only practical method for converting light to an electron current. As such, it functioned as a form of 'electric film' and shared many characteristics with photography. It was therefore the key element in opto-electronic devices, such as TV camera tubes like the orthicon and vidicon, and in image tubes such as intensifiers , converters, and dissectors . Simple phototubes were used for motion detectors and counters.
Phototubes have been used for years in movie projectors to read the sound tracks on the edge of movie film. [ 12 ]
The more recent development of solid state optical devices such as photodiodes has reduced the use of photocathodes to cases where they still remain superior to semiconductor devices.
Photocathodes operate in a vacuum, so their design parallels vacuum tube technology. Since most cathodes are sensitive to air, the construction of photocathodes typically occurs after the enclosure has been evacuated. In operation, the photocathode requires an electric field with a nearby positive anode to assure electron emission. Molecular beam epitaxy is widely applied in today's manufacturing of photocathodes. By using a substrate with matched lattice parameters, crystalline photocathodes can be grown, so that emitted electrons originate from the same position in the lattice's Brillouin zone, yielding high-brightness electron beams.
Photocathodes divide into two broad groups: transmission and reflective. A transmission type is typically a coating upon a glass window in which the light strikes one surface and electrons exit from the opposite surface. A reflective type is typically formed on an opaque metal electrode base, where the light enters and the electrons exit from the same side. A variation is the double reflection type, where the metal base is mirror-like, causing light that passed through the photocathode without causing emission to be bounced back for a second try. This mimics the retina of many mammals.
The effectiveness of a photocathode is commonly expressed as its quantum efficiency, the ratio of emitted electrons to impinging quanta (of light). The efficiency also varies with construction and can be improved with a stronger electric field.
The surface of photocathodes can be characterized by various surface sensitive techniques like scanning tunneling microscopy (STM) and X-ray photoelectron spectroscopy .
Although a plain metallic cathode will exhibit photoelectric properties, the specialized coating greatly increases the effect. A photocathode usually consists of alkali metals with very low work functions .
The coating releases electrons much more readily than the underlying metal, allowing it to detect the low-energy photons in infrared radiation. The lens transmits the radiation from the object being viewed to a layer of coated glass. The photons strike the metal surface and transfer electrons to its rear side. The freed electrons are then collected to produce the final image. | https://en.wikipedia.org/wiki/Photocathode |
The Photochemical Reflectance Index ( PRI ) is a reflectance measurement developed by John Gamon during his tenure as a postdoctoral fellow supervised by Christopher Field at the Carnegie Institution for Science at Stanford University. The PRI is sensitive to changes in carotenoid pigments (e.g. xanthophyll pigments) in live foliage . Carotenoid pigments are indicative of photosynthetic light use efficiency, or the rate of carbon dioxide uptake by foliage per unit energy absorbed. As such, it is used in studies of vegetation productivity and stress. Because the PRI measures plant responses to stress, it can be used to assess general ecosystem health using satellite data or other forms of remote sensing . Applications include vegetation health in evergreen shrublands , forests, and agricultural crops prior to senescence . PRI is defined by the following equation using reflectance (ρ) at 531 and 570 nm wavelength: PRI = (ρ 531 − ρ 570 ) / (ρ 531 + ρ 570 )
Some authors use the reversed band order, (ρ 570 − ρ 531 ) / (ρ 570 + ρ 531 ), which flips the sign of the index.
The values range from –1 to 1.
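As a normalized difference of two reflectance bands, the index is straightforward to compute; a minimal sketch with a hypothetical function name and made-up canopy reflectances:

```python
def pri(rho_531, rho_570):
    """Photochemical Reflectance Index from reflectances at 531 nm and 570 nm.
    A normalized difference, so the result is bounded between -1 and 1."""
    return (rho_531 - rho_570) / (rho_531 + rho_570)

# Hypothetical canopy reflectances:
value = pri(0.05, 0.04)  # about 0.11
```

Because both inputs are non-negative reflectances, the numerator can never exceed the denominator in magnitude, which is what bounds the index to [–1, 1].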
| https://en.wikipedia.org/wiki/Photochemical_Reflectance_Index
Photochemical action plots are a scientific tool used to understand the effects of different wavelengths of light on photochemical reactions . The methodology involves exposing a reaction solution to the same number of photons at varying monochromatic wavelengths, monitoring the conversion or reaction yield of starting materials and/or reaction products. Such global high-resolution analysis of wavelength-dependent chemical reactivity has revealed that maxima in absorbance and reactivity often do not align. [ 1 ] Photochemical action plots are historically connected to (biological) action spectra .
The study of biological responses to specific wavelengths dates back to the late 19th century. Research primarily focused on assessing photodamage from solar radiation using broad-band lamps and narrow filters. These studies quantified effects such as cell viability, [ 2 ] production of erythema, [ 3 ] vitamin D3 degradation, [ 4 ] [ 5 ] DNA changes, [ 6 ] [ 7 ] and skin cancer appearance. [ 8 ] The first biological action spectrum was recorded by Engelmann , who used a prism to produce different colors of light and then illuminated cladophora in a bacteria suspension. He discovered the effects of different light wavelengths on photosynthesis , marking the first recorded action spectrum of photosynthesis. [ 9 ]
Critical evaluations of active wavelength regions in these studies helped identify contributing chromophores to processes such as photosynthesis. These chromophores are key for converting solar energy into chemical energy , with their absorption closely matching the rate of photosynthesis, usually determined by oxygen production or carbon fixation. [ 10 ] This correlation led to the discovery of chlorophyll as a key chromophore in plant growth. Such studies have also been instrumental in identifying DNA as the core genetic material, [ 11 ] key wavelengths leading to skin cancer, [ 12 ] the transparent optical window of biological tissue, [ 13 ] and the influence of color on circadian rhythms. [ 14 ]
In the late 20th century, action spectra became essential in developing optical devices for photocatalysis [ 15 ] and photovoltaics , [ 16 ] particularly in measuring photocurrent efficiency at various wavelengths. These studies have been vital in understanding primary contributors to photocurrent generation, [ 17 ] [ 18 ] leading to advancements in materials, [ 19 ] [ 20 ] morphologies, [ 21 ] [ 22 ] and device designs [ 23 ] [ 24 ] for improved solar energy capture and utilization.
In photochemistry, action spectra have been mainly used in photodissociation studies. These involve a monochromatic light source, often a laser, coupled with a mass spectrometer to record wavelength-dependent ion dissociation in gaseous phases. [ 25 ] These spectra help identify contributing chromophores in molecular systems, [ 26 ] [ 27 ] characterize radical generation and unstable isomers , [ 28 ] [ 29 ] and understand higher state electron dynamics. [ 30 ] [ 31 ]
The field underwent a transformation when a team led by Barner-Kowollik and Gescheidt recorded the first modern-day photochemical action plot using a tuneable monochromatic nanosecond pulsed laser system, discovering a strong mismatch between photochemical reactivity and absorptivity and marking a critical advancement in mapping wavelength-dependent conversions in photoinduced polymerizations. [ 32 ] Following this, numerous photochemical action plots have been recorded in various molecular and polymerization systems. [ 33 ] [ 34 ]
Key differences between traditional (biological) action spectra and modern photochemical action plots lie in the precision resolution of wavelengths (monochromaticity) and that an exact number of photons at each wavelength is applied coupled with the fact that covalent bond forming reactions were investigated for the first time. [ 32 ]
In the field of photochemical analysis, it is common to measure the extinction of chemicals with high precision, often at the sub-nanometer scale, using UV/Vis spectroscopy . To understand fundamental relationships between a chemical's absorbance and its photoreactivity, a detailed analysis of the reactivity at a similar level of resolution is required. [ 35 ] Traditional methods using broadly emitting light sources or filters have inherent limitations in resolving true wavelength dependence in photoreactivity. [ 36 ] [ 37 ] [ 38 ] [ 39 ] To record an action plot, a wavelength-tuneable laser system is employed, capable of delivering a stable number of photons at each wavelength. [ 40 ] The photoreactive reaction mixture is divided into aliquots, each subjected independently to monochromatic light. The yield or conversion of the photochemical process is subsequently measured using techniques such as UV–Vis absorption spectroscopy or nuclear magnetic resonance (NMR) spectroscopy.
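Delivering the same number of photons at each wavelength is the core bookkeeping of an action plot, since E photon = hc/λ means equal-energy pulses at different wavelengths carry different photon counts. A minimal sketch (function names and values are illustrative):

```python
H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8    # speed of light, m/s

def photons_in_pulse(energy_j, wavelength_nm):
    """Number of photons in a pulse of given energy, E_photon = hc/lambda."""
    return energy_j / (H * C / (wavelength_nm * 1e-9))

def energy_for_photons(n_photons, wavelength_nm):
    """Pulse energy required to deliver n_photons at a given wavelength."""
    return n_photons * (H * C / (wavelength_nm * 1e-9))

# To expose each aliquot to the same photon count, shorter wavelengths need
# proportionally more energy: E scales as 1/lambda.
e_350 = energy_for_photons(1e16, 350.0)
e_450 = energy_for_photons(1e16, 450.0)
```

Dose-matching on photons rather than energy is what makes conversions at different wavelengths directly comparable in the resulting plot.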
A key finding of modern photochemical action plots [ 32 ] is that the absorption spectrum of a photoreactive molecule or reaction mixture correlates poorly with photochemical reactivity as a function of wavelength in many cases. Initial studies showed a significant red-shift in photopolymerization yield compared to the absorption spectrum of the employed photoinitiators, which showed extremely low absorptivity in those regions. This mismatch between absorption spectra and photochemical action plots has by now been observed in a wide array of photoreactive systems. [ 41 ] [ 42 ] [ 43 ] A prominent example is the photoinduced [2+2] cycloaddition of the stilbene derivative, styrypyrene, which exhibited an 80 nm discrepancy between the action plot and absorption spectrum. [ 33 ] Current research focuses on understanding the reasons behind these frequently observed mismatches.
For photochemical applications, the consequences of the absorptivity/reactivity mismatch are far reaching, as only photochemical action plots can reveal the most effective wavelength for a given process, moving away from the past paradigm that absorption spectra provide guidance for selecting the most effective wavelength. | https://en.wikipedia.org/wiki/Photochemical_action_plots |
A photochemical logic gate is based on photochemical intersystem crossing and molecular electronic transitions between photochemically active molecules, which together allow logic gates to be produced. [ 1 ]
The OR gate is based on the activation of molecule A, which then passes an electron / photon to molecule C's excited-state orbitals (C*). The electron from molecule A intersystem-crosses to C* via the excited-state orbitals of B and is eventually utilised as a signal in the C* hν c emission. The ' OR ' gate uses two inputs of light (photons) to molecule A in two separate electron transfer chains, both of which are capable of transferring to C* and thus producing the output of an OR gate. Therefore, if either electron transfer chain is activated, molecule C's excitation produces a valid output emission.
Excitation A→A* by an hν a photon, whereby the promoted electron is passed down to the C* molecular orbital. A second photon applied to the system ( hν c 2 ) causes the excitation of the electron in the C* molecular orbital to the C** molecular orbital, analogous to pump probe spectroscopy.
Above: an energy level diagram illustrating the principle of pump probe spectroscopy, the excitation of an excited state.
The AND gate is produced by the necessity of both the A→A* and the C*→C** excitations occurring at the same time: the inputs hν a and hν c 2 are simultaneously required.
To prevent erroneous emissions of light from a single input to the AND gate, it would be necessary to have an electron transfer series able to accept any electrons (energy) from the C* energy level. The electron transfer series would terminate with a low-energy, non-radiative decay.
There are two alternatives for producing an AND gate using molecular photophysics.
(1) The emission produced by the electron dropping from C*→C ( hν c ) is not a valid output frequency. The emission from the C** molecular orbital ( hν c + hν c 2 , hν c 3 ) is a valid output signal, to be used in subsequent logic gates arranged to respond to the C**→C emission.
(2) A second photon input triggers the rapid conversion of a molecule used to complete the electron transfer chain. A very complex molecule such as a protein can be engineered to possess high strain energies, so that in the absence of the second light frequency molecule B is inactive (B). The second photon input triggers B→B', where the forward rate constant is much smaller than the reverse. If such a molecule is used as molecule B, the transfer chain can be switched on and off.
To stop the electron transfer chain from completing and producing output signals, the input of a photon , hν c 2 , is used to produce a 'pump probe spectroscopy' effect by promoting an electron in the electron transfer chain. The fall of the pump-probe-promoted electron produces an output that is quenched down an electron transfer chain.
An alternative is similar to the AND gate alternative: an input causes a change in molecular structure that breaks the electron transfer chain by preventing the smooth energy transfer of electrons.
Photochemical reduction of carbon dioxide harnesses solar energy to convert CO 2 into higher-energy products. Environmental interest in producing artificial systems is motivated by recognition that CO 2 is a greenhouse gas . The process has not been commercialized.
Photochemical reduction involves chemical reduction (redox) generated from the photoexcitation of another molecule, called a photosensitizer . To harness the sun's energy, the photosensitizer must be able to absorb light within the visible and ultraviolet spectrum. [ 1 ] Molecular sensitizers that meet this criterion often include a metal center, as the d-orbital splitting in organometallic species often falls within the energy range of far-UV and visible light. The reduction process begins with excitation of the photosensitizer, as mentioned. This causes the movement of an electron from the metal center into the functional ligands . This movement is termed a metal-to-ligand charge transfer (MLCT). Back-electron transfer from the ligands to the metal after the charge transfer, which yields no net result, is prevented by including an electron-donating species in solution. Successful photosensitizers have a long-lived excited state, usually due to the interconversion from singlet to triplet states, that allow time for electron donors to interact with the metal center. [ 2 ] Common donors in photochemical reduction include triethylamine (TEA), triethanolamine (TEOA), and 1-benzyl-1,4-dihydronicotinamide (BNAH).
After excitation, CO 2 coordinates or otherwise interacts with the inner coordination sphere of the reduced metal. Common products include formic acid , carbon monoxide , and methanol . Note that light absorption and catalytic reduction may occur at the same metal center or on different metal centers. That is, a photosensitizer and catalyst may be tethered through an organic linkage that provides for electronic communication between the species. In this case, the two metal centers form a bimetallic supramolecular complex, and the excited electron that had resided on the functional ligands of the photosensitizer passes through the ancillary ligands to the catalytic center, which becomes a one-electron reduced (OER) species. The advantage of dividing the two processes among different centers is the ability to tune each center for a particular task, whether through selecting different metals or ligands.
In the 1980s, Lehn observed that Co(I) species were produced in solutions containing CoCl 2 , 2,2'-bipyridine (bpy), a tertiary amine, and a Ru(bpy) 3 Cl 2 photosensitizer. The high affinity of CO 2 for cobalt centers led both him and Ziessel to study cobalt centers as electrocatalysts for reduction. In 1982, they reported CO and H 2 as products from the irradiation of a solution containing 700 ml of CO 2 , Ru(bpy) 3 and Co(bpy). [ 4 ]
Since the work of Lehn and Ziessel, several catalysts have been paired with the Ru(bpy) 3 photosensitizer. [ 5 ] [ 6 ] When paired with methylviologen, cobalt, and nickel-based catalysts, carbon monoxide and hydrogen gas are observed as products.
Paired with rhenium catalysts, carbon monoxide is observed as the major product, and with ruthenium catalysts formic acid is observed. Some product selection is attainable through tuning of the reaction environment. Other photosensitizers have also been employed as catalysts. They include FeTPP (TPP=5,10,15,20-tetraphenyl-21H,23H-porphine) and CoTPP, both of which produce CO while the latter produces formate also. Non-metal photocatalysts include pyridine and N-heterocyclic carbenes. [ 7 ] [ 8 ]
In August 2022, a photocatalyst based on lead–sulfur (Pb–S) bonds was developed, with promising results. [ 10 ] | https://en.wikipedia.org/wiki/Photochemical_reduction_of_carbon_dioxide
Photochemistry is the branch of chemistry concerned with the chemical effects of light. Generally, this term is used to describe a chemical reaction caused by absorption of ultraviolet ( wavelength from 100 to 400 nm ), visible (400–750 nm), or infrared radiation (750–2500 nm). [ 1 ]
In nature, photochemistry is of immense importance as it is the basis of photosynthesis, vision, and the formation of vitamin D with sunlight. [ 2 ] It is also responsible for the appearance of DNA mutations leading to skin cancers. [ 3 ]
Photochemical reactions proceed differently than temperature-driven reactions. Photochemical paths access high-energy intermediates that cannot be generated thermally, thereby overcoming large activation barriers in a short period of time, and allowing reactions otherwise inaccessible by thermal processes. Photochemistry can also be destructive, as illustrated by the photodegradation of plastics.
Photoexcitation is the first step in a photochemical process where the reactant is elevated to a state of higher energy, an excited state . The first law of photochemistry, known as the Grotthuss–Draper law (for chemists Theodor Grotthuss and John W. Draper ), states that light must be absorbed by a chemical substance in order for a photochemical reaction to take place. According to the second law of photochemistry, known as the Stark–Einstein law (for physicists Johannes Stark and Albert Einstein ), for each photon of light absorbed by a chemical system, no more than one molecule is activated for a photochemical reaction, as defined by the quantum yield . [ 4 ] [ 5 ]
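The quantum yield invoked by the Stark–Einstein law can be estimated from the amount of product formed and the absorbed light dose; a hedged sketch, with an illustrative function name and made-up example values:

```python
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def quantum_yield(moles_reacted, absorbed_energy_j, wavelength_nm):
    """Quantum yield = molecules transformed / photons absorbed.
    For a simple one-photon process the Stark-Einstein law caps this at 1;
    chain reactions can exceed it."""
    photon_energy_j = H * C / (wavelength_nm * 1e-9)
    photons_absorbed = absorbed_energy_j / photon_energy_j
    return moles_reacted * N_A / photons_absorbed

# Example: 1 umol of product after absorbing 1 J at 313 nm -> yield ~0.38.
phi = quantum_yield(1e-6, 1.0, 313.0)
```

A measured yield well below 1 indicates competing deactivation pathways (fluorescence, internal conversion), while a yield above 1 signals a chain process.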
When a molecule or atom in the ground state (S 0 ) absorbs light, one electron is excited to a higher orbital level. This electron maintains its spin according to the spin selection rule; other transitions would violate the law of conservation of angular momentum . The excitation to a higher singlet state can be from HOMO to LUMO or to a higher orbital, so that singlet excitation states S 1 , S 2 , S 3 ... at different energies are possible.
Kasha's rule stipulates that higher singlet states would quickly relax by radiationless decay or internal conversion (IC) to S 1 . Thus, S 1 is usually, but not always, the only relevant singlet excited state. This excited state S 1 can further relax to S 0 by IC, but also by an allowed radiative transition from S 1 to S 0 that emits a photon; this process is called fluorescence .
Alternatively, it is possible for the excited state S 1 to undergo spin inversion and to generate a triplet excited state T 1 having two unpaired electrons with the same spin. This violation of the spin selection rule is possible by intersystem crossing (ISC) of the vibrational and electronic levels of S 1 and T 1 . According to Hund's rule of maximum multiplicity , this T 1 state would be somewhat more stable than S 1 .
This triplet state can relax to the ground state S 0 by radiationless ISC or by a radiation pathway called phosphorescence . This process implies a change of electronic spin, which is forbidden by spin selection rules, making phosphorescence (from T 1 to S 0 ) much slower than fluorescence (from S 1 to S 0 ). Thus, triplet states generally have longer lifetimes than singlet states. These transitions are usually summarized in a state energy diagram or Jablonski diagram , the paradigm of molecular photochemistry.
These excited species, either S 1 or T 1 , have a half-empty low-energy orbital, and are consequently more oxidizing than the ground state. But at the same time, they have an electron in a high-energy orbital, and are thus more reducing . In general, excited species are prone to participate in electron transfer processes. [ 6 ]
Photochemical reactions require a light source that emits wavelengths corresponding to an electronic transition in the reactant. In the early experiments (and in everyday life), sunlight was the light source, although it is polychromatic. [ 7 ] Mercury-vapor lamps are more common in the laboratory. Low-pressure mercury-vapor lamps mainly emit at 254 nm. For polychromatic sources, wavelength ranges can be selected using filters. Alternatively, laser beams are usually monochromatic (although two or more wavelengths can be obtained using nonlinear optics ), and LEDs and Rayonet lamps have relatively narrow emission bands that can be used efficiently to obtain approximately monochromatic beams.
The emitted light must reach the targeted functional group without being blocked by the reactor, medium, or other functional groups present. For many applications, quartz is used for the reactors as well as to contain the lamp. Pyrex absorbs at wavelengths shorter than 275 nm. The solvent is an important experimental parameter. Solvents are potential reactants, and for this reason, chlorinated solvents are avoided because the C–Cl bond can lead to chlorination of the substrate. Strongly-absorbing solvents prevent photons from reaching the substrate. Hydrocarbon solvents absorb only at short wavelengths and are thus preferred for photochemical experiments requiring high-energy photons. Solvents containing unsaturation absorb at longer wavelengths and can usefully filter out short wavelengths. For example, cyclohexane and acetone "cut off" (absorb strongly) at wavelengths shorter than 215 and 330 nm, respectively.
Typically, the wavelength employed to induce a photochemical process is selected based on the absorption spectrum of the reactive species, most often the absorption maximum. In recent years, however, it has been demonstrated that, in the majority of bond-forming reactions, the absorption spectrum does not allow selecting the optimum wavelength to achieve the highest reaction yield based on absorptivity. This fundamental mismatch between absorptivity and reactivity has been elucidated with so-called photochemical action plots . [ 8 ] [ 9 ]
Continuous-flow photochemistry offers multiple advantages over batch photochemistry. Photochemical reactions are driven by the number of photons that are able to activate molecules causing the desired reaction. The large surface-area-to-volume ratio of a microreactor maximizes the illumination, and at the same time allows for efficient cooling, which decreases the thermal side products. [ 10 ]
In the case of photochemical reactions, light provides the activation energy . Simplistically, light is one mechanism for providing the activation energy required for many reactions. If laser light is employed, it is possible to selectively excite a molecule so as to produce a desired electronic and vibrational state. [ 11 ] Equally, the emission from a particular state may be selectively monitored, providing a measure of the population of that state. If the chemical system is at low pressure, this enables scientists to observe the energy distribution of the products of a chemical reaction before the differences in energy have been smeared out and averaged by repeated collisions.
The absorption of a photon by a reactant molecule may also permit a reaction to occur not just by bringing the molecule to the necessary activation energy, but also by changing the symmetry of the molecule's electronic configuration, enabling an otherwise-inaccessible reaction path, as described by the Woodward–Hoffmann selection rules . A [2+2] cycloaddition reaction is one example of a pericyclic reaction that can be analyzed using these rules or by the related frontier molecular orbital theory.
Some photochemical reactions are several orders of magnitude faster than thermal reactions; reactions as fast as 10 −9 seconds and associated processes as fast as 10 −15 seconds are often observed.
The photon can be absorbed directly by the reactant or by a photosensitizer , which absorbs the photon and transfers the energy to the reactant. The opposite process, when a photoexcited state is deactivated by a chemical reagent, is called quenching .
Most photochemical transformations occur through a series of simple steps known as primary photochemical processes. One common example of these processes is the excited state proton transfer.
Examples of photochemical organic reactions are electrocyclic reactions , radical reactions , photoisomerization , and Norrish reactions . [ 20 ] [ 21 ]
Alkenes undergo many important reactions that proceed via a photon-induced π to π* transition. The first electronic excited state of an alkene lacks the π-bond , so that rotation about the C–C bond is rapid and the molecule engages in reactions not observed thermally. These reactions include cis-trans isomerization and cycloaddition to another (ground state) alkene to give cyclobutane derivatives. The cis-trans isomerization of a (poly)alkene is involved in retinal , a component of the machinery of vision . The dimerization of alkenes is relevant to the photodamage of DNA , where thymine dimers are observed upon illuminating DNA with UV radiation. Such dimers interfere with transcription . The beneficial effects of sunlight are associated with the photochemically-induced retro-cyclization (decyclization) reaction of ergosterol to give vitamin D . In the DeMayo reaction , an alkene reacts with a 1,3-diketone via its enol to yield a 1,5-diketone. Still another common photochemical reaction is Howard Zimmerman 's di-π-methane rearrangement .
In an industrial application, about 100,000 tonnes of benzyl chloride are prepared annually by the gas-phase photochemical reaction of toluene with chlorine . [ 22 ] The light is absorbed by chlorine molecules, the low energy of this transition being indicated by the yellowish color of the gas. The photon induces homolysis of the Cl-Cl bond, and the resulting chlorine radical converts toluene to the benzyl radical:
Mercaptans can be produced by photochemical addition of hydrogen sulfide (H 2 S) to alpha olefins .
Coordination complexes and organometallic compounds are also photoreactive. These reactions can entail cis-trans isomerization. More commonly, photoreactions result in dissociation of ligands, since the photon excites an electron on the metal to an orbital that is antibonding with respect to the ligands. Thus, metal carbonyls that resist thermal substitution undergo decarbonylation upon irradiation with UV light. UV-irradiation of a THF solution of molybdenum hexacarbonyl gives the THF complex, which is synthetically useful:
In a related reaction, photolysis of iron pentacarbonyl affords diiron nonacarbonyl (see figure):
Select photoreactive coordination complexes can undergo oxidation-reduction processes via single electron transfer. This electron transfer can occur within the inner or outer coordination sphere of the metal. [ 23 ]
Although bleaching has long been practiced, the first photochemical reaction was described by Trommsdorff in 1834. [ 24 ] He observed that crystals of the compound α-santonin when exposed to sunlight turned yellow and burst. In a 2007 study the reaction was described as a succession of three steps taking place within a single crystal. [ 25 ]
The first step is a rearrangement reaction to a cyclopentadienone intermediate ( 2 ), the second one a dimerization in a Diels–Alder reaction ( 3 ), and the third one an intramolecular [2+2] cycloaddition ( 4 ). The bursting effect is attributed to a large change in crystal volume on dimerization.
The organization of these conferences is facilitated by the International Foundation for Photochemistry. [ 26 ]
Photochlorination is a chlorination reaction that is initiated by light. Usually a C-H bond is converted to a C-Cl bond. Photochlorination is carried out on an industrial scale. The process is exothermic and proceeds as a chain reaction initiated by the homolytic cleavage of molecular chlorine into chlorine radicals by ultraviolet radiation . Many chlorinated solvents are produced in this way.
Chlorination is one of the oldest known substitution reactions in chemistry. The French chemist Jean-Baptiste Dumas investigated the substitution of hydrogen by chlorine in candle wax as early as 1830. [ 1 ] He showed that for each mole of chlorine introduced into a hydrocarbon, one mole of hydrogen chloride is also formed, and noted the light-sensitivity of this reaction. [ 2 ] The idea that these reactions might be chain reactions is attributed to Max Bodenstein (1913). He proposed that the reaction of two molecules can form not only the end product but also unstable, reactive intermediates that propagate the chain. [ 3 ]
Photochlorination garnered commercial attention with the availability of cheap chlorine from chloralkali electrolysis . [ 4 ]
Chlorinated alkanes found an initial application in pharyngeal sprays, which contained chlorinated alkanes in relatively large quantities as solvents for chloramine T from 1914 to 1918. The Sharpless Solvents Corporation commissioned the first industrial photochlorination plant for the chlorination of pentane in 1929. [ 5 ] The commercial production of chlorinated paraffins for use as high-pressure additives in lubricants began around 1930. [ 6 ] Around 1935 the process was technically stable and commercially successful. [ 5 ] However, it was only in the years after World War II that a greater build-up of photochlorination capacity began. In 1950, the United States produced more than 800,000 tons of chlorinated paraffin hydrocarbons. The major products were ethyl chloride, carbon tetrachloride and dichloromethane. [ 7 ] Because of health concerns and environmental problems such as the ozone-depleting behavior of volatile chlorine compounds, the chemical industry developed alternative procedures that did not require chlorinated compounds. As a result of the subsequent replacement of chlorinated by non-chlorinated products, worldwide production volumes have declined considerably over the years. [ 6 ] [ 8 ]
Photochlorinations are usually effected in the liquid phase, usually employing chemically inert solvents .
The photochlorination of hydrocarbons is unselective, although the reactivity of the C-H bonds increases in the order primary < secondary < tertiary. At 30 °C the relative reaction rates of primary, secondary and tertiary hydrogen atoms are in a ratio of approximately 1 to 3.25 to 4.43. The C-C bonds remain unaffected. [ 9 ] [ 10 ]
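The relative per-hydrogen rates quoted above can be turned into a quick estimate of an isomer distribution. The sketch below is illustrative only: the choice of propane and its hydrogen counts are assumptions introduced for the example, not part of the source text.

```python
# Estimate the isomer distribution for the photochlorination of a simple
# alkane from the relative per-hydrogen reactivities quoted in the text
# (primary : secondary : tertiary ≈ 1 : 3.25 : 4.43 at 30 °C).

REL_RATE = {"primary": 1.0, "secondary": 3.25, "tertiary": 4.43}

def isomer_distribution(h_counts):
    """h_counts: {site type: number of equivalent H atoms}.
    Returns the expected fraction of substitution at each site type."""
    weights = {site: n * REL_RATE[site] for site, n in h_counts.items()}
    total = sum(weights.values())
    return {site: w / total for site, w in weights.items()}

# Illustrative case, propane: 6 primary H (two CH3), 2 secondary H (one CH2).
dist = isomer_distribution({"primary": 6, "secondary": 2})
print(dist)  # primary ≈ 0.48 (1-chloropropane), secondary ≈ 0.52 (2-chloropropane)
```

Even though each secondary hydrogen is over three times more reactive, the statistical weight of the six primary hydrogens makes the two products nearly equally abundant, which is the sense in which the reaction is "unselective".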
Upon irradiation the reaction proceeds through alkyl and chlorine radicals in a chain reaction according to the given scheme:
Chain termination occurs by recombination of chlorine atoms. [ 11 ] Impurities such as oxygen (present in electrochemically obtained chlorine) also cause chain termination.
The selectivity of photochlorination (with regard to substitution of primary, secondary or tertiary hydrogens) can be controlled by the interaction of the chlorine radical with the solvent, such as benzene, tert-butylbenzene or carbon disulfide. [ 12 ] Selectivity increases in aromatic solvents. [ 13 ] By varying the solvent, the ratio of substitution at primary to secondary hydrogens can be tailored to values between 1:3 and 1:31. [ 14 ] At higher temperatures, the reaction rates of primary, secondary and tertiary hydrogen atoms equalize. Therefore, photochlorination is usually carried out at lower temperatures. [ 9 ]
The photochlorination of benzene proceeds also via a radical chain reaction: [ 15 ]
In some applications, the reaction is carried out at 15 to 20 °C. At a conversion of 12 to 15% the reaction is stopped and the reaction mixture is worked up. [ 15 ]
An example of photochlorination at low temperature and ambient pressure is the chlorination of chloromethane to dichloromethane. The liquefied chloromethane (boiling point −24 °C) is mixed with chlorine in the dark and then irradiated with a mercury-vapor lamp. The resulting dichloromethane (boiling point 41 °C) is subsequently separated from chloromethane by distillation. [ 16 ]
The photochlorination of methane has a lower quantum yield than the chlorination of dichloromethane. Due to the high light intensity required, the intermediate products are directly chlorinated, so that mainly tetrachloromethane is formed. [ 16 ]
A major application of photochlorination is the production of chloroparaffins . Mixtures of complex composition consisting of several chlorinated paraffins are formed. Chlorinated paraffins have the general formula C x H (2 x − y +2) Cl y and are categorized into three groups: short chain chloroparaffins (SCCP) with 10 to 13 carbon atoms, medium chain chloroparaffins (MCCP) with chain lengths of 14 to 17 carbon atoms, and long chain chloroparaffins (LCCP) with more than 17 carbon atoms. Approximately 70% of the chloroparaffins produced are MCCPs with a degree of chlorination of 45 to 52%. The remaining 30% are divided equally between SCCPs and LCCPs. [ 6 ] Short chain chloroparaffins are highly toxic and accumulate easily in the environment. The European Union has classified SCCPs as category III carcinogens and restricted their use. [ 17 ]
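The general formula and the quoted degree of chlorination can be checked against each other: the degree of chlorination is the mass fraction of chlorine in C x H (2 x − y +2) Cl y . The specific composition below (x = 15, y = 6) is an assumption chosen for illustration because it lands in the MCCP range; it is not a composition stated in the text.

```python
# Chlorine mass fraction of a chloroparaffin with the general formula
# CxH(2x-y+2)Cly, where x is the chain length and y the number of Cl atoms.

M_C, M_H, M_CL = 12.011, 1.008, 35.453  # atomic masses, g/mol

def chlorine_fraction(x, y):
    """Mass fraction of chlorine in CxH(2x-y+2)Cly."""
    mass = M_C * x + M_H * (2 * x - y + 2) + M_CL * y
    return M_CL * y / mass

frac = chlorine_fraction(15, 6)   # illustrative MCCP composition: C15H26Cl6
print(f"{frac:.1%}")              # ≈ 50.8% Cl, inside the 45–52% MCCP range
```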
In 1985 world production was 300,000 tonnes; since then production volumes have fallen in Europe and North America. [ 18 ] In China, on the other hand, production rose sharply: China produced more than 600,000 tonnes of chlorinated paraffins in 2007, up from less than 100,000 tonnes in 2004. [ 19 ]
The quantum yield for the photochlorination of n -heptane is about 7000, for example. [ 20 ] In photochlorination plants, the quantum yield is about 100. In contrast to the thermal chlorination, which can utilize the formed reaction energy, the energy required to maintain the photochemical reaction must be constantly delivered. [ 21 ]
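The practical meaning of these large quantum yields is that very little radiant energy is needed per mole of product, since each absorbed photon starts a chain that converts many molecules. The sketch below uses the plant-scale quantum yield of about 100 quoted above; the 365 nm wavelength is an assumption for illustration (a common mercury-lamp line), not a figure from the text.

```python
# Back-of-the-envelope photon economy of a chain photoreaction:
# energy input per mole of product = (energy per mole of photons) / Φ.

H = 6.62607015e-34      # Planck constant, J·s
C = 2.99792458e8        # speed of light, m/s
N_A = 6.02214076e23     # Avogadro constant, 1/mol

def energy_per_mole_product(wavelength_m, quantum_yield):
    photon_energy = H * C / wavelength_m          # J per photon
    return photon_energy * N_A / quantum_yield    # J per mole of product

e = energy_per_mole_product(365e-9, 100)
print(f"{e / 1000:.1f} kJ per mole of product")   # ≈ 3.3 kJ/mol
```

This is why the constant energy input mentioned in the text, while unavoidable, is modest compared with the reaction enthalpy handled by the plant.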
The presence of inhibitors, such as oxygen or nitrogen oxides, must be avoided. Excessively high chlorine concentrations lead to strong absorption near the light source, which is disadvantageous. [ 14 ]
The photochlorination of toluene is selective for the methyl group. Mono- to trichlorinated products are obtained, the most important being the monosubstituted benzyl chloride , which is hydrolyzed to benzyl alcohol. Benzyl chloride can also be converted via benzyl cyanide, with subsequent hydrolysis, into phenylacetic acid . [ 22 ] [ 23 ] The disubstituted benzal chloride is converted to benzaldehyde , a popular flavorant [ 24 ] and an intermediate for the production of malachite green and other dyes. [ 25 ] The trisubstituted benzotrichloride is hydrolyzed in the synthesis of benzoyl chloride : [ 26 ]
By reaction with alcohols, benzoyl chloride can be converted into the corresponding esters. With sodium peroxide it turns into dibenzoyl peroxide , a radical initiator for polymerizations. However, the atom economy of these syntheses is poor, since stoichiometric amounts of salts are obtained.
The sulfochlorination first described by Cortes F. Reed in 1936 proceeds under almost identical conditions as the conventional photochlorination. [ 27 ] In addition to chlorine, sulfur dioxide is also introduced into the reaction mixture. The products formed are alkylsulfonyl chlorides , which are further processed into surfactants. [ 28 ]
Hydrogen chloride is formed as a by-product, as in photochlorination. Since direct sulfonation of alkanes is hardly possible, this reaction has proven useful. Because the chlorine is bound directly to the sulfur, the resulting products are highly reactive. The reaction mixture also contains secondary products: alkyl chlorides formed by simple photochlorination, as well as several sulfochlorinated products. [ 29 ]
Photobromination with elemental bromine proceeds analogously to photochlorination, also via a radical mechanism. In the presence of oxygen, the hydrogen bromide formed is partly oxidised back to bromine, resulting in an increased yield. Because of the easier dosing of elemental bromine and the higher selectivity of the reaction, photobromination is preferred over photochlorination at laboratory scale. For industrial applications, bromine is usually too expensive, as it is present in sea water only in small quantities and is produced by oxidation with chlorine. [ 30 ] [ 31 ] Instead of elemental bromine, N -bromosuccinimide is also suitable as a brominating agent. [ 32 ] The quantum yield of photobromination is usually much lower than that of photochlorination.
A photochromic lens is an optical lens that darkens on exposure to light of sufficiently high frequency, most commonly ultraviolet (UV) radiation. In the absence of activating light, the lenses return to their clear state. Photochromic lenses may be made of polycarbonate , or another plastic . Glass lenses use visible light to darken. They are principally used in glasses that are dark in bright sunlight, but clear, or more rarely, lightly tinted in low ambient light conditions. They darken significantly within about a minute of exposure to bright light and take somewhat longer to clear. A range of clear and dark transmittances is available. Two kinds of photochromic lenses were popularized, the first being glass containing silver halides. These silver-based lenses became largely obsolete with the introduction of photochromic organic compounds. The other type are plastic, usually polycarbonate combined with photochromic organic compounds. [ 1 ] These processes are reversible; once the lens is removed from strong sources of UV rays the photochromic compounds return to their transparent state.
In the silver-based technology, silver chloride or other silver halides are embedded in the lenses. They are transparent to visible light without a significant ultraviolet component, which is normal for artificial lighting. Photochromic lenses were developed by William H. Armistead and Stanley Donald Stookey at the Corning Glass Works Inc. in the 1960s. [ 2 ] The glass version of these lenses achieves its photochromic properties through the embedding of microcrystalline silver halides (usually silver chloride ) in a glass substrate. In glass lenses, in the presence of UV-A light (wavelengths of 320–400 nm), electrons from the glass combine with the colourless silver cations to form elemental silver. Because elemental silver is visible, the lenses appear darker.
In the shade, this reaction is reversed.
With the photochromic material dispersed in the glass substrate, the degree of darkening depends on the thickness of glass, which poses problems with variable-thickness lenses in prescription glasses.
In another sort of technology, organic photochromic molecules, when exposed to ultraviolet (UV) rays as in direct sunlight, undergo a structural change that causes them to absorb a significant percentage of the visible light, i.e., they darken. Plastic photochromic lenses use oxazines and naphthopyrans to achieve the reversible darkening effect. These lenses darken when exposed to ultraviolet light of the intensity present in sunlight, but not in artificial light. With plastic lenses, the material is typically embedded into the surface layer of the plastic in a uniform thickness of up to 150 μm.
Typically, photochromic lenses darken substantially in response to UV light in less than one minute, and continue to darken a little more over the next fifteen minutes. [ 3 ] The lenses begin to clear in the absence of UV light, and will be noticeably lighter within two minutes, mostly clear within five minutes, and fully back to their non-exposed state in about fifteen minutes. A report by the Institute of Ophthalmology at the University College London suggested that at their clearest photochromic lenses can absorb up to 20% of ambient light. [ 4 ]
Because photochromic compounds fade back to their clear state by a thermal process, the higher the temperature, the less dark photochromic lenses will be. This thermal effect is called "temperature dependency" and prevents these devices from achieving true sunglass darkness in very hot weather. Conversely, photochromic lenses will get very dark in cold weather conditions. Once inside, away from the triggering UV light, the cold lenses take longer to regain their transparency than warm lenses.
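The temperature dependency described above can be sketched with a toy kinetic model: the thermal fade-back is treated as a first-order process whose rate constant follows an Arrhenius law, competing with a constant photo-coloration rate under steady UV. All numerical values (pre-exponential factor, activation energy, coloration rate) are illustrative assumptions, not measured parameters of any real dye.

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def fade_rate(temp_k, a=1e8, ea=60e3):
    """First-order thermal fade-back rate constant (1/s), Arrhenius form."""
    return a * math.exp(-ea / (R * temp_k))

def steady_state_darkness(temp_k, coloration_rate=0.05):
    """Fraction of molecules in the dark form under constant UV, assuming a
    fixed photo-coloration rate competing with thermal fading."""
    k_fade = fade_rate(temp_k)
    return coloration_rate / (coloration_rate + k_fade)

# A hotter lens fades faster, so it is less dark under the same UV flux:
print(steady_state_darkness(278))  # cold day (5 °C): darker
print(steady_state_darkness(308))  # hot day (35 °C): lighter
```

The exponential temperature dependence of the fade rate is also why cold lenses, once indoors, take noticeably longer to clear than warm ones.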
A number of sunglass manufacturers and suppliers including INVU, BIkershades, Tifosi, Intercast, Oakley , ZEISS , Serengeti Eyewear , and Persol provide tinted lenses that use photochromism to go from a dark to a darker state. They are typically used for outdoor sunglasses rather than as general-purpose lenses.
Photochromism is the reversible change of color upon exposure to light. It is a transformation of a chemical species ( photoswitch ) between two forms through the absorption of electromagnetic radiation ( photoisomerization ), where each form has a different absorption spectrum. [ 1 ] [ 2 ] This reversible structural or geometric change in photochromic molecules affects their electronic configuration , molecular strain energy, and other properties. [ 3 ]
In 1867, Carl Julius Fritzsche reported the concept of photochromism, indicating that orange tetracene solution lost its color in daylight but regained it in darkness. Later, similar behavior was observed by both Edmund ter Meer [ 4 ] and Phipson. [ 5 ] Ter Meer documented the color change of the potassium salt of dinitroethane, which appeared red in daylight and yellow in the dark. Phipson also recorded that a painted gatepost appeared black during the day and white at night due to a zinc pigment, likely lithopone. [ 6 ] [ 7 ] In 1899, Willy Markwald , who studied the reversible color change of 2,3,4,4-tetrachloronaphthalen-1(4H)-one in the solid state, named this phenomenon “phototropy”. [ 8 ] However, this term was later considered misleading due to its association with the biological process “ phototropism ”. In 1950, Yehuda Hirshberg (from the Weizmann Institute of Science in Israel) proposed the term “photochromism,” derived from the Greek words phos (light) and chroma (color), which remains widely used today. [ 6 ] The phenomenon extends beyond colored compounds, encompassing systems that absorb light across a broad spectrum, from ultraviolet to infrared, and includes both rapid and slow reactions. [ 6 ] Photochromism can take place in both organic and inorganic compounds, and also has its place in biological systems (for example retinal in the vision process). The use of photochromic materials has evolved beyond protective eyewear to applications including 3D optical data storage , photocatalysis , and radiation dosimetry . [ 7 ]
Photochromism often is associated with pericyclic reactions , cis-trans isomerizations , intramolecular hydrogen transfer, intramolecular group transfers, dissociation processes and electron transfers ( oxidation-reduction ). [ 6 ] Transition metal complexes can also display photochromic properties due to linkage isomerizations . [ 9 ] [ 10 ] [ 11 ] [ 12 ]
Important properties of photochromic compounds include quantum yield , fatigue resistance, and the lifetime of the photostationary state (PSS). The quantum yield of the photochemical reaction determines the efficiency of the photochromic change relative to the amount of light absorbed. [ 13 ] In photochromic materials, the loss of the photochromic component is referred to as fatigue; it arises from processes such as photodegradation , photobleaching , photooxidation , and other side reactions. All photochromic compounds suffer from fatigue to some extent, and its rate is strongly dependent on the activating light and the sample conditions. [ 6 ] Photochromic materials have two states, and their interconversion can be controlled using different wavelengths of light. Excitation with any given wavelength of light will result in a mixture of the two states at a particular ratio, called the photostationary state. In a perfect system, there would exist wavelengths that could be used to provide 1:0 and 0:1 ratios of the isomers, but in real systems this is not possible, since the active absorbance bands always overlap to some extent. [ 13 ]
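The photostationary-state composition can be estimated from a simple rate balance: for a two-state switch A ⇌ B under monochromatic light, the PSS is reached when ε_A·Φ(A→B)·[A] = ε_B·Φ(B→A)·[B]. The sketch below assumes this standard photokinetic relation; the absorption coefficients and quantum yields are made-up illustrative values.

```python
def pss_fraction_b(eps_a, phi_ab, eps_b, phi_ba):
    """Fraction of the sample in form B at the photostationary state,
    from the rate balance eps_a*phi_ab*[A] = eps_b*phi_ba*[B]."""
    ratio_b_over_a = (eps_a * phi_ab) / (eps_b * phi_ba)
    return ratio_b_over_a / (1.0 + ratio_b_over_a)

# If A absorbs strongly at the chosen wavelength while B barely does,
# the PSS is rich in B -- but never a pure 0:1 mixture, because the
# absorbance bands of the two forms always overlap to some extent.
frac_b = pss_fraction_b(eps_a=20000, phi_ab=0.5, eps_b=1000, phi_ba=0.4)
print(frac_b)
```

Choosing a second wavelength at which B absorbs more strongly than A shifts the balance back toward A, which is how two-wavelength switching between the states works in practice.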
Photochromic systems rely on irradiation to induce the isomerization. Some rely on irradiation for the reverse reaction, others use thermal activation for the reverse reaction. [ 14 ]
Azobenzene groups incorporated into crown ethers give switchable receptors. [ 17 ]
Some quinones, and phenoxynaphthacene quinone in particular, have photochromicity resulting from the ability of the phenyl group to migrate from one oxygen atom to another. Quinones with good thermal stability have been prepared, and they also have the additional feature of redox activity, leading to the construction of many-state molecular switches that operate by a mixture of photonic and electronic stimuli. [ 33 ]
Many inorganic substances also exhibit photochromic properties, often with much better resistance to fatigue than organic photochromics. In particular, silver chloride is extensively used in the manufacture of photochromic lenses . Other silver and zinc halides are also photochromic. Yttrium oxyhydride is another inorganic material with photochromic properties. [ 34 ]
Some inorganic photochromic materials include oxides such as BaMgSiO 4 , Na 8 [AlSiO 4 ] 6 Cl 2 , and KSr 2 Nb 5 O 15 . Additionally, rare-earth (RE)-doped compounds like CaF 2 :Ce, CaF 2 :Gd, as well as transition metal oxides such as WO 3 , TiO 2 , V 2 O 5 , and Nb 2 O 5 have been explored. [ 7 ] Photochromism in transition metal oxides is generally attributed to the redox reactions of the transition metal ion and the resulting electron transfer between its different valence states. When electrons are excited from the valence band to the conduction band, a hole is generated in the valence band. This photo-induced hole can decompose adsorbed water on the material’s surface, producing protons. These protons can react with transition metal ions in different valence states, forming hydrogen-based compounds that exhibit color changes. Upon exposure to light of a different wavelength or an oxidizing atmosphere, the reduced transition metal ion can undergo re-oxidation. [ 7 ]
Various forms of tungsten trioxide (WO 3 ), including bulk crystals, thin films, and quantum dots, have been studied for their photochromic properties. WO 3 transitions between two optical states, shifting from transparent to blue when exposed to light, heat, or electricity. The reversible color change is associated with the tungsten center's ability to undergo oxidation-reduction reactions, alternating between different oxidation states (W⁶⁺ to W⁵⁺ or W⁵⁺ to W⁴⁺). [ 35 ] [ 36 ]
Molybdenum trioxide (MoO 3 ) is widely used in UV sensing applications due to its selective absorption of UV light. Upon UV exposure, MoO 3 undergoes a photochromic transformation, which can be reversed in the presence of an oxidizing agent. MoO 3 nanosheets exhibit a stronger photochromic effect than the bulk materials due to enhanced carrier mobility and structural flexibility. [ 37 ] [ 38 ]
Photochromic coordination complexes are relatively rare compared to the organic compounds listed above. There are two major classes of photochromic coordination compounds: those based on sodium nitroprusside and the ruthenium sulfoxide compounds. The ruthenium sulfoxide complexes were created and developed by Rack and coworkers. [ 11 ] [ 12 ] The mode of action is an excited-state isomerization of a sulfoxide ligand on a ruthenium polypyridine fragment from S to O or O to S. The difference in bonding between Ru and S or O leads to the dramatic color change and change in Ru(III/II) reduction potential. The ground state is always S-bonded, and the metastable state is always O-bonded. Typically, absorption maxima changes of nearly 100 nm are observed. The metastable states (O-bonded isomers) of this class often revert thermally to their respective ground states (S-bonded isomers), although a number of examples exhibit two-color reversible photochromism. Ultrafast spectroscopy of these compounds has revealed exceptionally fast isomerization lifetimes ranging from 1.5 nanoseconds to 48 picoseconds. [ 12 ]
Reversible photochromism is the basis of color-changing lenses for sunglasses . The largest limitation in using photochromic technology is that the materials cannot be made stable enough to withstand thousands of hours of outdoor exposure, so long-term outdoor applications are not appropriate at this time.
The switching speed of photochromic dyes is highly sensitive to the rigidity of the environment around the dye. As a result, they switch most rapidly in solution and slowest in a rigid environment such as a polymer lens. [ 39 ] In 2005 it was reported that attaching flexible polymers with low glass transition temperatures (for example, siloxanes or polybutyl acrylate) to the dyes allows them to switch much more rapidly in a rigid lens. Some spirooxazines with siloxane polymers attached switch at near solution-like speeds even though they are in a rigid lens matrix. [ 40 ]
The use of photochromic compounds for data storage has long been a topic of speculation. [ 41 ] The area of 3D optical data storage promises discs that can hold a terabyte of data. [ 42 ]
Photochromism is a potential mechanism to store solar energy. The photochromic dihydroazulene–vinylheptafulvene system is a proof-of-concept. [ 43 ]
Photoconductive atomic force microscopy ( PC-AFM ) is a variant of atomic force microscopy that measures photoconductivity in addition to surface forces.
Multi-layer photovoltaic cells have gained popularity since the mid-1980s. [ 1 ] At the time, research was primarily focused on single-layer photovoltaic (PV) devices between two electrodes, in which PV properties rely heavily on the nature of the electrodes. In addition, single-layer PV devices notoriously have a poor fill factor , a property largely attributed to the resistance characteristic of the organic layer. The fundamentals of pc-AFM are modifications to traditional AFM that focus on the use of pc-AFM in PV characterization. In pc-AFM the major modifications include a second illumination laser, an inverted microscope and a neutral density filter. These components assist in the precise alignment of the illumination laser and the AFM tip within the sample. Such modifications must complement the existing principles and instrumental modules of pc-AFM so as to minimize the effect of mechanical noise and other interferences on the cantilever and sample.
The original exploration of the PV effect can be accredited to research published by Henri Becquerel in 1839. [ 2 ] Becquerel noticed the generation of a photocurrent after illumination when he submerged platinum electrodes within an aqueous solution of either silver chloride or silver bromide . [ 3 ] In the early 20th century, Pochettino and Volmer studied the first organic compound, anthracene , in which photoconductivity was observed. [ 2 ] [ 4 ] [ 5 ] Anthracene was heavily studied due to its known crystal structure and its commercial availability in high-purity single anthracene crystals. [ 6 ] [ 7 ] The studies of photoconductive properties of organic dyes such as methylene blue were initiated only in the early 1960s owing to the discovery of the PV effect in these dyes. [ 8 ] [ 9 ] [ 10 ] In further studies, it was determined that important biological molecules such as chlorophylls , carotenes , other porphyrins as well as structurally similar phthalocyanines also exhibited the PV effect. [ 2 ] Although many different blends have been researched, the market is dominated by inorganic solar cells which are slightly more expensive than organic based solar cells. The commonly used inorganic based solar cells include crystalline , polycrystalline , and amorphous substrates such as silicon , gallium selenide , gallium arsenide , copper indium gallium selenide and cadmium telluride .
With the high demand for cheap, clean energy sources persistently increasing, organic photovoltaic (OPV) devices (organic solar cells) have been studied extensively to help reduce the dependence on fossil fuels and contain the emission of greenhouse gases (especially CO 2 , NO x , and SO x ). Global demand for solar energy increased 54% in 2010, while the United States alone installed more than 2.3 GW of solar energy sources in 2010. [ 11 ] Some of the attributes that make OPVs such a promising candidate include their low cost of production, throughput, ruggedness, and chemically tunable electric properties, along with a significant reduction in the production of greenhouse gases . [ 12 ] For decades, researchers believed that the maximum power conversion efficiency (PCE) would most likely remain below 0.1%. [ 2 ] Only in 1979 did Tang report a two-layer, thin-film PV device, which ultimately yielded a power conversion efficiency of 1%. [ 1 ] Tang's research was published in 1986, which allowed others to decipher many of the problems that limited the basic understanding of the processes involved in OPVs. In later years, the majority of the research focused on the composite blend of poly(3-hexylthiophene) ( P3HT ) and phenyl-C61-butyric acid methyl ester (PCBM). This, along with the research performed on fullerenes , dictated the majority of studies pertaining to OPVs for many years. [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] In more recent research, polymer-based bulk heterojunction solar cells, along with low band-gap donor-acceptor copolymers, have been created for PCBM-based OPV devices. [ 13 ] [ 14 ] These low band-gap donor-acceptor copolymers are able to absorb a higher percentage of the solar spectrum compared to other high-efficiency polymers. [ 14 ] These copolymers have been widely researched due to their ability to be tuned for specific optical and electrical properties.
[ 14 ] To date, the best OPV devices have a maximum power conversion efficiency of approximately 8.13%. [ 19 ] This low power conversion efficiency is directly related to discrepancies in film morphology on the nano-scale. Explanations of film morphology effects include recombination and/or trapping of charges, low open-circuit voltages, heterogeneous interfaces, grain boundaries , and phase-separated domains. [ 14 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] Many of these problems arise from deficient knowledge of electro-optical properties on the nano-scale. In numerous studies, it has been observed that heterogeneities in the electrical and optical properties influence device performance. [ 12 ] These heterogeneities are a result of the manufacturing process, such as the annealing time, as explained below. Research has mainly consisted of discovering exactly how this film morphology affects device performance.
Until recently, microscopy methods used in the characterization of these OPVs consisted of atomic force microscopy (AFM), transmission electron microscopy (TEM) and scanning transmission X-ray microscopy (STXM). [ 27 ] These methods are very useful in the identification of the local morphology of the film surface, but lack the ability to provide fundamental information regarding local photocurrent generation and, ultimately, device performance. To obtain information linking the electrical and optical properties, the use of electrical scanning probe microscopy (SPM) is an active area of research. Electrostatic force microscopy (EFM) and scanning Kelvin probe microscopy (SKPM) have been utilized in studies of electron injection and charge trapping effects, while scanning tunneling microscopy (STM) and conductive atomic force microscopy (c-AFM) have been used to investigate electron transport properties within these organic semiconductors . [ 4 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ 33 ] Conductive AFM has been widely used in characterizing the local electric properties of both photovoltaic fullerene blends and organic films, but no reports have shown the use of c-AFM to display the distribution of photocurrents in organic thin films. [ 27 ] The most recent variants of SPM include time-resolved EFM (tr-EFM) and photoconductive AFM (pc-AFM). [ 27 ] Both of these techniques are capable of obtaining information regarding photo-induced charging rates with nano-scale resolution. [ 27 ] The advantage of pc-AFM over tr-EFM lies in the maximum obtainable resolution of each method: pc-AFM can map photocurrent distributions with approximately 20 nm resolution, whereas tr-EFM has only been able to obtain between 50 and 100 nm resolution at this time.
[ 27 ] Another important factor to note is that, although tr-EFM is capable of characterizing thin films within organic solar cells, it is unable to provide the needed information regarding either the capacitance gradient or the surface potential of the thin film. [ 34 ]
The origin of PC-AFM lies in the work performed by Gerd Binnig and Heinrich Rohrer on the scanning tunneling microscope (STM), for which they were awarded the Nobel Prize in Physics in 1986. They fabricated the STM and demonstrated that it provides surface topography on the atomic scale. [ 35 ] This microscopy technique yielded resolutions nearly equal to those of scanning electron microscopy (SEM). [ 35 ]
The fundamental principles of photoconductive atomic force microscopy (pc-AFM) are based on those of traditional atomic force microscopy (AFM), in that an ultrafine metallic tip scans the surface of a material to quantify topographic features. [ 36 ] [ 37 ] [ 38 ] [ 39 ] [ 40 ] [ 41 ] The working premises for all types of AFM techniques depend largely on the fundamentals of the AFM cantilever, the metallic tip, the scanning piezo-tube and the feedback loop that transfers information from lasers guiding the motion of the probe across the sample surface. The ultra-fine dimensions of the tip and the way the tip scans the surface produce lateral resolutions of 500 nm or less. In AFM, the cantilever and tip function as a mass on a spring. When a force acts on the spring (cantilever), the spring's response is directly related to the magnitude of the force. [ 37 ] [ 38 ] k is defined as the force constant of the cantilever.
Hooke's law for cantilever motion: [ 37 ] [ 38 ]
f = − k d {\displaystyle f=-kd}
The forces acting on the tip are such that the spring (cantilever) remains soft but responds to the applied force with a detectable resonant frequency, f o . In Hooke's law, k is the spring constant of the cantilever and m o is defined as the mass acting on the cantilever: the mass of the cantilever itself and the mass of the tip. Because k must be very small in order to keep the spring soft, m o must also be very small so that the ratio k / m o , and hence the resonant frequency, remains high. A typical m o value has a magnitude of 10 −10 kg and creates an f o of approximately 2 kHz. [ 40 ]
Expression for resonant frequency of a spring:
f o = 1 2 π k m o {\displaystyle f_{o}={\frac {1}{2\pi }}{\sqrt {\frac {k}{m_{o}}}}}
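For illustration, the resonance relation above can be evaluated numerically. The following Python sketch uses the ~10 −10 kg effective mass quoted in the text; the spring constant of 0.0158 N/m is an assumed value chosen to reproduce the ~2 kHz figure, not a value taken from the cited sources.

```python
import math

def resonant_frequency(k, m_o):
    """Resonant frequency f_o = (1/2*pi) * sqrt(k/m_o) of a cantilever
    modelled as a mass m_o (kg) on a spring of force constant k (N/m)."""
    return math.sqrt(k / m_o) / (2 * math.pi)

# An effective mass of ~1e-10 kg (from the text) with an assumed soft
# spring constant gives a resonant frequency of roughly 2 kHz.
f_o = resonant_frequency(k=0.0158, m_o=1e-10)
```

This makes the trade-off in the text concrete: shrinking k alone would lower f o, so m o must shrink along with it to keep the resonance high.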
Several forces affect the behavior of the cantilever : attractive and repulsive Van der Waals forces , and electrostatic repulsion . [ 38 ] Changes in these forces are monitored by a guide laser that is reflected off the back of the cantilever and detected by a photodetector . [ 36 ] [ 37 ] Attractive forces between the atoms on the sample surface and the atom at the AFM tip draw the cantilever tip closer to the surface. [ 18 ] When the cantilever tip and the sample surface come within a range of a few angstroms, repulsive forces come into play as a result of electrostatic interactions . [ 38 ] [ 41 ] There is also a force exerted by the cantilever pressing down on the tip. The magnitude of the force exerted by the cantilever depends upon the direction of its motion, that is, whether the tip is attracted to or repelled from the sample surface. [ 38 ] When the tip of the cantilever and the surface come into contact, the single atom at the point of the tip and the atoms on the surface exhibit a Lennard-Jones potential . The atoms exhibit attractive forces up to a certain separation and then experience repulsion from one another. The term r o is the separation at which the sum of the potentials between the two atoms is zero. [ 38 ] [ 41 ]
Force on AFM tip in terms of Lennard-Jones potential : [ 38 ] [ 41 ]
f = − d V d r = 24 ε r [ 2 ( r o r ) 12 − ( r o r ) 6 ] {\displaystyle f={-\mathrm {d} V \over \mathrm {d} r}={24\varepsilon  \over r}\left[2\left({r_{o} \over r}\right)^{12}-\left({r_{o} \over r}\right)^{6}\right]}
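The shape of this force law — repulsive inside the equilibrium separation, attractive beyond it, and zero at the minimum of the potential — can be checked with a short Python sketch in reduced units (the function name and the unit choices are illustrative, not from the cited sources):

```python
def lj_force(r, epsilon=1.0, r_o=1.0):
    """Force derived from the Lennard-Jones potential
    V(r) = 4*epsilon*[(r_o/r)**12 - (r_o/r)**6], where r_o is the
    separation at which V = 0:
        f = -dV/dr = (24*epsilon/r) * (2*(r_o/r)**12 - (r_o/r)**6)
    Positive f is repulsive, negative f is attractive.
    """
    s = r_o / r
    return (24.0 * epsilon / r) * (2.0 * s**12 - s**6)

# The force changes sign at the equilibrium separation r = 2**(1/6) * r_o:
# closer in it is repulsive, farther out it is weakly attractive.
r_eq = 2.0 ** (1.0 / 6.0)
```

This sign change is what the cantilever senses as it approaches the surface: attraction at a distance, then strong repulsion within a few angstroms.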
Modifications of this early work have been implemented to perform AFM analysis on both conducting and non-conducting materials. Conductive atomic force microscopy (c-AFM) is one such modification. The c-AFM technique operates by measuring fluctuations in current between the biased tip and the sample while simultaneously measuring changes in the topographical features. [ 12 ] In all techniques of AFM, two modes of operation can be used: contact mode and non-contact mode. [ 36 ] In c-AFM, resonant contact mode is used to obtain topographical information from the current measured between the biased AFM tip and the sample surface. [ 12 ] In this type of operation, the current is measured in the small space between the tip and the sample surface. [ 12 ] This quantification is based on the relationship between the current traveling through the sample and the layer thickness. [ 42 ] In the following equation, A eff is the effective emission area at the injecting electrode, q is the electron charge, h is the Planck constant, m eff / m 0 = 0.5, which is the effective mass of an electron in the conduction band of the sample, d is the sample thickness and Φ is the barrier height. [ 42 ] The symbol β , the field enhancement factor, accounts for the non-planar geometry of the tip used. [ 42 ]
Relationship between conducting current and sample layer thickness: [ 42 ]
I = A eff ( q 2 m o 8 π h m eff ) ( 1 t ( E ) 2 ) ( β 2 V 2 ϕ d 2 ) e − ( 8 π ( 2 m eff q ) 1 2 3 h ) ( ν ( E ) ) ( d β V ) ( ϕ 3 2 ) {\displaystyle I=A_{\text{eff}}\left({\frac {q^{2}m_{o}}{8\pi hm_{\text{eff}}}}\right)\left({\frac {1}{t\left(E\right)^{2}}}\right)\left({\frac {\beta ^{2}V^{2}}{\phi d^{2}}}\right)e^{-\left({\frac {8\pi \left(2m_{\text{eff}}q\right)^{\frac {1}{2}}}{3h}}\right)\left(\nu \left(E\right)\right)\left({\frac {d}{\beta V}}\right)\left(\phi ^{\frac {3}{2}}\right)}}
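The steep, exponential dependence of this tunneling current on sample thickness can be illustrated numerically. The Python sketch below evaluates a Fowler–Nordheim-style expression of the form given above; all parameter values (emission area, barrier height, correction factors) are assumed for illustration and are not taken from the cited work.

```python
import math

Q = 1.602e-19    # electron charge, C
H = 6.626e-34    # Planck constant, J*s
M0 = 9.109e-31   # free-electron mass, kg

def tunneling_current(V, d, phi, beta=1.0, A_eff=1e-16,
                      m_ratio=0.5, t_E=1.0, nu_E=1.0):
    """Fowler-Nordheim-style tip-sample tunneling current.

    V: applied bias (V); d: sample thickness (m); phi: barrier height (V);
    beta: field-enhancement factor for the non-planar tip geometry;
    A_eff: effective emission area (m^2, assumed); m_ratio: m_eff/m_0
    (0.5 per the text); t_E, nu_E: slowly varying correction functions (~1).
    """
    m_eff = m_ratio * M0
    prefactor = (A_eff * Q**2 * M0 / (8 * math.pi * H * m_eff)
                 / t_E**2 * beta**2 * V**2 / (phi * d**2))
    exponent = -(8 * math.pi * math.sqrt(2 * m_eff * Q) / (3 * H)
                 * nu_E * d / (beta * V) * phi**1.5)
    return prefactor * math.exp(exponent)

# Doubling the thickness suppresses the current by orders of magnitude,
# which is why the measured current is so sensitive to layer thickness.
i_thin = tunneling_current(V=1.0, d=1e-9, phi=1.0)
i_thick = tunneling_current(V=1.0, d=2e-9, phi=1.0)
```

The exponential term dominates: it is this thickness sensitivity that lets c-AFM relate the measured current to the local layer thickness.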
The accuracy of all AFM techniques relies heavily on a sample scanning tube, the piezo-tube. The piezo-tube scanner is responsible for the direction of tip displacement during a sample analysis and is dependent on the mode of analysis. The piezo components are either arranged orthogonally or manufactured as a cylinder. [ 36 ] [ 37 ] In all techniques, sample topography is measured by the movement of the x and y piezos. When performing non-contact mode pc-AFM, the piezo-tube keeps the probe from moving in the x and y directions, and the photocurrent between the sample surface and the conducting tip is measured in the z-direction. [ 36 ] [ 37 ]
The principle of the piezo-tube depends upon how the piezo-electric material responds to a voltage applied to either the interior or exterior of the tube. When a voltage is applied to the two electrodes connected to the scanner, the tube expands or contracts, causing motion of the AFM tip in the direction of this movement. This phenomenon is illustrated as the piezo-tube is displaced by an angle θ. As the tube bends, the sample, which in traditional AFM is fixed to the tube, undergoes lateral translation and rotation relative to the AFM tip, thus generating movement of the tip in the x and y directions. [ 43 ] When voltage is applied to the inside of the tube, movement in the z-direction is implemented.
The relationship between the movement of the piezo-tube and the direction of the displacement of the AFM tip assumes that the tube is perfectly symmetric. [ 43 ] When no voltage is applied to the tube, the z-axis bisects the tube, sample and sample stage symmetrically. When a voltage is applied to the exterior of the tube (x and y motion), the expansion of the tube can be understood as a circular arc. In these equations, the r term indicates the outside radius of the piezo-tube, R is the curvature radius of the tube with applied voltage, θ is the bend angle of the tube, L is the initial length of the tube and ΔL is the extension of the tube after the voltage is applied. [ 43 ] The change in length of the piezo-tube, ΔL , is expressed in terms of the intensity of the electric field applied to the exterior of the tube, the voltage along the x-axis, U x , and the thickness, t , of the wall of the tube.
Expressions for bend geometry of piezo-tube: [ 43 ]
L − Δ L = ( R − r ) Θ {\displaystyle L-\Delta L=\left(R-r\right)\Theta }
L + Δ L = ( R + r ) Θ {\displaystyle L+\Delta L=\left(R+r\right)\Theta }
Length displacement in terms of exterior electric field: [ 43 ]
Δ L = d 31 E L = ( d 31 L t ) U x {\displaystyle \Delta L=d_{31}EL=\left({\frac {d_{31}L}{t}}\right)U_{x}}
Expression for tube displacement, θ : [ 43 ]
Θ = L R = ( d 31 L t r ) U x {\displaystyle \Theta ={\frac {L}{R}}=\left({\frac {d_{31}L}{tr}}\right)U_{x}}
With the calculation of θ , the displacement of the probe in the x and z directions can be calculated as:
Expressions for probe displacement in the x- and z-directions: [ 43 ]
d x = ( R + χ ) ( 1 − cos Θ ) + ( D s s + D s p ) U x {\displaystyle dx=(R+\chi )\left(1-\cos \Theta \right)+\left(D_{ss}+D_{sp}\right)U_{x}}
d z = ( ( R + χ ) sin Θ − L ) + ( D s s + D s p ) ( cos Θ − 1 ) {\displaystyle dz=\left(\left(R+\chi \right)\sin \Theta -L\right)+\left(D_{ss}+D_{sp}\right)\left(\cos \Theta -1\right)}
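Taken together, the relations above let one estimate the tip motion produced by an applied voltage. The Python sketch below chains them in simplified form, neglecting the (D ss + D sp ) stage-offset terms; the tube dimensions and d 31 coefficient are illustrative values, not taken from the cited reference.

```python
import math

def piezo_tip_displacement(U_x, L=20e-3, r=4e-3, t=0.5e-3, d31=-1.8e-10):
    """Simplified bend of a symmetric piezo-tube under a lateral voltage U_x.

    Follows the relations in the text:
        dL    = (d31 * L / t) * U_x   # wall-length change
        theta = dL / r                # bend angle, from the arc relations
        R     = L / theta             # radius of curvature
    Returns (dx, dz): the lateral tip motion and the small vertical
    cross-coupling error, with the stage-offset terms omitted.
    All dimensions are illustrative, not those of a specific scanner.
    """
    dL = (d31 * L / t) * U_x
    if dL == 0:
        return 0.0, 0.0
    theta = dL / r
    R = L / theta
    dx = R * (1.0 - math.cos(theta))   # lateral displacement
    dz = R * math.sin(theta) - L       # vertical coupling error
    return dx, dz

# ~100 V of drive yields micrometre-scale lateral motion with only a
# sub-nanometre vertical error for these illustrative dimensions.
dx, dz = piezo_tip_displacement(100.0)
```

The smallness of dz relative to dx is the x–z cross-coupling discussed later in the text, which separated piezo-scanners are designed to eliminate.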
Another fundamental concept of all AFM is the feedback loop . The feedback loop is especially important in non-contact AFM techniques such as pc-AFM. As previously mentioned, in non-contact mode the cantilever is stationary and the tip does not come into physical contact with the sample surface. [ 36 ] The cantilever behaves as a spring and oscillates at its resonance frequency. Topographical variance causes the spring-like oscillations of the cantilever to change amplitude and phase in order to prevent the tip from colliding with sample topographies. [ 37 ] The non-contact feedback loop is used to control these changes in the oscillations of the cantilever. [ 37 ] The application of AFM to conducting samples (c-AFM) has in recent years evolved into the modification used for analysis of morphologies on the local scale, particularly morphologies at heterojunctions of multilayered samples. [ 12 ] [ 18 ] [ 44 ] [ 45 ] [ 46 ] Photoconductive atomic force microscopy (pc-AFM) is particularly prevalent in the development of organic photovoltaic devices (OPV). [ 12 ] [ 45 ] [ 46 ] The fundamental modification of c-AFM to pc-AFM is the addition of an illumination source and an inverted microscope that focuses the laser to a nanometer-scale point directly underneath the conductive AFM tip. [ 18 ] [ 44 ] The main requirement for the illumination laser point is that it must be small enough to fit within the confines of ultra-thin films. These characteristics are achieved by using a monochromatic light source and a laser filter. [ 18 ] [ 44 ] In the OPV application, applying the illumination laser to the confines of ultra-thin films is further assisted by the recent development of the bulk heterojunction (BHJ) mixture of electron donating and accepting material in the film.
[ 46 ] The combination of the conductive tip and illumination laser provides photocurrent images, with measured currents in the range of 0 to 10 pA, that are overlaid with the topographical data obtained. [ 18 ] [ 44 ] [ 47 ] Also unique to this modification are the spectral data gathered by comparing the current between the tip and sample to a variety of parameters, including laser wavelength, applied voltage and light intensity. [ 44 ] The pc-AFM technique has also been reported to detect local surface oxidation at a vertical resolution of 80 nm. [ 42 ]
The instrumentation involved in pc-AFM is very similar to that necessary for traditional AFM or modified conductive AFM. The main difference between pc-AFM and other types of AFM instruments is the illumination source, which is focused through the inverted microscope objective, and the neutral density filter positioned adjacent to the illumination source. [ 12 ] [ 18 ] [ 44 ] [ 47 ] The technical parameters of pc-AFM are identical to those of traditional AFM techniques. [ 12 ] [ 18 ] [ 36 ] [ 44 ] [ 47 ] This section will focus on the instrumentation necessary for AFM and then detail the requirements for pc-AFM modification.
The main instrumental components to all AFM techniques are the conductive AFM cantilever and tip, the modified piezo components and the sample substrate. [ 36 ] [ 48 ] The components for photoconductive modification include: the illumination source (532 nm laser), filter and inverted microscope. When modifying traditional AFM for pc application, all components must be combined such that they do not interfere with one another and so that various sources of noise and mechanical interference do not disrupt the optical components. [ 48 ]
In traditional instrumentation, the stage is a cylindrical piezo-tube scanner that minimizes the effect of mechanical noise . [ 48 ] [ 49 ] Most cylindrical piezos are between 12 and 24 mm in length and 6 and 12 mm in diameter. [ 25 ] The exterior of the piezo-tube is coated with a thin layer of conducting metal so that this region can sustain an electric field . [ 25 ] The interior of the cylinder is divided into four regions (x and y regions) by non-conducting strips. [ 36 ] [ 49 ] Electrical leads are fixed to one end and the exterior wall of the cylinder so that a voltage can be applied. When a voltage is applied to the exterior, the cylinder expands in the x and y directions. Voltage along the interior of the tube causes cylinder expansion in the z-direction and thus movement of the tip in the z-direction. [ 36 ] [ 48 ] [ 49 ] The placement of the piezo tube is dependent upon the type of AFM performed and the mode of analysis. However, the z-piezo must always be fixed above the tip and cantilever to control the z-motion. [ 37 ] This configuration is most often seen in the c-AFM and pc-AFM modifications to make room for additional instrumental components which are placed below the scanning stage. [ 48 ] This is particularly true for pc-AFM, which must have the piezo-components arranged above the cantilever and tip so that the illumination laser can transmit through the sample. [ 50 ]
In some configurations, the piezo components can be arranged in a tripod design. In this type of set-up, the x, y and z components are arranged orthogonally to one another with their apex attached to a movable pivot point. [ 37 ] Similar to the cylindrical piezo, in the tripod design the voltage is applied to the piezo corresponding to the appropriate direction of tip displacement. [ 37 ] In this type of set-up the sample and substrate are mounted on top of the z-piezo component. When the x and y piezo components are in use, the orthogonal design causes them to push against the base of the z-piezo, causing the z-piezo to rotate about a fixed point. [ 37 ] Applying voltage to the z-piezo causes the tube to move up and down on its pivot point. [ 37 ]
The other essential components of AFM instrumentation include the AFM tip module, which includes the AFM tip, the cantilever, and the guiding laser. [ 36 ] When the piezo-tube is positioned above the cantilever and tip, the guiding laser is focused through the tube and onto a mirror on the back of the cantilever. [ 51 ] The guiding laser is reflected off the mirror and detected by a photodetector . When the forces acting on the tip change, the cantilever deflects and the position of the reflected beam on the detector shifts. [ 36 ] [ 49 ] The output from this detector acts as a response to the changes in force, and the cantilever adjusts the position of the tip while keeping constant the force that acts on the tip. [ 36 ] [ 49 ] [ 51 ]
The instrumentation of conductive AFM (c-AFM) has evolved with the desire to measure local electrical properties of materials with high resolutions. The essential components are: the piezo-tube, the guide laser, the conducting tip, and cantilever. Although these components are identical to traditional AFM their configuration is tailored to measuring surface currents on the local scale.
As mentioned previously, the piezo-tube can be placed either above or below the sample, depending on the application of the instrumentation. In the case of c-AFM, repulsive contact mode is predominantly used to obtain electric current images from the surface as the sample moves in the x and y directions. Placing the z-piezo above the cantilever allows for better control of the cantilever and tip during analysis. [ 37 ] The material that comprises the conductive tip and cantilever can be customized for a particular application. Metal-coated cantilevers, gold wires, all-metal cantilevers and diamond cantilevers are used. [ 52 ] In many cases diamond is the preferred material for the cantilever and/or tip because it is an extremely hard material that does not oxidize under ambient conditions. [ 52 ] The main difference between the instrumentation of c-AFM and STM is that in c-AFM the bias voltage can be directly applied to the nanostructure (tip and substrate). [ 53 ] In STM, on the other hand, the applied voltage must be supported within the vacuum tunneling gap between the STM probe and the surface. [ 36 ] [ 53 ] When the tip is in close contact with the sample surface, the application of a bias voltage to the tip creates a vacuum gap between the tip and the sample that enables the investigation of electron transport through nanostructures. [ 53 ]
The main components of c-AFM instrumentation are identical to those required for a pc-AFM module. The only modifications are the illumination source, filter and inverted microscope objective that are located beneath the sample substrate. In fact, most pc-AFM instruments are simply modified from existing c-AFM instrumentation. The first report of this instrumental modification came in 2008. In that paper, Lee and coworkers implemented the aforementioned modifications to examine the resolution of photocurrent imaging. Their design consisted of three main units: a conductive mirror plate, a steering mirror and a laser source.
The main difficulty with the previously existing c-AFM instrumentation was its inability to characterize photonic devices. [ 55 ] Specifically, it is difficult to measure changes in local and nano-scale electrical properties that result from the photonic effect. [ 55 ] The optical illumination component (laser) was added to the c-AFM module in order to make such properties visible. Early in development, the main concerns regarding pc-AFM included physical configuration, laser disturbance and laser alignment. [ 55 ] Although many of these concerns have been resolved, pc-AFM modules are still widely modified from c-AFM and traditional AFM instruments.
The first main concern deals with component configuration and whether or not there is physically enough space for modification in the cramped c-AFM module. The component configuration must be such that the addition of the laser illumination component does not disturb the other units. [ 55 ] [ 56 ] Interaction between the illumination laser and the guiding laser was also a concern. First attempts to address these two issues were to place a prism between the tip and the sample surface, such that the prism would allow the illumination laser to reflect at the prism interface and thus be focused to a localized spot on the sample surface. [ 45 ] [ 55 ] However, the lack of space for the prism and the production of multiple light reflections upon introducing a prism required a different concept for configuration.
The module constructed by Lee et al. implemented a tilted mirror plate that was positioned underneath the sample substrate. This conductive mirror was tilted at 45° and successfully reflected the illuminating laser to a focused spot directly underneath the conductive tip. [ 55 ] The steering mirror was employed as a means of controlling the trajectory of the laser source; with this addition, the position of the reflected beam on the sample could be easily adjusted for placement underneath the AFM tip. [ 55 ] The illumination laser source was a diode-pumped solid-state laser system that produced a wavelength of 532 nm and a spot approximately 1 mm in size on the sample.
The addition of the mirror and laser underneath the sample substrate raises the sample substrate, resulting in a higher scanning level. This configuration has no effect on any other instrument component and does not affect AFM performance. [ 55 ] This result was confirmed by identical topographical images taken with and without the placement of the mirror and laser. This particular set-up required the separation of the x, y and z piezo-scanners. The separation of piezo-tubes eliminates the x-z cross-coupling and scanning-size errors that are common in traditional AFM. [ 55 ]
In addition, there was no evidence of interference between the guiding laser and the irradiation laser. The guiding laser, at a wavelength of 650 nm, hits the mirror on the back of the conducting cantilever from a vertical trajectory and is reflected away from the cantilever towards the position-sensitive photodetector (PSPD). [ 55 ] The illumination beam, on the other hand, travels from underneath the sample platform and is reflected into position by the reflecting mirror. The angle of the mirror plate ensures that the beam does not extend past the sample surface. [ 55 ]
The conductive AFM tip was easily aligned over the reflected illumination beam. The laser spot on the sample was reported to be 1 mm in size and can be located using the AFM recording device. [ 55 ] A convenience of this technique is that laser alignment is only necessary for imaging in the z-direction because the photocurrents are mapped in this direction. [ 55 ] Therefore, normal AFM/c-AFM can be implemented for analysis in the x and y directions.
The instrumental module proposed by Lee et al. produced illumination laser spot sizes of 1 mm in diameter. Recent applications have altered Lee's design in order to decrease the spot size while simultaneously increasing the intensity of this laser. Recent instrumentation has replaced the angled mirror with an inverted microscope and a neutral density filter. [ 12 ] [ 18 ] [ 44 ] [ 46 ] [ 47 ] In this device the x and y piezos, illumination laser and inverted microscope are confined underneath the sample substrate, while the z-piezo remains above the conductive cantilever. [ 12 ] [ 18 ] [ 44 ] [ 46 ] [ 47 ] [ 57 ] In the applications of Ginger et al., a neutral-density filter is added to control laser attenuation, and the precision of laser alignment is enhanced by the addition of the inverted microscope.
One of the most common pc-AFM setups incorporates a light source which emits in the visible spectrum, along with an indium tin oxide (ITO) semi-conductive layer (used as the bottom cathode ). [ 2 ] A gold-plated silicon AFM probe is often used as the top anode in pc-AFM studies. This electrode, which carries a relatively small current, generates nano-scale holes within the sample material, allowing the two electrodes to detect the relatively small change in conductance due to the flow from the top electrode to the bottom electrode. [ 44 ] The combination of these elements produced laser intensities in the range of 10 to 10 8 W/m 2 and decreased the size of the laser spot to sub-micrometer dimensions, making this technique useful for the application of nanometer-thin OPV films. [ 12 ] [ 46 ] [ 57 ]
Although there is significant insight as to how OPVs work, it is still difficult to relate the device's functionality to local film structures. [ 27 ] This difficulty may be attributed to the minimal current generation at a given point within OPVs. [ 12 ] Through pc-AFM, OPV devices can be probed at the nano-scale, which can help to increase our fundamental knowledge of the mechanisms involved in OPVs at this level. [ 47 ] pc-AFM is capable of gathering information such as the mapping of photocurrents, differences in film morphology, determination of donor-acceptor domains, current density-voltage plots, quantum efficiencies, and approximate charge carrier mobilities. [ 12 ] [ 16 ] [ 46 ] [ 47 ] [ 58 ] [ 59 ] [ 60 ] [ 61 ] [ 62 ] [ 63 ] Another notable characteristic of pc-AFM is its ability to provide concurrent information on the topographical and photocurrent properties of the device at the nano-scale. [ 17 ] With this concurrent sampling method, sample handling is minimized, which can provide more accurate results. In a study by Pingree et al., pc-AFM was used to measure how spatial deviations in photocurrent generation developed with different processing techniques. [ 16 ] The authors were able to compare these photocurrent variations to the duration of the annealing process. [ 16 ] They concluded that lengthening the annealing time allows for improved nano-scale phase separation as well as creating a more ordered device. [ 16 ] Actual times for the annealing process vary depending on the properties of the polymers used. [ 16 ] The authors showed that external quantum efficiency (EQE) and power conversion efficiency (PCE) levels reach a maximum at certain annealing times, whereas the electron and hole mobilities do not show corresponding trends. [ 16 ] Therefore, while lengthening the annealing time can increase the photocurrents within the OPV, there is a practical limit after which the benefits may not be substantial.
[ 16 ] Besides functional properties, pc-AFM can also be used to interrogate the compositional heterogeneity of OPVs when combined with either Raman or infrared (IR) spectroscopy, and it is especially valuable for studying their degradation. [ 64 ]
In more recent studies, pc-AFM has been employed to gather information regarding photoactive regions from the use of quantum dots . [ 65 ] Because of their relative ease of use, along with size-tunable excitation attributes, quantum dots have commonly been applied as sensitizers in optoelectronic devices. [ 65 ] The authors have studied the photoresponse of sub-surface features such as buried indium arsenide (InAs) quantum dots through the implementation of pc-AFM. [ 65 ] Through the use of pc-AFM, information regarding quantum dot size, as well as the dispersion of quantum dots within the device, can be recorded in a non-destructive manner. [ 65 ] This information can then be used to display local variances in photoactivity relating to heterogeneities within the film morphology. [ 65 ]
Sample preparation of the OPV is of the utmost importance when performing pc-AFM studies. The sampling substrate is recommended to be conductive, as well as transparent to the light source which is irradiated upon it. [ 66 ] Numerous studies have used ITO -coated glass as their conductive substrate. Because of the high cost of ITO, however, there have been attempts to utilize other semiconducting layers, such as zinc oxide (ZnO) and carbon nanotubes , as alternatives to ITO. [ 21 ] [ 55 ] Although these semiconductors are relatively inexpensive, high quality ITO layers are still used extensively for PV applications. Poly(3,4-ethylenedioxythiophene) poly(styrenesulfonate), more commonly known as PEDOT:PSS , is a transparent, polymeric conductive layer which is usually placed between the ITO and the active OPV layer. PEDOT:PSS is a conductive polymer that is stable over a range of applied voltages. [ 67 ] In most studies, PEDOT:PSS is spin-coated onto the ITO-coated glass substrates directly after plasma cleaning of the ITO. [ 66 ] Plasma cleaning, as well as halo-acid etching, have been shown to improve the surface uniformity and conductivity of the substrate. [ 12 ] This PEDOT:PSS layer is then annealed to the ITO prior to spin-coating the OPV layer onto the substrate. Studies by Pingree et al. have shown the direct correlation between annealing time and both peak and average photocurrent generation. [ 16 ] Once the OPV film is spin-coated onto the substrate, it is then annealed at temperatures between 70 and 170 °C, for periods of up to an hour, depending on the procedure as well as the OPV being used. [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 18 ] [ 20 ] [ 66 ] [ 67 ]
A recently developed OPV system based on tetrabenzoporphyrin (BP) and [6,6]-phenyl-C 61 -butyric acid methyl ester (PCBM) is explained in detail as follows. [ 67 ] In this study, a solution of the BP precursor (1,4:8,11:15,18:22,25-tetraethano-29H,31H-tetrabenzo[b,g,l,q]porphyrin, CP) is applied as the starting film and thermally annealed, which converts the CP into BP. [ 67 ] The BP:fullerene layer serves as the undoped layer within the device. For surface measurements, the undoped layer is rinsed with a few drops of chloroform and spin-dried until the BP network is exposed at the donor/acceptor interface. [ 67 ] For bulk heterojunction characterization, an additional fullerene solution is spin-coated onto the undoped layer; a thin layer of lithium fluoride is then deposited, followed by either an aluminum or gold cathode which is thermally annealed to the device. [ 13 ] [ 15 ] [ 20 ] [ 67 ] The thin layer of lithium fluoride is deposited to help prevent the oxidation of the device. [ 68 ] Controlling the thickness of these layers plays a significant role in the efficiency of the PV cells. Typically, the thickness of the active layers is smaller than 100 nm to produce photocurrents. This dependence on layer thickness arises because an electron must be able to travel distances on the order of the exciton diffusion length within the applied electric field. Many of the organic semiconductors used in the PV devices are sensitive to water and oxygen. [ 12 ] This is due to the likelihood of photo-oxidation which can occur when exposed to these conditions. [ 12 ] While the top metal contact can prevent some of this, many studies are performed either in an inert atmosphere such as nitrogen, or under ultra-high vacuum (UHV). [ 12 ]
Once the sample preparation is complete, the sample is placed onto the scanning stage of the pc-AFM module. This scanning stage is used for x-y piezo translation, completely independent of the z-direction, which is handled by a z-piezo scanner. The piezo-electric material within this scanner converts a change in the applied potential into mechanical motion which moves the samples with nanometer resolution and accuracy. There are two variations in which the z-piezo scanner functions: one is contact mode while the other is tapping mode.
Many commercial AFM cantilever tips have pre-measured resonant frequencies and force constants which are provided to the customer. As sampling proceeds, the cantilever tip's position changes, which causes the scanning laser (650 nm) to deviate from its original position on the detector. [ 32 ] [ 66 ] The z-piezo scanner then recognizes this deviation and moves vertically to return the laser spot to its set position. [ 32 ] This vertical movement by the z-piezo scanner is correlated to a change in voltage. [ 32 ] Sampling in contact mode relies upon intermolecular forces between the tip and surface, as described by the Van der Waals force . As the sampling begins, the tip is moved close to the sample, which creates a weakly attractive force between them. Another force which is often present in contact mode is the capillary force due to hydration on the sample surface. This force is due to the ability of the water to contact the tip, thus creating an undesirable attractive force. Capillary force , along with several other sources of tip contamination, are key factors in the decreased resolution observed while sampling.
There are considerations which need to be taken into account when determining which mode is optimal for sampling for a given application. It has been shown that sampling in contact mode with very soft samples can damage the sample and render it useless for further studies. [ 20 ] Sampling in non-contact mode is less destructive to the sample, but the tip is more likely to drift out of contact with the surface and thus may not record data. [ 32 ] Drifting of the tip is also seen as a result of piezo hysteresis, which causes displacement through molecular friction and polarization effects under the applied electric field.
It is important to note the correlation between resolution and the radius of curvature of the tip. Early STM tips used by Binnig and Rohrer were fairly large, anywhere from a few hundred nm to 1 μm in radius. [ 35 ] In more recent work, the tip radius of curvature was reported as 10–40 nm. [ 15 ] [ 16 ] [ 18 ] [ 66 ] Reducing the radius of curvature of the tip enhances the detection of deviations within the OPV's surface morphology. Tips often need to be replaced due to tip rounding, which leads to a decrease in resolution. [ 32 ] Tip rounding occurs due to the loss of the outermost atoms present at the apex of the tip, which can be a result of excessive applied force or the character of the sample. [ 32 ]
Because of the extremely small radius of the AFM tip, the illumination source can be focused more tightly, thus increasing its efficiency. Typical arrangements for pc-AFM contain a low-powered 532 nm laser (2–5 mW) whose beam is reflected off mirrors located beneath the scanning stage. [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 18 ] [ 20 ] Through the use of a charge-coupled device (CCD), the tip can easily be positioned directly over the laser spot. [ 66 ] Xenon arc lamps have also been widely used as illumination sources, but are atypical in recent work. [ 17 ] In a study by Coffey et al., lasers of two different wavelengths (532 nm and 405 nm) were irradiated onto the same sample area. [ 18 ] With this work, they showed images with identical contrast, indicating that the photocurrent variations are less related to spatial absorbance variation. [ 18 ]
Most sampling procedures begin by obtaining dark-current images of the sample. Dark current refers to the photocurrent generated by the OPV in the absence of an illumination source. The cantilever and tip are simply rastered across the sample while topographic and current measurements are obtained. This data can then serve as a reference for determining the impact the illumination process has on the OPV. Short-circuit measurements are also commonly performed on OPV devices: the illumination source is engaged with zero potential applied to the sample. Nguyen and co-workers noted that a positive photocurrent reading correlated to the conduction of holes, while a negative reading correlated to the conduction of electrons. [ 67 ] This alone allowed the authors to make predictions regarding the morphology within the cell. The current density for the forward and reverse bias can be calculated as follows: [ 17 ]
Current density equation:
J = 9 8 ε o ε r μ V 2 L 3 {\displaystyle J={\frac {9}{8}}\varepsilon _{o}\varepsilon _{r}\mu {\frac {V^{2}}{L^{3}}}}
where J is the current density, ε o is the permittivity of a vacuum, ε r is the relative permittivity (dielectric constant) of the medium, μ is the carrier mobility, V is the applied bias and L is the film thickness. [ 67 ] The majority of organic materials have relative permittivity values of ~3 in their amorphous and crystalline states. [ 47 ] [ 68 ] [ 69 ]
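A rough numerical sketch of this space-charge-limited current density, in its standard Mott–Gurney form J = (9/8)ε₀εᵣμV²/L³, is shown below; all material values are illustrative assumptions, not taken from the cited studies.

```python
# Rough sketch of the space-charge-limited (Mott-Gurney) current density.
# All material values are illustrative assumptions, not from the cited work.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def sclc_current_density(eps_r, mu, v_bias, thickness):
    """J = (9/8) * eps0 * eps_r * mu * V^2 / L^3, in A/m^2 (SI units)."""
    return 9.0 / 8.0 * EPS0 * eps_r * mu * v_bias**2 / thickness**3

# Assumed values: eps_r ~ 3 (typical amorphous organic), mobility
# 1e-8 m^2/(V s), 1 V bias across a 100 nm film.
J = sclc_current_density(eps_r=3.0, mu=1e-8, v_bias=1.0, thickness=100e-9)
print(f"J = {J:.3g} A/m^2")  # a few hundred A/m^2 for these inputs
```

Note how strongly J scales with thickness: halving L raises J by a factor of eight.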
The bias commonly applied is usually limited to the range −5 V to +5 V for most studies. [ 7 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 18 ] [ 20 ] [ 55 ] This can be achieved by applying a forward or reverse bias to the sample through the spotted gold contact. By adjusting this bias, along with the current passing through the cantilever, one can tune the repulsive/attractive forces between the sample and the tip. When a reverse bias is applied (tip negative relative to the sample), the tip and the sample experience attractive forces between them. [ 16 ] This current density measurement is then combined with the topographical information previously gathered by the AFM tip and cantilever. The resulting image displays the local variations in morphology with the current density measurements superimposed onto them.
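The superposition of a current map onto topography described above can be sketched as follows. This is a hypothetical illustration: the array names, grid size and values are all assumptions, not data from any cited study.

```python
# Hypothetical illustration of superimposing a photocurrent map onto a
# topography map recorded over the same raster grid; array names, sizes
# and values are assumptions, not data from any cited study.
import numpy as np

def overlay_maps(topography, current, alpha=0.5):
    """Normalize both maps to [0, 1] and blend them for display."""
    def normalize(a):
        a = a.astype(float)
        span = a.max() - a.min()
        return (a - a.min()) / span if span else np.zeros_like(a)
    return (1 - alpha) * normalize(topography) + alpha * normalize(current)

# 64x64 dummy raster: a height gradient plus a localized photocurrent spot
topo = np.tile(np.linspace(0.0, 10.0, 64), (64, 1))   # heights, nm
curr = np.zeros((64, 64))
curr[20:30, 20:30] = 5.0                              # photocurrent, pA
blended = overlay_maps(topo, curr)
print(blended.shape)
```

In practice the blended array would be rendered as a false-color image, with the photocurrent channel highlighting conductive domains on top of the height data.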
Several methods have been employed to help reduce both mechanical and acoustic vibrations within the system. Mechanical vibrations are mainly attributed to traffic in and out of a building. Other sources of mechanical vibration are often seen on the higher stories of a building due to reduced damping from building supports. This source of vibrational noise is easily controlled through the use of a vibration isolation table. Acoustic vibrations are far more common than mechanical vibrations. This type of vibration results from air movement near the instrument, such as fans or human voices. Several methods have been developed to help reduce this source of vibration. An easy solution is separating the electronic components from the stage, because the cooling fans within the electrical devices are a constant source of vibrational noise while operating. In most cases, other methods still need to be employed to help reduce this source of noise. For instance, the instrument can be placed within a sealed box constructed of acoustic dampening material. Smaller stages also present less surface area for acoustic vibrations to collide with, thus reducing the noise recorded. A more in-depth solution consists of removing all sharp edges on the instrument, since sharp edges can excite resonances within the piezoelectric materials, which increase the acoustic noise within the system. [ 58 ]
Photoconductive polymers absorb electromagnetic radiation and exhibit an increase in electrical conductivity . They have been used in a wide variety of technical applications such as xerography (electrophotography) and laser printing . Electrical conductivity is usually very small in organic compounds , whereas conductive polymers usually have large electrical conductivity. A photoconductive polymer is a smart material based on a conductive polymer, whose electrical conductivity can be controlled by the amount of incident radiation.
The basic parameters of photoconductivity are the quantum efficiency of carrier generation ( Υ {\displaystyle \Upsilon } ), the carrier mobility ( μ {\displaystyle \mu } ), the electric field (E), the temperature (T), and the concentration (C) of charge carriers. The intrinsic properties of photoconductive polymers are the quantum efficiency ( Υ {\displaystyle \Upsilon } ) and the carrier mobility ( μ {\displaystyle \mu } ), which determine the photocurrent . The photocurrent is affected by four kinds of processes: charge-carrier generation , charge injection , charge trapping , and charge-carrier transport .
Hundreds of photoconductive polymers have been disclosed in patents and literature. [ 1 ] There are mainly two types of photoconductive polymer: negative photoconductive polymers and magnetic photoconductive polymers.
Photoconductivity is an optical and electrical phenomenon in which a material's electrical conductivity increases through the absorption of electromagnetic radiation (e.g. visible light, ultraviolet light, infrared light). Photoconductive polymers can serve as good insulators in the dark, when free electrons and holes are absent.
In general, these polymers satisfy the following two features.
1. Photoconductive polymers can absorb light to excite electrons from the ground state to an excited state. The photoexcited electron forms a bound pair of charge carriers, which can be separated by an electric field.
2. Photoconductive polymers must allow migration of either photoexcited electrons or holes, or both, through the polymer in the electric field towards the appropriate electrodes.
Photoconductive polymers act merely as charge-transporting media and can be p-type or n-type ; however, most known photoconductive polymers are p-type (they transport only holes). The photocurrents usually observed in organic compounds are very small. The mobilities μ are typically 10 −12 –10 −18 m 2 V −1 s −1 , and photocurrents are usually affected by charge-carrier generation, injection and transport.
Photoconductive polymers have developed into two main types: those showing negative photoconductivity and those showing magnetic photoconductivity. Photoconductive polymers have greatly enriched the range of photoconductive materials, and they have many applications (e.g. xerography, laser printers).
Some materials exhibit decrease in photoconductivity upon exposure to illumination. One prominent example is hydrogenated amorphous silicon in which a metastable reduction in photoconductivity is observable. [ 2 ] Other materials that were reported to exhibit negative photoconductivity include molybdenum disulfide, [ 3 ] graphene, [ 4 ] and metal nanoparticles. [ 5 ]
When light is absorbed by a material, the number of free electrons and electron holes increases and raises its electrical conductivity. [ 6 ] To cause excitation, the light that strikes the material must have enough energy to raise electrons across the band gap, or to excite the impurities within the band gap. This involves four kinds of processes: charge-carrier generation, charge injection, charge trapping, and charge-carrier transport.
Charge-carrier generation can be affected by several factors: the photons absorbed, the polymer itself, and the photoexcitation of a photosensitive material. The mechanism for intrinsic photogeneration is as illustrated. [ 7 ]
Onsager originally developed this theory as follows: [ 8 ]
The encounter complex is formed by photoexcitation followed by migration of the exciton to an acceptor site. The photogeneration efficiency is determined by the competition between carrier separation and geminate recombination . The photogeneration efficiency was defined using the dissociation of ion pairs in weak electrolytes , and can be expressed as a function of the electric field, the temperature and the separation distance of the bound hole–electron pair. [ 9 ] The overall photogeneration efficiency ϕ ( E ) {\displaystyle \phi (E)} is given by
ϕ ( E ) = ϕ 0 ∫ p ( r , Θ , E ) g ( r , Θ ) d 3 r {\displaystyle \phi (E)=\phi _{0}\int p(r,\Theta ,E)g(r,\Theta )d^{3}r}
d 3 r {\displaystyle d^{3}r} is a volume element, ϕ 0 {\displaystyle \phi _{0}} is the primary quantum yield, p ( r , Θ , E ) {\displaystyle p(r,\Theta ,E)} is the probability that a hole–electron pair is separated by a distance r {\displaystyle r} at an angle Θ {\displaystyle \Theta } to the direction of the electric field E {\displaystyle E} , and g ( r , Θ ) {\displaystyle g(r,\Theta )} is the spatial distribution function between ions.
Efficient injection of charge into the transport layer plays an important role in the operation of devices with a photogeneration layer.
Under quasi-steady-state conditions, the carrier density obeys the following equation:
d n d t = ϕ I − γ r n 2 − γ i n ≈ 0 {\textstyle {\frac {dn}{dt}}=\phi I-\gamma _{r}n^{2}-\gamma _{i}n\thickapprox 0}
ϕ I {\displaystyle \phi I} is the rate at which incident photons are absorbed in the photogeneration layer, γ r n 2 {\displaystyle \gamma _{r}n^{2}} is the rate at which the density of free carriers in the generation layer is reduced by recombination, and γ i n {\displaystyle \gamma _{i}n} is the injection rate.
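The quasi-steady-state balance above can be solved explicitly for the free-carrier density n as the positive root of a quadratic. A minimal sketch, with all rate constants chosen arbitrarily for illustration:

```python
# Sketch: solving the quasi-steady-state balance
#   phi*I - gamma_r*n^2 - gamma_i*n = 0
# for the free-carrier density n (positive root of the quadratic).
# All rate constants are arbitrary illustrative values.
import math

def steady_state_density(phi_I, gamma_r, gamma_i):
    """Positive root of gamma_r*n^2 + gamma_i*n - phi_I = 0."""
    return (-gamma_i + math.sqrt(gamma_i**2 + 4.0 * gamma_r * phi_I)) / (2.0 * gamma_r)

PHI_I, GAMMA_R, GAMMA_I = 1e20, 1e-12, 1e6
n = steady_state_density(PHI_I, GAMMA_R, GAMMA_I)
residual = PHI_I - GAMMA_R * n**2 - GAMMA_I * n  # should vanish at steady state
print(f"n = {n:.4g}, residual/phi_I = {residual / PHI_I:.1e}")
```

Substituting the root back into the balance gives a residual that vanishes up to floating-point error, confirming the algebra.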
Assuming that charges crossing the interface do not return, the photoinjection efficiency Υ {\displaystyle \Upsilon } can be defined as [ 10 ]
Υ = γ i n I = δ [ ( 1 + 2 ϕ δ ) 1 2 − 1 ] {\displaystyle \Upsilon ={\frac {\gamma _{i}n}{I}}=\delta [(1+{\frac {2\phi }{\delta }})^{\frac {1}{2}}-1]} , where δ = γ i 2 / 2 I γ r {\displaystyle \delta =\gamma _{i}^{2}/2I\gamma _{r}}
(i) For a large δ {\displaystyle \delta } or low recombination rates, Υ = ϕ {\displaystyle \Upsilon =\phi } , in which case the photoinjection efficiency is determined by the generation efficiency.
(ii)For a small δ {\displaystyle \delta } or high recombination rates, Υ = ( 2 δ ϕ ) 1 2 {\displaystyle \Upsilon =(2\delta \phi )^{\frac {1}{2}}} and the photoinjection efficiency will depend on the injection rate, γ i {\displaystyle \gamma _{i}} .
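These two limiting regimes can be checked numerically from the defining expression for Υ. A short sketch, with an assumed primary quantum yield and arbitrary δ values:

```python
# Numerical check of the two limiting regimes of the photoinjection
# efficiency Y = delta * ((1 + 2*phi/delta)**0.5 - 1); phi is an assumed
# primary quantum yield, delta values are arbitrary.
import math

def injection_efficiency(phi, delta):
    return delta * (math.sqrt(1.0 + 2.0 * phi / delta) - 1.0)

phi = 0.5
y_large = injection_efficiency(phi, delta=1e6)   # low recombination
y_small = injection_efficiency(phi, delta=1e-6)  # high recombination
print(y_large)                              # approaches phi itself
print(y_small, math.sqrt(2 * 1e-6 * phi))   # approaches sqrt(2*delta*phi)
```

Expanding the square root for large δ shows why the first limit equals ϕ: (1 + 2ϕ/δ)^(1/2) ≈ 1 + ϕ/δ, so Υ ≈ δ·(ϕ/δ) = ϕ.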
Charge transport can be defined as the process by which photogenerated charge in the photoconductor is injected into the transport material. Once injected, charges migrate through the medium until they reach the opposite electrode. This process, for electrons or holes or both, involves 'hopping': a sequence of charge transfers among localized sites. [ 11 ] These localized sites are associated with individual functional groups or segments of the polymer chain.
Hole injection, the transfer of holes into the transporting medium, can be regarded as an oxidation step that generates cation radicals; electron injection, by contrast, is a reduction process.
Based on the properties of charge transport, photoconductive polymers usually satisfy one of the following features:
(1)Photoconductive polymers are σ-conjugated .
(2)Photoconductive polymers have an extended π-electron system in the backbone or pendant to the chain.
These features guarantee delocalization and stabilize the transported charge.
Charge trapping is an important process in which migrating charges can be immobilized at trap sites. If the traps are 'shallow', they may be referred to as 'transport-interactive'. [ 12 ] Hole-trapping materials usually have lower oxidation potentials than the host-transporting materials. Stronger electron acceptors are better able to trap transported electrons.
Charges can be immobilized by redox-irreversible side-reactions, in addition to geminate recombination and recombination of carriers in the circuit. The fate of a charged moiety can be illustrated by the scheme:
(a) The redox steps to achieve trap-free migration of a hole involving neutral groups M and charged groups M+
(b) The intermediate species M j + can undergo two kinds of process:
(i) Migration of an electron from a neighboring group M k converts M j + back into M j
(ii) M j + undergoes a side-reaction leading to a charged species X + that no longer exchanges charge with the neighboring groups M. [ 13 ]
There are three key parameters of photoconductive polymers: the quantum efficiency of photogeneration ϕ {\displaystyle \phi } , the carrier mobility μ {\displaystyle \mu } and the injection efficiency Υ {\displaystyle \Upsilon } . These parameters cannot be obtained from steady-state measurements; ϕ {\displaystyle \phi } and μ {\displaystyle \mu } , which are very important in the expression for photoconductivity, are obtained from independent experiments. [ 14 ]
Time-of-flight (TOF) [ 16 ] and xerographic discharge [ 17 ] are conventional transient techniques [ 15 ] used to determine the parameters of photoconductive polymers; both must be carried out with non-injecting contacts.
The charges are generated in a region close to the electrode where the incident photons are absorbed. In order to observe the migrating charges as a current pulse, RC must be smaller than t T r {\displaystyle {t_{Tr}}} (RC < t T r {\displaystyle {t_{Tr}}} , R: resistance, C: capacitance and t T r {\displaystyle {t_{Tr}}} : transit time of the charges). Without excessive charge dispersion, the signal is a rectangular pulse with an amplitude i 0 {\displaystyle i_{0}} , expressed as below:
i 0 = e ϕ N t T r {\displaystyle i_{0}={\frac {e\phi N}{t_{Tr}}}} , where e {\displaystyle e} is the electronic charge and N is the number of absorbed photons
The current drops to zero when the charges reach the electrode, so the carrier mobility can be expressed as below:
μ = L E t T r {\displaystyle \mu ={\frac {L}{Et_{Tr}}}} , where L {\displaystyle L} is the thickness of the film.
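A minimal worked example of these two TOF relations follows. Every experimental value (film thickness, field, transit time, photon count, quantum yield) is an assumption chosen purely for illustration.

```python
# Worked example of the two TOF relations above. All experimental values
# (film thickness, field, transit time, photon count, quantum yield) are
# assumptions chosen purely for illustration.
E_CHARGE = 1.602e-19  # elementary charge, C

def tof_mobility(thickness, field, t_tr):
    """mu = L / (E * t_Tr), in m^2 V^-1 s^-1."""
    return thickness / (field * t_tr)

def plateau_current(phi, n_photons, t_tr):
    """i0 = e * phi * N / t_Tr for N absorbed photons, quantum yield phi."""
    return E_CHARGE * phi * n_photons / t_tr

mu = tof_mobility(thickness=10e-6, field=1e5, t_tr=1e-3)
i0 = plateau_current(phi=0.1, n_photons=1e10, t_tr=1e-3)
print(f"mu = {mu:.3g} m^2/(V s), i0 = {i0:.3g} A")
```

In a real TOF experiment the transit time is read off as the kink in the current transient, then inserted into μ = L/(E·t_Tr) as above.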
In the xerographic technique, the corona-deposited charge plays the same role as the semitransparent electrode. The potential difference is monitored by a coupled probe. In the absence of charge trapping, the rate of potential decay has the form:
d V d t ∝ I ′ ϕ C {\displaystyle {\frac {dV}{dt}}\propto {\frac {I'\phi }{C}}} , where C {\displaystyle C} is the capacitance and I ′ {\displaystyle I'} is the number of absorbed photons per unit area per unit time.
By measuring the decay rate of the potential, μ {\displaystyle \mu } and ϕ {\displaystyle \phi } can be obtained.
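As a hedged sketch, if the proportionality above is taken as the equality dV/dt = eI′ϕ/C (an assumption made here for illustration), the photogeneration efficiency can be extracted from a measured decay rate:

```python
# Hedged sketch: if the proportionality is taken as the equality
#   dV/dt = e * I' * phi / C   (an assumption made here for illustration),
# the photogeneration efficiency phi follows from a measured decay rate.
# All numerical values are invented for the example.
E_CHARGE = 1.602e-19  # elementary charge, C

def photogeneration_efficiency(dv_dt, capacitance, photon_flux):
    """phi = C * (dV/dt) / (e * I'), I' in photons per m^2 per second."""
    return capacitance * dv_dt / (E_CHARGE * photon_flux)

# 100 V/s decay, 1e-4 F/m^2 areal capacitance, 1e17 photons m^-2 s^-1
phi = photogeneration_efficiency(dv_dt=100.0, capacitance=1e-4, photon_flux=1e17)
print(f"phi = {phi:.3g}")
```

For these invented inputs the extracted efficiency comes out below unity, as a physical quantum yield must.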
Photoconductive polymers have been successfully applied in xerography and laser printers, which use a layered organic photoconductor with a polymeric charge-transport layer. The charge-transport layer is a solid solution, in contrast to printing processes that use liquid chemicals. The main advantages of organic photoconductive polymers are (i) near-IR sensitivity, (ii) panchromaticity, (iii) flexibility for application, (iv) simple fabrication and (v) low cost. Currently, the best organic photoconductive polymers are as sensitive as inorganic devices based on selenium.
Photoconductive polymers also have potential applications in photovoltaic cells. The limitation of this application is that photoconductive polymers do not have a high conversion efficiency.
Some possible applications have been reported in the literature but have no commercial products: photothermoplastic imaging, [ 18 ] holographic recording [ 19 ] and optical switching devices. [ 20 ]
Xerography , or electrophotography, is a photocopying technique used for high-quality printing. Its fundamental principle was invented by Chester Carlson in 1938 and was developed and commercialized by the Xerox Corporation. [ 21 ] The technique was initially called electrophotography and was later renamed xerography. Traditional reproduction techniques involve liquid chemicals in the printing process, whereas xerography uses photoconductive polymers, which are solid chemicals, as the foundation material. [ 22 ]
Carlson's innovation combined electrostatic printing with photography, unlike the electrostatic printing process invented by Georg Christoph Lichtenberg in 1778. Carlson's original process required several manual processing steps with flat plates. Almost 18 years passed before a fully automated process was developed, the key breakthrough being the use of a cylindrical drum coated with selenium instead of a flat plate. This resulted in the first commercial automatic copier, the Xerox 914, [ 23 ] in 1960.
Before 1960, Carlson had proposed his idea to more than a dozen companies, but none was interested. Xerography is now used in most photocopying machines, laser and LED printers. [ 24 ]
Laser printing is an electrostatic digital printing process. [ 25 ] It produces high-quality text and graphics by repeatedly passing a laser beam back and forth over a negatively charged cylinder called a "drum" to form a charged image. [ 26 ] The drum selectively collects electrically charged powdered ink (toner) and transfers the image to paper.
Like digital photocopiers , laser printers employ a xerographic printing process. However, laser printing differs from analog photocopying because the image is produced by direct scanning of the medium across the printer's photoreceptor, which enables laser printers to copy images more quickly than most photocopiers. [ 27 ]
The first laser printer was developed at Xerox PARC in the 1970s. Laser printers were introduced for the office and then home markets in subsequent years by IBM , Canon , Xerox, Apple, Hewlett-Packard and others. [ 28 ] Over the decades, quality and speed have increased as prices have fallen, and the once cutting-edge printing devices are now ubiquitous. [ 29 ]
Photoconductivity is an optical and electrical phenomenon in which a material becomes more electrically conductive due to the absorption of electromagnetic radiation such as visible light , ultraviolet light, infrared light, or gamma radiation . [ 1 ]
When light is absorbed by a material such as a semiconductor , the number of free electrons and holes increases, resulting in increased electrical conductivity . [ 2 ] To cause excitation, the light that strikes the semiconductor must have enough energy to raise electrons across the band gap , or to excite the impurities within the band gap. When a bias voltage and a load resistor are used in series with the semiconductor, a voltage drop across the load resistor can be measured when the change in electrical conductivity of the material varies the current through the circuit.
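The series bias-voltage and load-resistor readout described above behaves as a simple voltage divider. A minimal sketch with illustrative component values (all assumed, not from the source):

```python
# Sketch of the series readout described above: the photoconductor and a
# load resistor form a voltage divider, so a light-induced drop in the
# photoconductor's resistance raises the voltage across the load.
# Component values are illustrative assumptions.
def load_voltage(v_bias, r_load, r_photo):
    """Voltage measured across the load resistor in a series divider."""
    return v_bias * r_load / (r_load + r_photo)

V_BIAS, R_LOAD = 5.0, 10e3                           # 5 V supply, 10 kOhm load
dark = load_voltage(V_BIAS, R_LOAD, r_photo=1e6)     # high resistance in dark
light = load_voltage(V_BIAS, R_LOAD, r_photo=5e3)    # resistance drops in light
print(f"dark: {dark:.3f} V, light: {light:.3f} V")
```

The large swing between the dark and illuminated readings is what makes this arrangement a practical light detector.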
Classic examples of photoconductive materials include:
Molecular photoconductors include organic, [ 6 ] inorganic, [ 7 ] and – more rarely – coordination compounds. [ 8 ] [ 9 ]
When a photoconductive material is connected as part of a circuit, it functions as a resistor whose resistance depends on the light intensity . In this context, the material is called a photoresistor (also called light-dependent resistor or photoconductor ). The most common application of photoresistors is as photodetectors , i.e. devices that measure light intensity. Photoresistors are not the only type of photodetector—other types include charge-coupled devices (CCDs), photodiodes and phototransistors —but they are among the most common. Some photodetector applications in which photoresistors are often used include camera light meters, street lights, clock radios, infrared detectors , nanophotonic systems and low-dimensional photo-sensors devices. [ 10 ]
Sensitization is an important engineering procedure to amplify the response of photoconductive materials. [ 3 ] The photoconductive gain is proportional to the lifetime of photo-excited carriers (either electrons or holes). Sensitization involves intentional impurity doping that saturates native recombination centers with a short characteristic lifetime, and replacing these centers with new recombination centers having a longer lifetime. This procedure, when done correctly, results in an increase in the photoconductive gain of several orders of magnitude and is used in the production of commercial photoconductive devices. The text by Albert Rose is the work of reference for sensitization. [ 11 ]
Some materials exhibit deterioration in photoconductivity upon exposure to illumination. [ 12 ] One prominent example is hydrogenated amorphous silicon (a-Si:H) in which a metastable reduction in photoconductivity is observable [ 13 ] (see Staebler–Wronski effect ). Other materials that were reported to exhibit negative photoconductivity include ZnO nanowires , [ 14 ] molybdenum disulfide , [ 15 ] graphene , [ 16 ] indium arsenide nanowires , [ 17 ] decorated carbon nanotubes, [ 18 ] and metal nanoparticles . [ 19 ]
Under an applied AC voltage and upon UV illumination, ZnO nanowires exhibit a continuous transition from positive to negative photoconductivity as a function of the AC frequency. [ 14 ] ZnO nanowires also display a frequency-driven metal-insulator transition at room temperature. The responsible mechanism for both transitions has been attributed to a competition between bulk conduction and surface conduction. [ 14 ] The frequency-driven bulk-to-surface transition of conductivity is expected to be a generic character of semiconductor nanostructures with the large surface-to-volume ratio .
In 2016 it was demonstrated that a magnetic order can exist in some photoconductive materials. [ 20 ] One prominent example is CH 3 NH 3 (Mn:Pb)I 3 . In this material a light-induced magnetization melting was also demonstrated, [ 20 ] so it could be used in magneto-optical devices and data storage.
The characterization technique called photoconductivity spectroscopy (also known as photocurrent spectroscopy ) is widely used in studying optoelectronic properties of semiconductors. [ 21 ] [ 22 ]
A photocyte is a cell specialized for producing light ( bioluminescence ) through enzyme-catalyzed reactions. [ 1 ] Photocytes typically occur in select layers of epithelial tissue, functioning singly or in a group, or as part of a larger apparatus (a photophore ). They contain special structures called photocyte granules. These specialized cells are found in a range of multicellular animals, including coelenterates ( cnidarians and ctenophores ), annelids , arthropods (including insects ) and fishes . Although some fungi are bioluminescent, they do not have such specialized cells. [ 1 ]
Nerve impulses may first trigger light production by stimulating the photocyte to release the enzyme luciferase into a "reaction chamber" of luciferin substrate. In some species, the release occurs continually without the precursor impulse via osmotic diffusion . Molecular oxygen is then actively gated through surrounding tracheal cells which otherwise limit the natural diffusion of oxygen from blood vessels; the resulting reaction of oxygen gas with the luciferase and luciferin produces light energy and a byproduct (usually carbon dioxide ). [ 1 ] The reaction occurs in the peroxisome of the cell. [ 2 ]
Researchers once postulated that ATP was the source of reaction energy for photocytes, but since ATP only produces a fraction of the energy of the luciferase reaction, any resulting light wave-energy would be too small for detection by a human eye. The wavelengths produced by most photocytes fall close to 490 nm, although light as energetic as 250 nm is reportedly possible. [ 1 ]
The variations of color seen in different photocytes are usually the result of color filters in other parts of the photophore that alter the wavelength of the light prior to exiting the endoderm . The range of colors varies between bioluminescent species.
The exact combinations of luciferase and luciferin types found among photocytes are specific to the species to which they belong. This appears to be the result of consistent evolutionary divergence. [ 1 ]
Light production in Photuris pennsylvanica larvae occurs in the roughly 2,000 photocytes located in the heavily innervated light organ of the insect, which is much simpler than that of the adult organism. [ 3 ] The transparent photocytes of the larvae are clearly distinguishable from the opaque dorsal-layer cells that cover them. Nervous and intracellular mechanisms contribute to light production in the photocytes. It has been shown that fireflies can modify the amount of oxygen that travels through their tracheal system to the light organ, which plays a role in oxygen availability for light production. They do this by modifying the amount of fluid present within the tracheal system. Because oxygen diffuses more slowly through water than in gaseous form, this allows fireflies to effectively change the amount of oxygen reaching the photocytes. [ 4 ] Spiracles can be opened and closed to control the amount of air that passes through the tracheal system, but this control mechanism is only used as a response to a stressor. [ 5 ]
Research has shown that applying 5 to 15 volts of electricity for 50 ms to the segmental nerve that innervates the light organ leads, 1.5 seconds later, to a glow that lasts for five to ten seconds. Stimulation of the segmental nerve has been found to produce several different nerve impulses, and the frequency of nervous impulses has been found to be proportional to the intensity of the stimulus applied. A high frequency of nervous impulses was found to lead to a constant latency. The light organ is inactive in the absence of nerve impulses. Constant nerve signaling was shown to coincide with constant emission of light from the light organ, with a higher frequency coinciding with a higher amplitude of light emitted, up to 30 impulses per second. Impulses beyond this frequency were not found to be associated with a more intense glow. The fact that the frequency of nerve impulses could exceed that required for maximum light emission suggests some limitation in the mechanism, arising either from the synapse or from the cell's light-producing process. Additionally, a series of action potentials has been shown to lead to sporadic, discontinuous emission of light. It was also found that a higher frequency of action potentials led to a higher likelihood of any emission of light. Nerve impulses are associated with a depolarization of the photocyte, which plays a role in its light-emitting mechanism, and greater depolarization events were found to be associated with more intense light emission. The nerve innervating the light organ containing photocytes has only two axons , but they branch repeatedly, allowing the numerous photocytes to be innervated, with each cell being associated with several nerve terminals and each terminal possibly being associated with several synapses. [ 3 ]
The junction at the end of the neuron innervating the light organ was found to differ from the kind of junction found between two neurons or between neurons and muscles in the neuromuscular junction . The depolarization of the photocyte following nervous stimulation was found to be one hundred times slower than with the other two kinds of junction, and this slow response cannot be attributed to the rate of diffusion because the synapse between the neuron and photocyte is relatively small. [ 3 ] It has been found that the neurons that control the light mechanism terminate at the tracheal cells rather than at the photocytes themselves. [ 4 ]
The resting potential of photocytes was found to lie in a range between 50 and 65 millivolts. It is generally accepted that the emission of light occurs after depolarization of the photocyte membrane, although some have argued that the depolarization follows the emission of light. The depolarization of the membrane results in an increased rate of diffusion of ions across it. The depolarization of the photocyte was found to begin 0.5 seconds after the nervous impulse, culminating at one second with the maximum degree of depolarization. A higher frequency of nervous stimulation was associated with a smaller depolarization event. Exposure to neurotransmitters including epinephrine , norepinephrine , and synephrine results in the emission of light but without any corresponding depolarization of the photocyte membrane. [ 3 ]
Photocytes are distributed unevenly near the plate cilia cells. Gastric cells form a barrier that keeps the photocytes away from the opening of the radial canal, along which they are found. [ 6 ]
Light production in Porichthys notatus has been found to be triggered through an adrenergic mechanism. The sympathetic nervous system of the fish is responsible for triggering bioluminescence in the photocytes. In response to being triggered by norepinephrine , epinephrine , or phenylephrine , the photocyte exhibits a quick flash and then emits light that slowly fades in intensity. Stimulation by isoproterenol was found to cause only a slowly fading illumination. The amplitude of the quick flash, referred to as the "fast response", was higher when the concentration of the neurotransmitter stimulating it increased. A great deal of variation in luminescence was exhibited in the photocytes of different fish. Variation also existed depending on what time of year the photocytes were collected from the fish. Stimulation by phenylephrine was found to produce a less intense response than that of epinephrine or norepinephrine. Phentolamine was shown to inhibit the effect of stimulation by phenylephrine completely, and that of epinephrine and norepinephrine to a lesser degree. Clonidine was shown to have an inhibitory effect on the fast response but no effect on the slow response. [ 7 ] The photocytes of Porichthys are known to be extensively innervated.
Mechanical stimulation of spines on the arm can cause Amphiura filiformis to bioluminesce in the blue range. The species has been found to possess a luciferase compound. The luciferase has been localized to clusters of photocytes at the tips of the arms and around the spines. Cells believed to be photocytes have been found around the spine nerve plexus, mucous cells, and cells believed to be pigment cells. It has been found that luminescence is controlled by the animal's nervous system. Acetylcholine is able to stimulate the cells through nicotinic receptors . [ 8 ]
In Amphipholis squamata , bioluminescence has been observed to come from the spines emanating from the arms from photocytes within the spinal ganglia. Acetylcholine has been found to be able to stimulate the photocytes to produce light. [ 9 ]
It was discovered that bioluminescent snails are able to exercise a great deal of control over light emission, though the mechanism of this control is still unknown. Phuphania have even been shown to preserve their ability to produce light after long periods of hibernation. It is currently unknown how these snails maintain their ability to produce light for long periods of time, but theories have been proposed, possibly relating it to the way certain fungi maintain their bioluminescence. [ 10 ]
Adrenaline stimulates photocytes to emit light for many species of fish. It is believed that sympathetic nervous impulses provide the stimulus that causes photocytes to emit light. [ 11 ]
For Mnemiopsis leidyi , the ability to produce light is first observed upon the development of the plate cilia cells, and the bioluminescent cells found in the embryo share many characteristics with the photocytes observed in the adult organism. The M macromere lineage of cells differentiates into photocytes, separating from other lineages of cells in the differential division. The subsequent maturation of the photocytes and intensification of the light produced develop rapidly, occurring within ten hours of the first observed instance of bioluminescence. The egg of the organism contains two cytoplasmic regions, cortical and yolky, and the region of cytoplasm that daughter cells receive when the egg divides determines what they differentiate into. It was found that whether cortical cells exhibited bioluminescence depended on whether they inherited yolk in their cytoplasm, with the cells containing yolk producing light and the cells without yolk not producing any light. [ 6 ]
Luciferins have been shown to be largely conserved among different species while luciferases show a greater degree of diversity. Eighty percent of the species that exhibit bioluminescence exist in aquatic habitats. [ 12 ]
Overall, the evolution of light-producing cells (photocytes) is believed to have happened twice in sharks, through convergence . Evidence suggests that the bioluminescent properties of the shark Etmopterus spinax came about as a mechanism of camouflage . Luminescence is thought to have other functions as well, since camouflage is not a logical explanation for the luminescence on the lateral sides of the shark. [ 13 ] Among the cartilaginous fishes , bioluminescence is believed to have evolved only in sharks. The function of bioluminescence among sharks has not been fully ascertained. [ 12 ]
All five families of luminescent beetle, Phengodidae , Rhagophthalidae , Elateridae , Sinopyrophoridae, and Lampyridae , are categorized into the Lampyroid clade . The luciferases and luciferin expressed in the photocytes of all species of firefly have been determined to be homologous with those expressed in beetle species within the families Phengodidae , Rhagophthalidae , and Elateridae . In fact, every bioluminescent beetle species studied has been shown to use very similar mechanisms for light production in the photocyte. The beetle family Sinopyrophoridae has been shown to exhibit bioluminescence, although the exact mechanism is not known; it is believed to share homology with the other beetle families, however. The entire genome of a bioluminescent beetle was first determined in 2017 with Pyrocoelia pectoralis, a species of firefly, and in 2018 three more species of bioluminescent beetle had their genomes sequenced. Bioluminescence in beetles has been shown to serve multiple purposes, including the deterrence of predators and the attraction of mates. [ 2 ]
The variation in coloring among different species of firefly has been determined to be due to differences in the amino acid sequences of the luciferases expressed in their photocytes. Two luciferase genes have been identified in the genomes of fireflies: luc1-type and luc2-type. There is evidence suggesting that the luc1-type evolved from a duplication of the gene that encodes acyl-CoA synthetase . The luciferase of click beetles is hypothesized to have evolved separately from that of fireflies, as the result of two duplications of the acyl-CoA synthetase gene, suggesting analogy instead of homology between the groups. Additional genes have been found to be related to the storage of luciferin. [ 2 ]
Bioluminescence in Amphiura filiformis and other brittle stars is widely believed to function as protection against predators: by attracting a predator to one arm and shedding that arm, the animal is able to escape predation. [ 8 ]
Fish generally use bioluminescence for camouflage to hide from predators. Endogenous photocytes are more commonly used for bioluminescence than other means like bacteria. Some fish may use the bioluminescence produced by their photocytes as a means of communication. [ 14 ]
Bioluminescence has only been observed in three classes of mollusks : Cephalopoda , Gastropoda , and Bivalvia . Bioluminescence is widespread among cephalopods, but much rarer among the other classes of mollusk. Most species of bioluminescent mollusk that have been discovered are found in the ocean, with the exception of the genera Latia and Quantula , found in freshwater and terrestrial habitats respectively; however, more recent research has discovered luminescence in the genus Phuphania . It is hypothesized that terrestrial mollusks that use bioluminescence developed it as a strategy to deter predation. The green color emitted by the mollusk's photocytes is thought to be the most visible color to nocturnal predators. [ 10 ]
The mitochondria are believed to be important in controlling the supply of oxygen available for making light in fireflies. An increased rate of respiration decreases the intracellular oxygen concentration, which reduces the amount available for light production. [ 4 ] The mitochondria of the photocyte lie near the perimeter of the cell, while the peroxisome is typically found closer to the middle of the cell. [ 5 ] It is worth noting that not all bioluminescence in the firefly light organ occurs in the granules of the photocyte; some fluorescent protein has been found in the posterior region of the organ. [ 15 ]
It was found that the luciferase enzyme produced in fireflies is localized to the peroxisome within the photocytes. When mammalian cells were modified to produce the enzyme, it was found that they were targeted to the mammalian peroxisome as well. Because protein targeting to peroxisomes is not well understood, this finding is valuable for its potential to aid in the determination of peroxisome targeting mechanisms. If the cell produces a large amount of luciferase, some of the protein ends up in the cytoplasm. It is unknown what feature of the luciferase enzyme causes it to be targeted to the peroxisome since no particular protein sequences related to peroxisome targeting have been discovered. [ 16 ]
The photocyte of Arachnocampa luminosa was found to contain a circular nucleus and large amounts of ribosomes , smooth endoplasmic reticulum , mitochondria , and microtubules . Instead of having photocyte granules, the photocytes of the organism were shown to carry out the luciferase reaction in their cytoplasm . The cells do not have a Golgi apparatus or rough endoplasmic reticulum and were found to measure 250 by 120 micrometers overall, with a depth of 25 to 30 micrometers. [ 17 ]
The photocytes of Renilla köllikeri were found to have a diameter of eight to ten micrometers. The mitochondria of the photocytes were found to be very large, with abnormally organized cristae, surrounding the nucleus of the cell. The rough endoplasmic reticulum of the photocytes was found to lie close to the cell membrane. Several small vesicles, on the order of 0.25 micrometers, were found in the cell, and differently shaped granules containing diverse contents were also observed. [ 18 ]
The photocytes present in Amphipholis squamata have been found to contain a Golgi apparatus and rough endoplasmic reticulum. They have also been found to contain up to six different kinds of vesicles within their cytoplasm. [ 9 ]
Signal transduction pathways in the firefly photocyte have been hypothesized to decrease the activity of the mitochondria in order to make oxygen available for light production. Because the neurons that control the lighting mechanism of the photocytes terminate at the tracheal cells instead of the photocytes, some process must mediate the transfer of the signal to them. Nitric oxide is believed to play this role, partly because it has already been implicated in a plethora of signaling roles in tissues across several diverse clades of animal, including insects. In fact, nitric oxide concentrations on the order of 70 ppm have been found to result in flashing in fireflies, and carboxy-PTIO, a nitric oxide scavenger, has been shown to inhibit the response. Additionally, the tracheolar end organ was found to contain a high concentration of the enzyme nitric oxide synthase. Nitric oxide has been implicated in decreasing respiration in the mitochondria. This effect on the mitochondria has been found to be influenced by surrounding light conditions, with more light decreasing the action of nitric oxide on the mitochondria and less light increasing it. In addition to ambient light, the light produced by the photocytes themselves can also inhibit the effect of nitric oxide. [ 4 ] The photocytes have been described as containing a vacuole that plays a role in signaling with the extracellular environment. [ 19 ] Octopamine has been found to trigger an adenylate cyclase that plays a role in triggering bioluminescence in firefly photocytes. A reaction among D-luciferin, luciferase, and ATP has been implicated in the mechanism of light production in firefly photocytes. The fluorescent response was also found to be greater in basic conditions than in acidic conditions. [ 15 ]
The shape of the photocyte granules ranges from more round to more elliptical, and there are three types of photocyte granules. The bioluminescent reaction is confined to the granules. The granules range from 0.6 to 2.5 micrometers in the larval photocytes of Photuris pennsylvanica and between 2.5 and 4.5 micrometers in the adult photocytes of the Asiatic firefly. The size and shape of photocytes can exhibit a great deal of diversity across the species in which they are found. The different types of granules have been observed together within individual photocytes. [ 19 ] The illumination of the photocytes is confined to the granules where the reaction occurs. [ 15 ]
The first type of photocyte granule has been found to contain between two and twelve microtubules. In addition, the matrix of the type I granule lacks a uniform shape or structure with ferritin distributed throughout. [ 19 ]
The second type of photocyte granule contains a large crystal surrounded by several small crystals within a matrix with no definite shape or form. The microtubules in the type II granules are associated with the face of the crystal. In addition, ferritin has been found to be associated with the crystals. [ 19 ] Type II granules are hypothesized to exist in Amphiura filiformis photocytes. [ 8 ]
The type III granules are characterized by the fact that they contain several tubules with thick walls. The ferritin present in the granules is associated with filament-like features contained in them. [ 19 ]
Because the compounds that exhibit bioluminescence are typically fluorescent, fluorescence can be used to identify photocytes in organisms. [ 10 ] | https://en.wikipedia.org/wiki/Photocyte |
Photodegradation is the alteration of materials by light. Commonly, the term is used loosely to refer to the combined action of sunlight and air , which cause oxidation and hydrolysis . Often photodegradation is intentionally avoided, since it destroys paintings and other artifacts. It is, however, partly responsible for remineralization of biomass and is used intentionally in some disinfection technologies. Photodegradation does not apply to how materials may be aged or degraded via infrared light or heat, but does include degradation in all of the ultraviolet light wavebands.
The protection of food from photodegradation is very important. Some nutrients, for example, are affected by degradation when exposed to sunlight. In the case of beer , UV radiation causes a process that entails the degradation of hop bitter compounds to 3-methyl-2-buten-1-thiol and therefore changes the taste. As amber-colored glass has the ability to absorb UV radiation, beer bottles are often made from such glass to prevent this process.
Organic paints, inks, and dyes are more susceptible to photodegradation than inorganic ones. Ceramics are almost universally colored with inorganic materials, allowing them to resist photodegradation and retain their color even under the most relentless conditions.
The photodegradation of pesticides is of great interest because of the scale of agriculture and the intensive use of chemicals. Pesticides are, however, selected in part not to photodegrade readily in sunlight, so that they can exert their biocidal activity. Thus, additional measures are implemented to enhance their photodegradation, including the use of photosensitizers, photocatalysts (e.g., titanium dioxide ), and the addition of reagents such as hydrogen peroxide that generate hydroxyl radicals, which attack the pesticides. [ 1 ]
The photodegradation of pharmaceuticals is of interest because they are found in many water supplies. They have deleterious effects on aquatic organisms, including toxicity, endocrine disruption, and genetic damage. [ 2 ] Photodegradation of pharmaceuticals must also be prevented within the primary packaging material. For this, amber glasses such as Fiolax amber and Corning 51-L are commonly used to protect the pharmaceutical from UV radiation. Iodine (in the form of Lugol's solution ) and colloidal silver are universally packaged in containers that let through very little UV light so as to avoid degradation.
Common synthetic polymers that can be attacked include polypropylene and LDPE , where tertiary carbon bonds in their chain structures are the centres of attack. Ultraviolet rays interact with these bonds to form free radicals , which then react further with oxygen in the atmosphere, producing carbonyl groups in the main chain. The exposed surfaces of products may then discolour and crack, and in extreme cases, complete product disintegration can occur.
In fibre products like rope used in outdoor applications, product life will be low because the outer fibres will be attacked first, and will easily be damaged by abrasion for example. Discolouration of the rope may also occur, thus giving an early warning of the problem.
Polymers which possess UV-absorbing groups such as aromatic rings may also be sensitive to UV degradation. Aramid fibres like Kevlar , for example, are highly UV-sensitive and must be protected from the deleterious effects of sunlight.
Many organic chemicals are thermodynamically unstable in the presence of oxygen; however, their rate of spontaneous oxidation is slow at room temperature. In the language of physical chemistry, such reactions are kinetically limited. This kinetic stability allows complex structures to accumulate in the environment. Upon the absorption of light, triplet oxygen converts to singlet oxygen , a highly reactive form of the gas, which effects spin-allowed oxidations. In the atmosphere, organic compounds are degraded by hydroxyl radicals , which are produced from water and ozone. [ 3 ]
Photochemical reactions are initiated by the absorption of a photon, typically in the wavelength range 290–700 nm (at the surface of the Earth). The energy of an absorbed photon is transferred to electrons in the molecule and briefly changes their configuration (i.e., promotes the molecule from a ground state to an excited state ). The excited state represents what is essentially a new molecule. Often excited state molecules are not kinetically stable in the presence of O 2 or H 2 O and can spontaneously decompose ( oxidize or hydrolyze ). Sometimes molecules decompose to produce high energy, unstable fragments that can react with other molecules around them. The two processes are collectively referred to as direct photolysis or indirect photolysis , and both mechanisms contribute to the removal of pollutants.
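The 290–700 nm window can be translated into photon energies with the Planck relation E = hc/λ; a minimal sketch using standard physical constants:

```python
# Photon energy from wavelength: E = h*c / lambda.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt
NA = 6.02214076e23    # Avogadro constant, photons per mole

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

def molar_photon_energy_kj(wavelength_nm):
    """Energy of one mole of photons, in kJ/mol."""
    return H * C / (wavelength_nm * 1e-9) * NA / 1000

for nm in (290, 700):  # edges of the photochemically active window
    print(f"{nm} nm: {photon_energy_ev(nm):.2f} eV/photon, "
          f"{molar_photon_energy_kj(nm):.0f} kJ/mol")
```

At the UV end (~290 nm) a mole of photons carries roughly 410 kJ, comparable to typical covalent bond energies, which is why the ultraviolet wavebands dominate photodegradation.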
The United States federal standard for testing plastic for photodegradation is 40 CFR Ch. I (7–1–03 Edition) PART 238.
Photodegradation of plastics and other materials can be inhibited with polymer stabilizers , which are widely used. These additives include antioxidants , which interrupt degradation processes; typical antioxidants are derivatives of aniline . Another type of additive is the UV absorber. These agents capture the photon and convert it to heat; typical UV absorbers are hydroxy-substituted benzophenones , related to the chemicals used in sunscreen . [ 4 ] Restoration of the yellowed plastic of old toys [ 5 ] is nicknamed retrobright .
Photodisintegration (also called phototransmutation , or a photonuclear reaction ) is a nuclear process in which an atomic nucleus absorbs a high-energy gamma ray , enters an excited state, and immediately decays by emitting a subatomic particle. The incoming gamma ray effectively knocks one or more neutrons , protons , or an alpha particle out of the nucleus. [ 1 ] The reactions are called (γ,n), (γ,p), and (γ,α), respectively.
Photodisintegration is endothermic (energy absorbing) for atomic nuclei lighter than iron and sometimes exothermic (energy releasing) for atomic nuclei heavier than iron . Photodisintegration is responsible for the nucleosynthesis of at least some heavy, proton-rich elements via the p-process in supernovae of type Ib, Ic, or II.
This causes the iron to further fuse into the heavier elements. [ citation needed ]
A photon carrying 2.22 MeV or more energy can photodisintegrate an atom of deuterium :

γ + ²H → ¹H + n
James Chadwick and Maurice Goldhaber used this reaction to measure the proton-neutron mass difference. [ 2 ] This experiment proves that a neutron is not a bound state of a proton and an electron, [ why? ] [ 3 ] as had been proposed by Ernest Rutherford .
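The Chadwick–Goldhaber logic can be checked numerically: the threshold photon energy equals the deuteron binding energy B = m_p + m_n − m_d (neglecting nuclear recoil), so the measured threshold together with the proton and deuteron masses fixes the neutron mass. A sketch with standard mass values:

```python
# gamma + d -> p + n: the threshold photon energy equals the deuteron
# binding energy B = m_p + m_n - m_d (recoil neglected). Rearranged,
# the neutron mass follows from the measured threshold.
M_P = 938.272        # proton mass, MeV/c^2
M_D = 1875.613       # deuteron mass, MeV/c^2
E_THRESHOLD = 2.224  # measured photodisintegration threshold, MeV

m_n = M_D - M_P + E_THRESHOLD
print(f"inferred neutron mass: {m_n:.3f} MeV/c^2")
```

The result, about 939.565 MeV/c², exceeds the proton mass by more than an electron mass, which is the quantitative content of the proton–neutron mass-difference measurement.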
A photon carrying 1.67 MeV or more energy can photodisintegrate an atom of beryllium-9 (100% of natural beryllium, its only stable isotope):

γ + ⁹Be → 2 ⁴He + n
Antimony-124 is assembled with beryllium to make laboratory neutron sources and startup neutron sources . Antimony-124 (half-life 60.20 days) emits β− particles and 1.690 MeV gamma rays (also 0.602 MeV and 9 fainter emissions from 0.645 to 2.090 MeV), yielding stable tellurium-124. The gamma rays from antimony-124 split beryllium-9 into two alpha particles and a neutron with an average kinetic energy of 24 keV (a so-called intermediate neutron in terms of energy). [ 4 ] [ 5 ]
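The 24 keV figure is roughly the excess of the 1.690 MeV gamma line over the beryllium-9 photoneutron threshold; a sketch, assuming a literature value of about 1.665 MeV for the ⁹Be neutron separation energy (kinematic sharing with the alpha particles shifts the average slightly):

```python
# Sb-124/Be-9 photoneutron source: the energy available to the
# fragments is the gamma energy minus the Be-9 photoneutron threshold.
E_GAMMA = 1.690   # MeV, principal Sb-124 gamma line
S_N_BE9 = 1.665   # MeV, assumed Be-9 neutron separation energy

excess_kev = (E_GAMMA - S_N_BE9) * 1000
print(f"~{excess_kev:.0f} keV shared among the fragments")
```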
Other isotopes have higher thresholds for photoneutron production, as high as 18.72 MeV, for carbon-12 . [ 6 ]
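Such thresholds are neutron separation energies, computable from tabulated mass excesses; a sketch using assumed mass-excess values (in MeV) for carbon-12, carbon-11, and the neutron:

```python
# (gamma, n) threshold = neutron separation energy
# S_n = Delta(daughter) + Delta(neutron) - Delta(parent), in MeV.
MASS_EXCESS = {"C12": 0.0, "C11": 10.650, "n": 8.071}  # assumed values

s_n_c12 = MASS_EXCESS["C11"] + MASS_EXCESS["n"] - MASS_EXCESS["C12"]
print(f"carbon-12 photoneutron threshold: {s_n_c12:.2f} MeV")
```

The result, about 18.72 MeV, matches the figure quoted above for carbon-12.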
In explosions of very large stars (250 or more solar masses ), photodisintegration is a major factor in the supernova event. As the star reaches the end of its life, it reaches temperatures and pressures where photodisintegration's energy-absorbing effects temporarily reduce pressure and temperature within the star's core. This causes the core to start to collapse as energy is taken away by photodisintegration, and the collapsing core leads to the formation of a black hole . A portion of mass escapes in the form of relativistic jets , which could have "sprayed" the first metals into the universe. [ 7 ] [ 8 ]
Terrestrial lightning produces high-speed electrons that create bursts of gamma rays as bremsstrahlung . The energy of these rays is sometimes sufficient to start photonuclear reactions resulting in emitted neutrons. One such reaction, ¹⁴N(γ,n)¹³N, is the only natural process other than those induced by cosmic rays in which ¹³N is produced on Earth. The unstable isotopes remaining from the reaction may subsequently emit positrons by β + decay . [ 9 ]
Photofission is a similar but distinct process, in which a nucleus, after absorbing a gamma ray, undergoes nuclear fission (splits into two fragments of nearly equal mass). | https://en.wikipedia.org/wiki/Photodisintegration |
Photodissociation , photolysis , photodecomposition , or photofragmentation is a chemical reaction in which molecules of a chemical compound are broken down by absorption of light or photons . It is defined as the interaction of one or more photons with one target molecule that dissociates into two fragments. [ 1 ]
Here, “light” is broadly defined as radiation spanning the vacuum ultraviolet (VUV) , ultraviolet (UV) , visible , and infrared (IR) regions of the electromagnetic spectrum . To break covalent bonds , photon energies corresponding to visible, UV, or VUV light are typically required, whereas IR photons may be sufficiently energetic to detach ligands from coordination complexes or to fragment supramolecular complexes. [ 2 ]
Photolysis is part of the light-dependent reaction , light phase, photochemical phase, or Hill reaction of photosynthesis . The general reaction of photosynthetic photolysis can be given in terms of photons as:

H₂A + 2 photons → 2 e⁻ + 2 H⁺ + A
The chemical nature of "A" depends on the type of organism . Purple sulfur bacteria oxidize hydrogen sulfide ( H 2 S ) to sulfur (S). In oxygenic photosynthesis, water ( H 2 O ) serves as a substrate for photolysis resulting in the generation of diatomic oxygen ( O 2 ). This is the process which returns oxygen to Earth's atmosphere. Photolysis of water occurs in the thylakoids of cyanobacteria and the chloroplasts of green algae and plants. [ 3 ]
The conventional semi-classical model describes the photosynthetic energy transfer process as one in which excitation energy hops from light-capturing pigment molecules to reaction center molecules step-by-step down the molecular energy ladder.
The effectiveness of photons of different wavelengths depends on the absorption spectra of the photosynthetic pigments in the organism. Chlorophylls absorb light in the violet-blue and red parts of the spectrum, while accessory pigments capture other wavelengths as well. The phycobilins of red algae absorb blue-green light which penetrates deeper into water than red light, enabling them to photosynthesize in deep waters. Each absorbed photon causes the formation of an exciton (an electron excited to a higher energy state) in the pigment molecule. The energy of the exciton is transferred to a chlorophyll molecule ( P680 , where P stands for pigment and 680 for its absorption maximum at 680 nm) in the reaction center of photosystem II via resonance energy transfer . P680 can also directly absorb a photon at a suitable wavelength.
Photolysis during photosynthesis occurs in a series of light-driven oxidation events. The energized electron (exciton) of P680 is captured by a primary electron acceptor of the photosynthetic electron transport chain and thus exits photosystem II. In order to repeat the reaction, the electron in the reaction center needs to be replenished. This occurs by oxidation of water in the case of oxygenic photosynthesis. The electron-deficient reaction center of photosystem II (P680⁺) is the strongest biological oxidizing agent yet discovered, which allows it to break apart molecules as stable as water. [ 4 ]
The water-splitting reaction is catalyzed by the oxygen-evolving complex of photosystem II. This protein-bound inorganic complex contains four manganese ions , plus calcium and chloride ions as cofactors. Two water molecules are complexed by the manganese cluster, which then undergoes a series of four electron removals (oxidations) to replenish the reaction center of photosystem II. At the end of this cycle, free oxygen ( O 2 ) is generated and the hydrogen of the water molecules has been converted to four protons released into the thylakoid lumen (Dolai's S-state diagrams). [ citation needed ]
These protons, as well as additional protons pumped across the thylakoid membrane coupled with the electron transport chain, form a proton gradient across the membrane that drives photophosphorylation and thus the generation of chemical energy in the form of adenosine triphosphate (ATP). The electrons reach the P700 reaction center of photosystem I where they are energized again by light. They are passed down another electron transport chain and finally combine with the coenzyme NADP + and protons outside the thylakoids to form NADPH . Thus, the net oxidation reaction of water photolysis can be written as:
2 H₂O + 2 NADP⁺ + 8 photons ⟶ 2 NADPH + 2 H⁺ + O₂
The free energy change (ΔG) for this reaction is 102 kilocalories per mole. Since the energy of light at 700 nm is about 40 kilocalories per mole of photons, approximately 320 kilocalories of light energy are available for the reaction. Therefore, approximately one-third of the available light energy is captured as NADPH during photolysis and electron transfer. An equal amount of ATP is generated by the resulting proton gradient. Oxygen as a byproduct is of no further use to the reaction and is thus released into the atmosphere. [ 5 ]
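The one-third figure is straightforward arithmetic on the round numbers quoted in this paragraph:

```python
# Fraction of absorbed light energy stored as NADPH, using the
# round numbers from the text (kilocalories per mole).
DELTA_G = 102            # free energy stored in 2 NADPH, kcal
KCAL_PER_MOL_700NM = 40  # energy of a mole of 700 nm photons, kcal
N_PHOTONS = 8            # photons per reaction cycle

light_in = N_PHOTONS * KCAL_PER_MOL_700NM  # 320 kcal available
fraction = DELTA_G / light_in
print(f"captured as NADPH: {fraction:.2f} of the absorbed light energy")
```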
In 2007 a quantum model was proposed by Graham Fleming and his co-workers which includes the possibility that photosynthetic energy transfer might involve quantum oscillations, explaining its unusually high efficiency . [ 6 ]
According to Fleming [ 7 ] there is direct evidence that remarkably long-lived wavelike electronic quantum coherence plays an important part in energy transfer processes during photosynthesis, which can explain the extreme efficiency of the energy transfer because it enables the system to sample all the potential energy pathways, with low loss, and choose the most efficient one. This claim has, however, since been proven wrong in several publications. [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ]
This approach has been further investigated by Gregory Scholes and his team at the University of Toronto , which in early 2010 published research results that indicate that some marine algae make use of quantum-coherent electronic energy transfer (EET) to enhance the efficiency of their energy harnessing. [ 13 ] [ 14 ] [ 15 ]
Photoacids are molecules that, upon light absorption, undergo proton transfer to form the photobase.
In these reactions, dissociation occurs in the electronically excited state. After proton transfer and relaxation to the electronic ground state, the proton and the conjugate base recombine to re-form the photoacid.
Photoacids are a convenient source to induce pH jumps in ultrafast laser spectroscopy experiments.
Photolysis occurs in the atmosphere as part of a series of reactions by which primary pollutants such as hydrocarbons and nitrogen oxides react to form secondary pollutants such as peroxyacyl nitrates . See Photochemical smog .
The two most important photodissociation reactions in the troposphere are firstly the photolysis of ozone:

O₃ + hν → O₂ + O(¹D)
which generates an excited oxygen atom that can react with water to give the hydroxyl radical :

O(¹D) + H₂O → 2 OH
The hydroxyl radical is central to atmospheric chemistry as it initiates the oxidation of hydrocarbons in the atmosphere and so acts as a detergent .
Secondly, the photolysis of nitrogen dioxide:

NO₂ + hν → NO + O
is a key reaction in the formation of tropospheric ozone . [ 16 ]
The formation of the ozone layer is also caused by photodissociation. Ozone in the Earth's stratosphere is created by ultraviolet light striking oxygen molecules containing two oxygen atoms ( O 2 ), splitting them into individual oxygen atoms (atomic oxygen). The atomic oxygen then combines with unbroken O 2 to create ozone , O 3 . [ 17 ] In addition, photolysis is the process by which CFCs are broken down in the upper atmosphere to form ozone-destroying chlorine free radicals . [ 18 ]
In astrophysics , photodissociation is one of the major processes through which molecules are broken down (but new molecules are being formed). Because of the vacuum of the interstellar medium , molecules and free radicals can exist for a long time. Photodissociation is the main path by which molecules are broken down. Photodissociation rates are important in the study of the composition of interstellar clouds in which stars are formed.
Examples of photodissociation in the interstellar medium are ( hν is the energy of a single photon of frequency ν ):
Currently, orbiting satellites detect an average of about one gamma-ray burst (GRB) per day. [ 19 ] Because gamma-ray bursts are visible out to distances encompassing most of the observable universe , a volume containing many billions of galaxies, this suggests that gamma-ray bursts must be exceedingly rare events per galaxy. [ 20 ]
Measuring the exact rate of gamma-ray bursts is difficult, but for a galaxy of approximately the same size as the Milky Way , the expected rate (for long GRBs) is about one burst every 100,000 to 1,000,000 years. [ 20 ] Only a few percent of these would be beamed toward Earth. Estimates of rates of short GRBs are even more uncertain because of the unknown beaming fraction, but are probably comparable. [ 21 ]
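The per-galaxy numbers combine into a rough Earth-directed rate; a back-of-envelope sketch (the 5% beaming fraction below is an assumed stand-in for "a few percent"):

```python
# Back-of-envelope Earth-directed long-GRB rate for a Milky-Way-like
# galaxy, from the quoted per-galaxy rate range and beaming fraction.
BEAMED_FRACTION = 0.05  # "a few percent" beamed toward us (assumed 5%)

def years_between_earth_directed_bursts(years_per_burst):
    """Mean wait between bursts that are also beamed toward Earth."""
    return years_per_burst / BEAMED_FRACTION

for years in (100_000, 1_000_000):
    print(f"one burst per {years:,} yr -> Earth-directed one per "
          f"{years_between_earth_directed_bursts(years):,.0f} yr")
```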
A gamma-ray burst in the Milky Way, if close enough to Earth and beamed toward it, could have significant effects on the biosphere . The absorption of radiation in the atmosphere would cause photodissociation of nitrogen , generating nitric oxide that would act as a catalyst to destroy ozone . [ 22 ]
The atmospheric photodissociation

N₂ + hν → 2 N

would yield nitric oxide (NO), the ozone-destroying catalyst described above.
According to a 2004 study, a GRB at a distance of about a kiloparsec could destroy up to half of Earth's ozone layer ; the direct UV irradiation from the burst combined with additional solar UV radiation passing through the diminished ozone layer could then have potentially significant impacts on the food chain and potentially trigger a mass extinction. [ 23 ] [ 24 ] The authors estimate that one such burst is expected per billion years, and hypothesize that the Ordovician-Silurian extinction event could have been the result of such a burst.
There are strong indications that long gamma-ray bursts preferentially or exclusively occur in regions of low metallicity. Because the Milky Way has been metal-rich since before the Earth formed, this effect may diminish or even eliminate the possibility that a long gamma-ray burst has occurred within the Milky Way within the past billion years. [ 25 ] No such metallicity biases are known for short gamma-ray bursts. Thus, depending on their local rate and beaming properties, the possibility for a nearby event to have had a large impact on Earth at some point in geological time may still be significant. [ 26 ]
Single photons in the infrared spectral range are usually not energetic enough for direct photodissociation of molecules. However, after absorption of multiple infrared photons a molecule may gain enough internal energy to overcome its barrier for dissociation. Multiple-photon dissociation (MPD; IRMPD with infrared radiation) can be achieved by applying high-power lasers, e.g. a carbon dioxide laser or a free-electron laser , or by long interaction times of the molecule with the radiation field without the possibility for rapid cooling, e.g. by collisions. The latter method even allows for MPD induced by black-body radiation , a technique called blackbody infrared radiative dissociation (BIRD). | https://en.wikipedia.org/wiki/Photodissociation |
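The photon-counting argument behind IRMPD can be made concrete: a 10.6 µm CO₂-laser photon carries only about 0.12 eV, so tens of photons are needed to climb a typical dissociation barrier. A sketch with an assumed 3 eV barrier (real barriers vary by molecule):

```python
import math

# Climbing a dissociation barrier with many small IR quanta:
# photon energy at the 10.6 um CO2-laser line vs. an assumed barrier.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

photon_ev = H * C / 10.6e-6 / EV   # ~0.117 eV per photon
BARRIER_EV = 3.0                   # assumed dissociation barrier, eV
n_photons = math.ceil(BARRIER_EV / photon_ev)
print(f"{photon_ev:.3f} eV/photon -> at least {n_photons} photons needed")
```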
In astrophysics , photodissociation regions (or photon-dominated regions , PDRs ) are predominantly neutral regions of the interstellar medium in which far ultraviolet photons strongly influence the gas chemistry and act as the most important source of heat. [ 2 ] They constitute a sort of shell around sources of far-UV photons at a distance where the interstellar gas is dense enough, and the flux from the photon source is no longer strong enough, to strip electrons from the neutral constituent atoms. [ 3 ] Despite being composed of denser gas, PDRs still have too low a column density to prevent the penetration of far-UV photons from distant, massive stars . PDRs are also composed of a cold molecular zone that has the potential for star formation. [ 4 ] They achieve this cooling by far-infrared fine line emissions of neutral oxygen and ionized carbon. [ 5 ] It is theorized that PDRs are able to maintain their shape by trapped magnetic fields originating from the far-UV source. [ 6 ] A typical and well-studied example is the gas at the boundary of a giant molecular cloud . [ 2 ] PDRs are also associated with HII regions , reflection nebulae , active galactic nuclei , and Planetary nebulae . [ 7 ] All of a galaxy's atomic gas and most of its molecular gas is found in PDRs. [ 8 ]
The closest PDRs to the Sun are IC 59 and IC 63 , near the bright Be star Gamma Cassiopeiae . [ 9 ]
The study of photodissociation regions began from early observations of the star-forming regions Orion A and M17 which showed neutral areas bright in infrared radiation lying outside ionised HII regions . [ 8 ]
| https://en.wikipedia.org/wiki/Photodissociation_region |
In materials science , photoelasticity describes changes in the optical properties of a material under mechanical deformation . It is a property of all dielectric media and is often used to experimentally determine the stress distribution in a material.
The photoelastic phenomenon was first discovered by the Scottish physicist David Brewster, who immediately recognized it as stress-induced birefringence. [1] [2] That diagnosis was confirmed in a direct refraction experiment by Augustin-Jean Fresnel. [3] Experimental frameworks were developed at the beginning of the twentieth century with the work of E.G. Coker and L.N.G. Filon of the University of London. Their book Treatise on Photoelasticity, published in 1930 by Cambridge University Press, became a standard text on the subject. Between 1930 and 1940, many other books on the subject appeared, including books in Russian, German and French. Max M. Frocht published the classic two-volume work in the field, Photoelasticity. [4] At the same time, much development occurred in the field: great improvements were achieved in technique, and the equipment was simplified. With refinements in the technology, photoelastic experiments were extended to determining three-dimensional states of stress. In parallel with developments in experimental technique, the first phenomenological description of photoelasticity was given in 1890 by Friedrich Pockels. [5] This description was shown to be inadequate almost a century later by Nelson and Lax, [6] because Pockels had considered only the effect of mechanical strain on the optical properties of the material.
With the advent of the digital polariscope – made possible by light-emitting diodes – continuous monitoring of structures under load became possible. This led to the development of dynamic photoelasticity, which has contributed greatly to the study of complex phenomena such as fracture of materials.
Photoelasticity has been used for a variety of stress analyses and even for routine use in design, particularly before the advent of numerical methods such as finite elements or boundary elements. [7] Digitization of polariscopy enables fast image acquisition and data processing, which allows industrial applications such as quality control of manufacturing processes for materials such as glass [8] and polymers. [9] Dentistry uses photoelasticity to analyze strain in denture materials. [10]
Photoelasticity can successfully be used to investigate the highly localized stress state within masonry [11] [12] [13] or in the proximity of a rigid line inclusion (stiffener) embedded in an elastic medium. [14] In the former case, the problem is nonlinear due to the contacts between bricks, while in the latter case the elastic solution is singular, so that numerical methods may fail to provide correct results. These can be obtained through photoelastic techniques. Dynamic photoelasticity integrated with high-speed photography is utilized to investigate fracture behavior in materials. [15] Another important application of photoelasticity experiments is the study of the stress field around bi-material notches. [16] Bi-material notches exist in many engineering applications, such as welded or adhesively bonded structures. [citation needed]
For example, some elements of Gothic cathedrals previously thought to be merely decorative were first shown by photoelastic methods to be essential for structural support. [17]
For a linear dielectric material, the change in the inverse permittivity tensor $\Delta(\varepsilon^{-1})_{ij}$ with respect to the deformation (the gradient of the displacement, $\partial_\ell u_k$) is described by [18]

$$\Delta(\varepsilon^{-1})_{ij} = P_{ijk\ell}\,\partial_\ell u_k$$

where $P_{ijk\ell}$ is the fourth-rank photoelasticity tensor, $u_\ell$ is the linear displacement from equilibrium, and $\partial_\ell$ denotes differentiation with respect to the Cartesian coordinate $x_\ell$. For isotropic materials, this definition simplifies to [19]

$$\Delta(\varepsilon^{-1})_{ij} = p_{ijk\ell}\,s_{k\ell}$$

where $p_{ijk\ell}$ is the symmetric part of the photoelastic tensor (the photoelastic strain tensor) and $s_{k\ell}$ is the linear strain. The antisymmetric part of $P_{ijk\ell}$ is known as the roto-optic tensor. From either definition, it is clear that deformations of the body may induce optical anisotropy, which can cause an otherwise optically isotropic material to exhibit birefringence. Although the symmetric photoelastic tensor is most commonly defined with respect to mechanical strain, it is also possible to express photoelasticity in terms of the mechanical stress.
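As an illustration of the isotropic relation above, the induced birefringence for a simple uniaxial strain state can be computed from just two photoelastic coefficients. The sketch below uses nominal values often quoted for fused silica (n ≈ 1.46, p11 ≈ 0.121, p12 ≈ 0.270); these numbers and the function name are illustrative assumptions, not values from this article.

```python
# Minimal sketch: induced birefringence of an isotropic photoelastic solid
# under uniaxial strain, from the simplified relation
#   d(1/n^2)_ij = p_ijkl * s_kl.
# For s11 = s and s22 = s33 = 0 (Voigt notation):
#   d(1/n^2)_11 = p11 * s,   d(1/n^2)_22 = p12 * s,
# and a small change in 1/n^2 maps to dn = -(n0**3 / 2) * d(1/n^2).

def induced_birefringence(n0, p11, p12, strain):
    """Return (dn_parallel, dn_perpendicular, dn_par - dn_perp)."""
    dn_par = -0.5 * n0**3 * p11 * strain   # index change along the strain axis
    dn_perp = -0.5 * n0**3 * p12 * strain  # index change transverse to it
    return dn_par, dn_perp, dn_par - dn_perp

# Nominal fused-silica values (assumed for illustration):
n0, p11, p12 = 1.46, 0.121, 0.270
dn_par, dn_perp, dn = induced_birefringence(n0, p11, p12, strain=1e-4)
print(f"dn_par = {dn_par:.3e}, dn_perp = {dn_perp:.3e}, birefringence = {dn:.3e}")
```

Because p11 differs from p12, even this optically isotropic glass becomes weakly birefringent under strain, which is exactly the effect a polariscope visualizes.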
The experimental procedure relies on the property of birefringence, as exhibited by certain transparent materials. Birefringence is a phenomenon in which a ray of light passing through a given material experiences two refractive indices. The property of birefringence (or double refraction) is observed in many optical crystals. Upon the application of stresses, photoelastic materials exhibit birefringence, and the magnitude of the refractive indices at each point in the material is directly related to the state of stress at that point. Information such as the maximum shear stress and its orientation is available by analyzing the birefringence with an instrument called a polariscope.
When a ray of light passes through a photoelastic material, its electromagnetic wave components are resolved along the two principal stress directions, and each component experiences a different refractive index due to the birefringence. The difference in the refractive indices leads to a relative phase retardation between the two components. Assuming a thin specimen made of isotropic materials, where two-dimensional photoelasticity is applicable, the magnitude of the relative retardation is given by the stress-optic law: [20]

$$\Delta = \frac{2\pi t}{\lambda}\,C\,(\sigma_1 - \sigma_2)$$

where Δ is the induced retardation, C is the stress-optic coefficient, t is the specimen thickness, λ is the vacuum wavelength, and σ1 and σ2 are the first and second principal stresses, respectively. The retardation changes the polarization of the transmitted light. The polariscope combines the different polarization states of the light waves before and after passing through the specimen. Due to optical interference of the two waves, a fringe pattern is revealed. The fringe order N,

$$N = \frac{\Delta}{2\pi}$$

depends on the relative retardation. By studying the fringe pattern one can determine the state of stress at various points in the material.
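As a numerical sketch of the stress-optic law, the fringe order can be written as N = C t (σ1 − σ2)/λ, i.e. Δ/(2π). The stress-optic coefficient below is a nominal order-of-magnitude value for a polycarbonate-like polymer (tens of Brewsters, 1 Brewster = 10⁻¹² Pa⁻¹) and is an assumption, not a value from this article.

```python
# Fringe order in a thin 2-D photoelastic specimen:
#   N = Delta / (2*pi) = C * t * (sigma1 - sigma2) / wavelength

def fringe_order(C, t, sigma1, sigma2, wavelength):
    """Fringe order N observed in a polariscope (dimensionless)."""
    return C * t * (sigma1 - sigma2) / wavelength

N = fringe_order(
    C=78e-12,           # stress-optic coefficient, Pa^-1 (assumed, ~78 Brewster)
    t=5e-3,             # specimen thickness, m
    sigma1=12e6,        # first principal stress, Pa
    sigma2=2e6,         # second principal stress, Pa
    wavelength=546e-9,  # green mercury line, m
)
print(f"fringe order N = {N:.2f}")
```

Counting roughly seven fringes at this stress difference is what lets the polariscope act as a full-field stress gauge.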
For materials that do not show photoelastic behavior, it is still possible to study the stress distribution. The first step is to build a model, using photoelastic materials, which has geometry similar to the real structure under investigation. The loading is then applied in the same way to ensure that the stress distribution in the model is similar to the stress in the real structure.
Isoclinics are the loci of the points in the specimen along which the principal stresses are in the same direction. [ citation needed ]
Isochromatics are the loci of the points along which the difference in the first and second principal stress remains the same. Thus they are the lines which join the points with equal maximum shear stress magnitude. [ 21 ]
Photoelasticity can describe both three-dimensional and two-dimensional states of stress. However, examining photoelasticity in three-dimensional systems is more involved than in two-dimensional or plane-stress systems, so the present section deals with photoelasticity in a plane-stress system. This condition is achieved when the thickness of the prototype is much smaller than its in-plane dimensions. [citation needed] Thus one is only concerned with stresses acting parallel to the plane of the model, as the other stress components are zero. The experimental setup varies from experiment to experiment. The two basic kinds of setup used are the plane polariscope and the circular polariscope. [citation needed]
A two-dimensional experiment allows the measurement of the retardation, which can be converted to the difference between the first and second principal stresses and their orientation. To further obtain the value of each individual stress component, a technique called stress separation is required. [22] Several theoretical and experimental methods are utilized to provide the additional information needed to solve for the individual stress components.
The setup consists of two linear polarizers and a light source. The light source can either emit monochromatic light or white light depending upon the experiment. First the light is passed through the first polarizer which converts the light into plane polarized light. The apparatus is set up in such a way that this plane polarized light then passes through the stressed specimen. This light then follows, at each point of the specimen, the direction of principal stress at that point. The light is then made to pass through the analyzer and we finally get the fringe pattern. [ citation needed ]
The fringe pattern in a plane polariscope setup consists of both the isochromatics and the isoclinics. The isoclinics change with the orientation of the polariscope while there is no change in the isochromatics. [ citation needed ]
In a circular polariscope setup, two quarter-wave plates are added to the experimental setup of the plane polariscope. The first quarter-wave plate is placed between the polarizer and the specimen, and the second quarter-wave plate between the specimen and the analyzer. The effect of adding the quarter-wave plate after the source-side polarizer is that circularly polarized light passes through the sample. The analyzer-side quarter-wave plate converts the circular polarization state back to linear before the light passes through the analyzer. [citation needed]
The basic advantage of a circular polariscope over a plane polariscope is that in a circular polariscope setup we only get the isochromatics and not the isoclinics. This eliminates the problem of differentiating between the isoclinics and the isochromatics. [ citation needed ] | https://en.wikipedia.org/wiki/Photoelasticity |
The photoelectric effect is the emission of electrons from a material caused by electromagnetic radiation such as ultraviolet light. Electrons emitted in this manner are called photoelectrons. The phenomenon is studied in condensed matter physics and in solid-state and quantum chemistry to draw inferences about the properties of atoms, molecules and solids. The effect has found use in electronic devices specialized for light detection and precisely timed electron emission.
The experimental results disagree with classical electromagnetism, which predicts that continuous light waves transfer energy to electrons, which would then be emitted once they accumulate enough energy. An alteration in the intensity of light would theoretically change the kinetic energy of the emitted electrons, with sufficiently dim light resulting in delayed emission. The experimental results instead show that electrons are dislodged only when the light exceeds a certain frequency, regardless of the light's intensity or duration of exposure. Because a low-frequency beam at high intensity does not build up the energy required to produce photoelectrons, as would be the case if light's energy accumulated over time from a continuous wave, Albert Einstein proposed that a beam of light is not a wave propagating through space but a stream of discrete energy packets. These were later named photons, a term coined by Gilbert N. Lewis in his letter "The Conservation of Photons", published in Nature on 18 December 1926. [1] [2]
Emission of conduction electrons from typical metals requires light quanta of a few electronvolts (eV), corresponding to short-wavelength visible or ultraviolet light. In extreme cases, emission can be induced with photons approaching zero energy, as in systems with negative electron affinity and in emission from excited states, or with photons of a few hundred keV for core electrons in elements with a high atomic number. [3] Study of the photoelectric effect led to important steps in understanding the quantum nature of light and electrons and influenced the formation of the concept of wave–particle duality. [4] Other phenomena where light affects the movement of electric charges include the photoconductive effect, the photovoltaic effect, and the photoelectrochemical effect.
The photons of a light beam have a characteristic energy, called photon energy , which is proportional to the frequency of the light. In the photoemission process, when an electron within some material absorbs the energy of a photon and acquires more energy than its binding energy , it is likely to be ejected. If the photon energy is too low, the electron is unable to escape the material. Since an increase in the intensity of low-frequency light will only increase the number of low-energy photons, this change in intensity will not create any single photon with enough energy to dislodge an electron. Moreover, the energy of the emitted electrons will not depend on the intensity of the incoming light of a given frequency, but only on the energy of the individual photons. [ 5 ]
While free electrons can absorb any energy when irradiated, as long as this is followed by an immediate re-emission, as in the Compton effect, in quantum systems all of the energy from one photon is absorbed, if the process is allowed by quantum mechanics, or none at all. Part of the acquired energy is used to liberate the electron from its atomic binding, and the rest contributes to the electron's kinetic energy as a free particle. [6] [7] [8] Because electrons in a material occupy many different quantum states with different binding energies, and because they can sustain energy losses on their way out of the material, the emitted electrons have a range of kinetic energies. The electrons from the highest occupied states have the highest kinetic energy. In metals, those electrons are emitted from the Fermi level.
When the photoelectron is emitted into a solid rather than into a vacuum, the term internal photoemission is often used, and emission into a vacuum is distinguished as external photoemission .
Even though photoemission can occur from any material, it is most readily observed from metals and other conductors. This is because the process produces a charge imbalance which, if not neutralized by current flow, results in an increasing potential barrier until emission completely ceases. The energy barrier to photoemission is usually increased by nonconductive oxide layers on metal surfaces, so most practical experiments and devices based on the photoelectric effect use clean metal surfaces in evacuated tubes. A vacuum also aids observation of the electrons, since it prevents gases from impeding their flow between the electrodes. [citation needed]
Sunlight is an inconsistent and variable source of ultraviolet light. Cloud cover, ozone concentration, altitude, and surface reflection all alter the amount of UV. Laboratory sources of UV are based on xenon arc lamps or, for more uniform but weaker light, fluorescent lamps . [ 9 ] More specialized sources include ultraviolet lasers [ 10 ] and synchrotron radiation . [ 11 ]
The classical setup to observe the photoelectric effect includes a light source, a set of filters to monochromatize the light, a vacuum tube transparent to ultraviolet light, an emitting electrode (E) exposed to the light, and a collector (C) whose voltage V C can be externally controlled. [ citation needed ]
A positive external voltage is used to direct the photoemitted electrons onto the collector. If the frequency and the intensity of the incident radiation are fixed, the photoelectric current I increases with an increase in the positive voltage, as more and more electrons are directed onto the electrode. When no additional photoelectrons can be collected, the photoelectric current attains a saturation value. This current can only increase with the increase of the intensity of light. [ citation needed ]
An increasing negative voltage prevents all but the highest-energy electrons from reaching the collector. When no current is observed through the tube, the negative voltage has reached a value high enough to slow down and stop the most energetic photoelectrons, of kinetic energy K_max. This value of the retarding voltage is called the stopping potential or cut-off potential V_o. [12] Since the work done by the retarding potential in stopping an electron of charge e is eV_o, the following must hold: eV_o = K_max.
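The relation eV_o = K_max converts a measured stopping potential directly into the maximum photoelectron kinetic energy. A minimal sketch, using a hypothetical stopping potential of 1.5 V:

```python
# e * V0 = K_max: the stopping potential (in volts) is numerically equal to
# the maximum photoelectron kinetic energy expressed in electronvolts.

E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact SI value)

def kmax_from_stopping_potential(v0_volts):
    """Return K_max as (joules, electronvolts) from a measured V0."""
    return E_CHARGE * v0_volts, v0_volts

k_j, k_ev = kmax_from_stopping_potential(1.5)  # hypothetical measurement
print(f"K_max = {k_j:.4e} J = {k_ev:.2f} eV")
```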
The current-voltage curve is sigmoidal, but its exact shape depends on the experimental geometry and the electrode material properties.
For a given metal surface, there exists a certain minimum frequency of incident radiation below which no photoelectrons are emitted. This frequency is called the threshold frequency . Increasing the frequency of the incident beam increases the maximum kinetic energy of the emitted photoelectrons, and the stopping voltage has to increase. The number of emitted electrons may also change because the probability that each photon results in an emitted electron is a function of photon energy [ citation needed ] .
An increase in the intensity of the same monochromatic light (so long as the intensity is not too high [ 13 ] ), which is proportional to the number of photons impinging on the surface in a given time, increases the rate at which electrons are ejected—the photoelectric current I— but the kinetic energy of the photoelectrons and the stopping voltage remain the same. For a given metal and frequency of incident radiation, the rate at which photoelectrons are ejected is directly proportional to the intensity of the incident light.
The time lag between the incidence of radiation and the emission of a photoelectron is very small, less than 10⁻⁹ seconds. The angular distribution of the photoelectrons is highly dependent on the polarization (the direction of the electric field) of the incident light, as well as on the emitting material's quantum properties such as atomic and molecular orbital symmetries and the electronic band structure of crystalline solids. In materials without macroscopic order, the distribution of electrons tends to peak in the direction of polarization of linearly polarized light. [14] The experimental technique that can measure these distributions to infer the material's properties is angle-resolved photoemission spectroscopy.
In 1905, Einstein proposed a theory of the photoelectric effect using the concept that light consists of tiny packets of energy known as photons or light quanta. Each packet carries energy $h\nu$ that is proportional to the frequency $\nu$ of the corresponding electromagnetic wave. The proportionality constant $h$ has become known as the Planck constant. In the range of kinetic energies of the electrons that are removed from their varying atomic bindings by the absorption of a photon of energy $h\nu$, the highest kinetic energy is

$$K_{\max} = h\nu - W.$$

Here, $W$ is the minimum energy required to remove an electron from the surface of the material. It is called the work function of the surface and is sometimes denoted $\Phi$ or $\varphi$. [15] If the work function is written as $W = h\nu_o$, the formula for the maximum kinetic energy of the ejected electrons becomes

$$K_{\max} = h(\nu - \nu_o).$$

Kinetic energy is positive, and $\nu > \nu_o$ is required for the photoelectric effect to occur. [16] The frequency $\nu_o$ is the threshold frequency for the given material. Above that frequency, the maximum kinetic energy of the photoelectrons, as well as the stopping voltage in the experiment,

$$V_o = \frac{h}{e}(\nu - \nu_o),$$

rise linearly with the frequency and have no dependence on the number of photons or the intensity of the impinging monochromatic light. Einstein's formula, however simple, explained all the phenomenology of the photoelectric effect and had far-reaching consequences in the development of quantum mechanics.
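Einstein's formula can be turned into a short calculation: given a wavelength and a work function, compute K_max and the stopping voltage, or report that the light is below threshold. The sodium work function used below (≈2.28 eV) is a commonly quoted nominal value, assumed here for illustration.

```python
# K_max = h*nu - W and V0 = (h/e)*(nu - nu0), with nu = c / wavelength.

H = 6.62607015e-34       # Planck constant, J*s (exact SI value)
E = 1.602176634e-19      # elementary charge, C (exact SI value)
C_LIGHT = 2.99792458e8   # speed of light, m/s (exact SI value)

def photoelectric(wavelength_m, work_function_ev):
    """Return (K_max in eV, stopping voltage in V), or None below threshold."""
    photon_ev = H * C_LIGHT / (wavelength_m * E)
    k_max_ev = photon_ev - work_function_ev
    if k_max_ev <= 0:
        return None            # frequency below nu0: no photoemission at all
    return k_max_ev, k_max_ev  # numerically equal: V0 = K_max / e

print(photoelectric(400e-9, 2.28))  # violet light on sodium (assumed W)
print(photoelectric(700e-9, 2.28))  # red light: below threshold
```

Making the 700 nm beam arbitrarily intense never produces photoelectrons here, which is the key departure from the classical wave picture.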
Electrons that are bound in atoms, molecules and solids each occupy distinct states of well-defined binding energies. When a light quantum delivers more than this amount of energy to an individual electron, the electron may be emitted into free space with excess (kinetic) energy equal to $h\nu$ less the electron's binding energy. The distribution of kinetic energies thus reflects the distribution of the binding energies of the electrons in the atomic, molecular or crystalline system: an electron emitted from a state at binding energy $E_B$ is found at kinetic energy $E_k = h\nu - E_B$. This distribution is one of the main characteristics of the quantum system, and can be used for further studies in quantum chemistry and quantum physics. [citation needed]
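A tiny sketch of how a photoemission spectrum maps back to binding energies via E_k = hν − E_B. The photon energy below is the He-I ultraviolet line often used in laboratory photoemission (≈21.22 eV), and the binding energies are made-up illustrative values, not data from this article.

```python
# E_k = h*nu - E_B: each occupied state at binding energy E_B shows up
# at a kinetic energy shifted down from the photon energy by E_B.

PHOTON_EV = 21.22  # He-I line, a common laboratory photon energy (assumed)

binding_ev = [0.0, 1.3, 2.7, 5.4]          # hypothetical occupied states, eV
kinetic_ev = [PHOTON_EV - eb for eb in binding_ev]
print(kinetic_ev)  # the least-bound state yields the fastest electrons
```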
The electronic properties of ordered, crystalline solids are determined by the distribution of the electronic states with respect to energy and momentum—the electronic band structure of the solid. Theoretical models of photoemission from solids show that this distribution is, for the most part, preserved in the photoelectric effect. The phenomenological three-step model [ 17 ] for ultraviolet and soft X-ray excitation decomposes the effect into these steps: [ 18 ] [ 19 ] [ 20 ]
There are cases where the three-step model fails to explain peculiarities of the photoelectron intensity distributions. The more elaborate one-step model [ 21 ] treats the effect as a coherent process of photoexcitation into the final state of a finite crystal for which the wave function is free-electron-like outside of the crystal, but has a decaying envelope inside. [ 20 ]
In 1839, Alexandre Edmond Becquerel discovered the related photovoltaic effect while studying the effect of light on electrolytic cells . [ 22 ] Though not equivalent to the photoelectric effect, his work on photovoltaics was instrumental in showing a strong relationship between light and electronic properties of materials. In 1873, Willoughby Smith discovered photoconductivity in selenium while testing the metal for its high resistance properties in conjunction with his work involving submarine telegraph cables. [ 23 ]
Johann Elster (1854–1920) and Hans Geitel (1855–1923), students in Heidelberg , investigated the effects produced by light on electrified bodies and developed the first practical photoelectric cells that could be used to measure the intensity of light. [ 24 ] [ 25 ] : 458 They arranged metals with respect to their power of discharging negative electricity: rubidium , potassium , alloy of potassium and sodium, sodium , lithium , magnesium , thallium and zinc ; for copper , platinum , lead , iron , cadmium , carbon , and mercury the effects with ordinary light were too small to be measurable. The order of the metals for this effect was the same as in Volta's series for contact-electricity, the most electropositive metals giving the largest photo-electric effect.
In 1887, Heinrich Hertz observed the photoelectric effect [ 26 ] and reported on the production and reception [ 27 ] of electromagnetic waves. [ 28 ] The receiver in his apparatus consisted of a coil with a spark gap , where a spark would be seen upon detection of electromagnetic waves. He placed the apparatus in a darkened box to see the spark better. However, he noticed that the maximum spark length was reduced when inside the box. A glass panel placed between the source of electromagnetic waves and the receiver absorbed ultraviolet radiation that assisted the electrons in jumping across the gap. When removed, the spark length would increase. He observed no decrease in spark length when he replaced the glass with quartz, as quartz does not absorb UV radiation. [ citation needed ]
The discoveries by Hertz led to a series of investigations by Wilhelm Hallwachs , [ 29 ] [ 30 ] Hoor, [ 31 ] Augusto Righi [ 32 ] and Aleksander Stoletov [ 33 ] [ 34 ] on the effect of light, and especially of ultraviolet light, on charged bodies. Hallwachs connected a zinc plate to an electroscope . He allowed ultraviolet light to fall on a freshly cleaned zinc plate and observed that the zinc plate became uncharged if initially negatively charged, positively charged if initially uncharged, and more positively charged if initially positively charged. From these observations he concluded that some negatively charged particles were emitted by the zinc plate when exposed to ultraviolet light.
With regard to the Hertz effect , the researchers from the start showed the complexity of the phenomenon of photoelectric fatigue—the progressive diminution of the effect observed upon fresh metallic surfaces. According to Hallwachs, ozone played an important part in the phenomenon, [ 35 ] and the emission was influenced by oxidation, humidity, and the degree of polishing of the surface. It was at the time unclear whether fatigue is absent in a vacuum. [ citation needed ]
In the period from 1888 until 1891, a detailed analysis of the photoeffect was performed by Aleksandr Stoletov, with results reported in six publications. [34] Stoletov invented a new experimental setup which was more suitable for a quantitative analysis of the photoeffect. He discovered a direct proportionality between the intensity of light and the induced photoelectric current (the first law of the photoeffect, or Stoletov's law). He measured the dependence of the intensity of the photoelectric current on the gas pressure and found an optimal gas pressure corresponding to a maximum photocurrent; this property was used in the creation of solar cells. [citation needed]
Many substances besides metals discharge negative electricity under the action of ultraviolet light. G. C. Schmidt [ 36 ] and O. Knoblauch [ 37 ] compiled a list of these substances.
In 1897, J. J. Thomson investigated ultraviolet light in Crookes tubes . [ 38 ] Thomson deduced that the ejected particles, which he called corpuscles, were of the same nature as cathode rays . These particles later became known as the electrons. Thomson enclosed a metal plate (a cathode) in a vacuum tube, and exposed it to high-frequency radiation. [ 39 ] It was thought that the oscillating electromagnetic fields caused the atoms' field to resonate and, after reaching a certain amplitude, caused subatomic corpuscles to be emitted, and current to be detected. The amount of this current varied with the intensity and color of the radiation. Larger radiation intensity or frequency would produce more current. [ citation needed ]
During the years 1886–1902, Wilhelm Hallwachs and Philipp Lenard investigated the phenomenon of photoelectric emission in detail. Lenard observed that a current flows through an evacuated glass tube enclosing two electrodes when ultraviolet radiation falls on one of them. As soon as ultraviolet radiation is stopped, the current also stops. This initiated the concept of photoelectric emission. The discovery of the ionization of gases by ultraviolet light was made by Philipp Lenard in 1900. As the effect was produced across several centimeters of air and yielded a greater number of positive ions than negative, it was natural to interpret the phenomenon, as J. J. Thomson did, as a Hertz effect upon the particles present in the gas. [ 28 ]
In 1902, Lenard observed that the energy of individual emitted electrons was independent of the applied light intensity. [ 6 ] [ 40 ] This appeared to be at odds with Maxwell's wave theory of light , which predicted that the electron energy would be proportional to the intensity of the radiation.
Lenard observed the variation in electron energy with light frequency using a powerful electric arc lamp which enabled him to investigate large changes in intensity. However, Lenard's results were qualitative rather than quantitative because of the difficulty in performing the experiments: the experiments needed to be done on freshly cut metal so that the pure metal was observed, but it oxidized in a matter of minutes even in the partial vacuums he used. The current emitted by the surface was determined by the light's intensity, or brightness: doubling the intensity of the light doubled the number of electrons emitted from the surface. [ citation needed ]
Initial investigations of the photoelectric effect in gases by Lenard [41] were followed up by J. J. Thomson [42] and then more decisively by Frederic Palmer Jr. [43] [44] Gas photoemission was studied and showed very different characteristics from those at first attributed to it by Lenard. [28]
In 1900, while studying black-body radiation, the German physicist Max Planck suggested in his paper "On the Law of Distribution of Energy in the Normal Spectrum" [45] that the energy carried by electromagnetic waves could only be released in packets of energy. In 1905, Albert Einstein published a paper advancing the hypothesis that light energy is carried in discrete quantized packets to explain experimental data from the photoelectric effect. Einstein theorized that the energy in each quantum of light was equal to the frequency of light multiplied by a constant, later called the Planck constant. A photon above a threshold frequency has the required energy to eject a single electron, creating the observed effect. This was a step in the development of quantum mechanics. In 1914, Robert A. Millikan's highly accurate measurements of the Planck constant from the photoelectric effect supported Einstein's model, even though a corpuscular theory of light was for Millikan, at the time, "quite unthinkable". [46] Einstein was awarded the 1921 Nobel Prize in Physics for "his discovery of the law of the photoelectric effect", [47] and Millikan was awarded the Nobel Prize in 1923 for "his work on the elementary charge of electricity and on the photoelectric effect". [48] In the quantum perturbation theory of atoms and solids acted upon by electromagnetic radiation, the photoelectric effect is still commonly analyzed in terms of waves; the two approaches are equivalent because photon or wave absorption can only happen between quantized energy levels whose energy difference equals the energy of the photon. [49] [18]
Albert Einstein's mathematical description of how the photoelectric effect was caused by absorption of quanta of light was in one of his Annus Mirabilis papers , named "On a Heuristic Viewpoint Concerning the Production and Transformation of Light". [ 50 ] The paper proposed a simple description of energy quanta , and showed how they explained the blackbody radiation spectrum. His explanation in terms of absorption of discrete quanta of light agreed with experimental results. It explained why the energy of photoelectrons was not dependent on incident light intensity . This was a theoretical leap, but the concept was strongly resisted at first because it contradicted the wave theory of light that followed naturally from James Clerk Maxwell 's equations of electromagnetism, and more generally, the assumption of infinite divisibility of energy in physical systems.
Einstein's work predicted that the energy of individual ejected electrons increases linearly with the frequency of the light. The precise relationship had not at that time been tested. By 1905 it was known that the energy of photoelectrons increases with increasing frequency of incident light and is independent of the intensity of the light. However, the manner of the increase was not experimentally determined until 1914 when Millikan showed that Einstein's prediction was correct. [ 7 ]
The photoelectric effect helped to propel the then-emerging concept of wave–particle duality in the nature of light. Light simultaneously possesses the characteristics of both waves and particles, each being manifested according to the circumstances. The effect was impossible to understand in terms of the classical wave description of light, [ 51 ] [ 52 ] [ 53 ] as the energy of the emitted electrons did not depend on the intensity of the incident radiation. Classical theory predicted that the electrons would 'gather up' energy over a period of time, and then be emitted. [ 52 ] [ 54 ]
Research on the photoelectric effect in recent years has focused on measuring the emission time of photoelectrons. Photoemission was long believed to be an instantaneous process. A seminal role in this field was played by experimental techniques for generating attosecond pulses of light for studies of electron dynamics, recognised by the 2023 Nobel Prize in Physics awarded to Pierre Agostini, Ferenc Krausz and Anne L’Huillier. [ 55 ] For example, in 2010 it was discovered that electron emission takes about 20 attoseconds and that photoemission involves complex multielectron correlations rather than being a single-electron process. [ 56 ] In more recent work on tungsten , measurements of photoelectron emission indicated that around 100 attoseconds are required to liberate an electron, [ 57 ] while another study found a value of 45 attoseconds. [ 58 ] A broad consensus is emerging that photoemission is not instantaneous but takes a finite time.
The role of the electric field in the photoelectric effect has also been studied empirically; it was found that electromagnetic radiation with a specific orientation of the electric field can excite electrons, leading to enhanced emission in the terahertz range. [ 59 ]
These are extremely light-sensitive vacuum tubes with a coated photocathode inside the envelope. The photocathode contains combinations of materials such as cesium, rubidium, and antimony specially selected to provide a low work function, so that when illuminated even by very low levels of light, the photocathode readily releases electrons. By means of a series of electrodes (dynodes) at ever-higher potentials, these electrons are accelerated and substantially increased in number through secondary emission to provide a readily detectable output current. Photomultipliers are still commonly used wherever low levels of light must be detected. [ 60 ]
Video camera tubes in the early days of television used the photoelectric effect. For example, Philo Farnsworth 's " Image dissector " used a screen charged by the photoelectric effect to transform an optical image into a scanned electronic signal. [ 61 ]
Because the kinetic energy of the emitted electrons is exactly the energy of the incident photon minus the energy of the electron's binding within an atom, molecule or solid, the binding energy can be determined by shining a monochromatic X-ray or UV light of a known energy and measuring the kinetic energies of the photoelectrons. [ 18 ] The distribution of electron energies is valuable for studying quantum properties of these systems. It can also be used to determine the elemental composition of the samples. For solids, the kinetic energy and emission angle distribution of the photoelectrons is measured for the complete determination of the electronic band structure in terms of the allowed binding energies and momenta of the electrons. Modern instruments for angle-resolved photoemission spectroscopy are capable of measuring these quantities with a precision better than 1 meV and 0.1°. [ citation needed ]
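The energy bookkeeping described above is a one-line subtraction. A minimal sketch, ignoring the spectrometer work function for simplicity (real analyses include it); the Al Kα photon energy is a standard laboratory value and the measured kinetic energy is invented for illustration:

```python
# Photoelectron spectroscopy: E_binding = E_photon - E_kinetic
# (spectrometer work function neglected in this sketch).

def binding_energy_ev(photon_energy_ev, kinetic_energy_ev):
    """Binding energy of the ejected electron, in eV."""
    return photon_energy_ev - kinetic_energy_ev

# Illustrative XPS case: Al K-alpha photons (1486.6 eV) and a
# hypothetical measured kinetic energy of 1202.0 eV give a binding
# energy near 284.6 eV, roughly the C 1s level often used as a
# calibration reference.
c1s = binding_energy_ev(1486.6, 1202.0)
```

Because each element's core levels sit at characteristic binding energies, a table of such values read off the spectrum identifies the elements present in the sample.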
Photoelectron spectroscopy measurements are usually performed in a high-vacuum environment, because the electrons would be scattered by gas molecules if they were present. However, some companies are now selling products that allow photoemission in air. The light source can be a laser, a discharge tube, or a synchrotron radiation source. [ 62 ]
The concentric hemispherical analyzer is a typical electron energy analyzer. It uses an electric field between two hemispheres to change (disperse) the trajectories of incident electrons depending on their kinetic energies.
Photons hitting a thin film of alkali metal or semiconductor material such as gallium arsenide in an image intensifier tube cause the ejection of photoelectrons due to the photoelectric effect. These are accelerated by an electrostatic field where they strike a phosphor coated screen, converting the electrons back into photons. Intensification of the signal is achieved either through acceleration of the electrons or by increasing the number of electrons through secondary emissions, such as with a micro-channel plate . Sometimes a combination of both methods is used. Additional kinetic energy is required to move an electron out of the conduction band and into the vacuum level. This is known as the electron affinity of the photocathode and is another barrier to photoemission other than the forbidden band, explained by the band gap model. Some materials such as gallium arsenide have an effective electron affinity that is below the level of the conduction band. In these materials, electrons that move to the conduction band all have sufficient energy to be emitted from the material, so the film that absorbs photons can be quite thick. These materials are known as negative electron affinity materials. [ citation needed ]
The photoelectric effect will cause spacecraft exposed to sunlight to develop a positive charge. This can be a major problem, as other parts of the spacecraft are in shadow which will result in the spacecraft developing a negative charge from nearby plasmas. The imbalance can discharge through delicate electrical components. The static charge created by the photoelectric effect is self-limiting, because a higher charged object does not give up its electrons as easily as a lower charged object does. [ 63 ] [ 64 ]
Light from the Sun hitting lunar dust causes it to become positively charged from the photoelectric effect. The charged dust then repels itself and lifts off the surface of the Moon by electrostatic levitation . [ 65 ] [ 66 ] This manifests itself almost like an "atmosphere of dust", visible as a thin haze and blurring of distant features, and visible as a dim glow after the sun has set. This was first photographed by the Surveyor program probes in the 1960s, [ 67 ] and most recently the Chang'e 3 rover observed dust deposition on lunar rocks as high as about 28 cm. [ 68 ] It is thought that the smallest particles are repelled kilometers from the surface and that the particles move in "fountains" as they charge and discharge. [ 69 ]
When photon energies are as high as the electron rest energy of 511 keV , yet another process, Compton scattering , may occur. Above twice this energy, at 1.022 MeV , pair production is also more likely. [ 71 ] Compton scattering and pair production thus compete with the photoelectric effect. [ citation needed ] Even if the photoelectric effect is the favoured reaction for a particular interaction of a single photon with a bound electron, the result is also subject to quantum statistics and is not guaranteed. The probability of the photoelectric effect occurring is measured by the cross section of the interaction, σ, which has been found to be a function of the atomic number of the target atom and the photon energy. In a crude approximation, for photon energies above the highest atomic binding energy, the cross section is given by: [ 72 ]

σ ∝ Z^n / E^3.5
Here Z is the atomic number and n is a number which varies between 4 and 5. The photoelectric effect rapidly decreases in significance in the gamma-ray region of the spectrum, with increasing photon energy, and it is more likely for elements with high atomic number. Consequently, high- Z materials make good gamma-ray shields, which is the principal reason why lead ( Z = 82) is preferred and most widely used. [ 73 ]
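The Z and energy dependence can be illustrated with a rough sketch. The prefactor is omitted, so only ratios are meaningful, and n = 4.5 is an assumed midpoint of the quoted 4-to-5 range:

```python
# Crude scaling of the photoelectric cross section, sigma ~ Z^n / E^3.5,
# valid only well above the highest atomic binding energy. Unnormalized:
# use it for ratios, not absolute values.

def relative_cross_section(z, energy_kev, n=4.5):
    return z ** n / energy_kev ** 3.5

# Why lead (Z=82) shields gamma rays far better than aluminum (Z=13):
pb = relative_cross_section(82, 500.0)
al = relative_cross_section(13, 500.0)
ratio = pb / al   # (82/13)^4.5, i.e. a several-thousand-fold advantage
```

The same function also shows the steep energy falloff: doubling the photon energy cuts the cross section by a factor of roughly 2^3.5 ≈ 11, which is why the photoelectric effect fades quickly in the gamma-ray region.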
| https://en.wikipedia.org/wiki/Photoelectric_effect
Flame photometry is a type of atomic emission spectroscopy . It is also known as flame emission spectroscopy . [ 1 ] [ 2 ] A photoelectric flame photometer is an instrument used in inorganic chemical analysis to determine the concentration of certain metal ions, among them sodium , potassium , lithium , and calcium . [ 3 ] Group 1 (alkali metals) and Group 2 (alkaline earth metals) are quite sensitive to flame photometry due to their low excitation energies.
In principle, it is a controlled flame test in which the intensity of the flame color is quantified by photoelectric circuitry. The intensity of the color depends on the energy absorbed by the atoms, which must be sufficient to vaporise and excite them. The sample is introduced into the flame at a constant rate. Filters select which colors the photometer detects and exclude the influence of other ions. Before use, the device requires calibration with a series of standard solutions of the ion to be tested.
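The calibration step can be sketched as a simple linear fit. The concentrations and intensity readings below are invented for illustration; real instruments also correct for blank readings and drift:

```python
# Flame-photometer calibration sketch: fit a straight line to
# standards, then invert it to read off an unknown sample.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

# Hypothetical standard solutions of Na+ (ppm) vs. emission intensity
# (arbitrary units); emission is approximately linear at low concentration.
conc = [0.0, 2.0, 4.0, 8.0]
intensity = [0.0, 10.0, 20.0, 40.0]
m, b = fit_line(conc, intensity)

# An unknown sample reading 25.0 maps back to its concentration:
unknown = (25.0 - b) / m   # 5.0 ppm on this calibration
```

Working only within the calibrated range matters: at higher concentrations self-absorption in the flame bends the curve and the linear inversion no longer holds.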
Flame photometry is crude but inexpensive compared to flame emission spectroscopy or ICP-AES , where the emitted light is analyzed with a monochromator. Its status is similar to that of the colorimeter (which uses filters) compared to the spectrophotometer (which uses a monochromator). The range of metals that can be analysed and the attainable limits of detection are likewise more restricted. | https://en.wikipedia.org/wiki/Photoelectric_flame_photometer
A " photoelectrochemical cell " is one of two distinct classes of device. The first produces electrical energy similarly to a dye-sensitized photovoltaic cell , which meets the standard definition of a photovoltaic cell . The second is a photoelectrolytic cell , that is, a device which uses light incident on a photosensitizer , semiconductor , or aqueous metal immersed in an electrolytic solution to directly cause a chemical reaction, for example to produce hydrogen via the electrolysis of water .
Both types of device are varieties of solar cell , in that a photoelectrochemical cell's function is to use the photoelectric effect (or, very similarly, the photovoltaic effect ) to convert electromagnetic radiation (typically sunlight) either directly into electrical power, or into something which can itself be easily used to produce electrical power (hydrogen, for example, can be burned to create electrical power , see photohydrogen ).
The standard photovoltaic effect , as operating in standard photovoltaic cells , involves the excitation of negative charge carriers (electrons) within a semiconductor medium, and it is these negative charge carriers (free electrons) which are ultimately extracted to produce power. The classification of photoelectrochemical cells which includes Grätzel cells meets this narrow definition, although the charge carriers are often excitonic .
The situation within a photoelectrolytic cell, on the other hand, is quite different. For example, in a water-splitting photoelectrochemical cell, the excitation, by light, of an electron in a semiconductor leaves a hole which "draws" an electron from a neighboring water molecule:

2 H2O + 4 h+ → O2 + 4 H+
This leaves positive charge carriers (protons, that is, H+ ions) in solution; each must then pair with another proton and combine with two electrons to form hydrogen gas, according to:

2 H+ + 2 e− → H2
A photosynthetic cell is another form of photoelectrolytic cell, with the output in that case being carbohydrates instead of molecular hydrogen.
A (water-splitting) photoelectrolytic cell electrolyzes water into hydrogen and oxygen gas by irradiating the anode with electromagnetic radiation , that is, with light. This has been referred to as artificial photosynthesis and has been suggested as a way of storing solar energy in hydrogen for use as fuel. [ 1 ]
Incoming sunlight excites free electrons near the surface of the silicon electrode. These electrons flow through wires to the stainless steel electrode, where four of them react with four water molecules to form two molecules of hydrogen and four OH groups. The OH groups flow through the liquid electrolyte to the surface of the silicon electrode. There they react with the four holes associated with the four photoelectrons, the result being two water molecules and an oxygen molecule. Illuminated silicon immediately begins to corrode on contact with the electrolytes. The corrosion consumes material and disrupts the properties of the surfaces and interfaces within the cell. [ 2 ]
Two types of photochemical systems operate via photocatalysis . One uses semiconductor surfaces as catalysts. In these devices the semiconductor surface absorbs solar energy and acts as an electrode for water splitting . The other methodology uses in-solution metal complexes as catalysts. [ 3 ] [ 4 ]
Photoelectrolytic cells have passed the 10 percent economic efficiency barrier. Corrosion of the semiconductors remains an issue, given their direct contact with water. [ 5 ] Research is now ongoing to reach a service life of 10,000 hours, a requirement established by the United States Department of Energy . [ 6 ]
The first photovoltaic cell ever designed was also the first photoelectrochemical cell. It was created in 1839, by Alexandre-Edmond Becquerel , at age 19, in his father's laboratory. [ 7 ]
The most commonly researched modern photoelectrochemical cell in recent decades has been the Grätzel cell , although much attention has recently shifted from this topic to perovskite solar cells , due to the relatively high efficiency of the latter and the similarity of the vapor-assisted deposition techniques commonly used in their creation.
Dye-sensitized solar cells or Grätzel cells use dye- adsorbed highly porous nanocrystalline titanium dioxide (nc- TiO 2 ) to produce electrical energy.
Water-splitting photoelectrochemical (PEC) cells use light energy to decompose water into hydrogen and oxygen within a two-electrode cell. In theory, three arrangements of photo-electrodes in the assembly of PECs exist: [ 8 ]
There are several requirements for photoelectrode materials in PEC H2 production: [ 9 ]
In addition to these requirements, materials must be low-cost and earth abundant for the widespread adoption of PEC water splitting to be feasible.
While the listed requirements apply generally, photoanodes and photocathodes have slightly different needs. A good photocathode will have early onset of the hydrogen evolution reaction (low overpotential), a large photocurrent at saturation, and rapid growth of photocurrent upon onset. Good photoanodes, on the other hand, will have early onset of the oxygen evolution reaction in addition to high current and rapid photocurrent growth. To maximize current, anode and cathode materials need to be matched; the best anode for one cathode material may not be the best for another.
In 1967, Akira Fujishima discovered the Honda-Fujishima effect (the photocatalytic properties of titanium dioxide).
TiO2 and other metal oxides are still the most prominent [ 10 ] catalysts for efficiency reasons. In these semiconducting titanates , including SrTiO3 and BaTiO3, [ 11 ] the conduction band has mainly titanium 3d character and the valence band mainly oxygen 2p character. The bands are separated by a wide band gap of at least 3 eV, so these materials absorb only UV radiation .
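Why a band gap of at least 3 eV restricts absorption to the UV follows from the absorption-edge wavelength λ = hc/E_g; a quick unit-checked sketch:

```python
# Absorption edge from the band gap: lambda = h*c / E_g.
H_EV_S = 4.135667696e-15   # Planck constant, eV*s
C = 2.99792458e8           # speed of light, m/s

def absorption_edge_nm(band_gap_ev):
    """Longest wavelength (nm) a semiconductor of this gap can absorb."""
    return H_EV_S * C / band_gap_ev * 1e9

# A 3.0 eV gap puts the edge near 413 nm, right at the violet border
# of the visible spectrum, so essentially only UV photons are absorbed.
edge = absorption_edge_nm(3.0)
```

The same relation explains the strain-engineering and nitride work discussed below: pushing the gap down toward ~2 eV moves the edge to ~620 nm, opening up much more of the solar spectrum.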
Change of the TiO 2 microstructure has also been investigated to further improve the performance. In 2002, Guerra (Nanoptek Corporation) discovered that high localized strain could be induced in semiconductor films formed on micro to nano-structured templates, and that this strain shifted the bandgap of the semiconductor, in the case of titanium dioxide, into the visible blue. [ 12 ] It was further found (Thulin and Guerra, 2008) that the strain also favorably shifted the band-edges to overlay the hydrogen evolution potential, and further still that the strain improved hole mobility, for lower charge recombination rate and high quantum efficiency. [ 13 ] Chandekar developed a low-cost scalable manufacturing process to produce both the nano-structured template and the strained titanium dioxide coating. [ 14 ] Other morphological investigations include TiO 2 nanowire arrays or porous nanocrystalline TiO 2 photoelectrochemical cells. [ 15 ]
GaN is another option, because metal nitrides usually have a narrow band gap that could encompass almost the entire solar spectrum. [ 16 ] GaN has a narrower band gap than TiO 2 but is still large enough to allow water splitting to occur at the surface. GaN nanowires exhibited better performance than GaN thin films, because they have a larger surface area and have a high single crystallinity which allows longer electron-hole pair lifetimes. [ 17 ] Meanwhile, other non-oxide semiconductors such as GaAs , MoS 2 , WSe 2 and MoSe 2 are used as n-type electrode, due to their stability in chemical and electrochemical steps in the photocorrosion reactions. [ 18 ]
In 2013 a cell with 2 nanometers of nickel on a silicon electrode, paired with a stainless steel electrode, immersed in an aqueous electrolyte of potassium borate and lithium borate operated for 80 hours without noticeable corrosion, versus 8 hours for titanium dioxide. In the process, about 150 ml of hydrogen gas was generated, representing the storage of about 2 kilojoules of energy. [ 2 ] [ 19 ]
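The reported figures can be cross-checked with a back-of-envelope calculation; the molar volume and heating value used are standard textbook values, with room conditions assumed:

```python
# Sanity check: does ~150 mL of H2 really store ~2 kJ?
MOLAR_VOLUME_L = 24.5   # L/mol of ideal gas near 25 C, 1 atm (assumed)
HHV_H2_KJ_MOL = 286.0   # higher heating value of hydrogen, kJ/mol

def h2_energy_kj(volume_ml):
    """Chemical energy stored in a given volume of hydrogen gas."""
    moles = (volume_ml / 1000.0) / MOLAR_VOLUME_L
    return moles * HHV_H2_KJ_MOL

energy = h2_energy_kj(150.0)   # ~1.8 kJ, consistent with "about 2 kJ"
```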
Structuring of absorbing materials has both positive and negative effects on cell performance. Structuring allows light absorption and carrier collection to occur in different places, which loosens the requirements for pure materials and helps with catalysis. This allows the use of non-precious and oxide catalysts that may be stable in more oxidizing conditions. However, these devices have lower open-circuit potentials, which may contribute to lower performance. [ 20 ]
Researchers have extensively investigated the use of hematite (α-Fe2O3) in PEC water-splitting devices due to its low cost, ability to be n-type doped, and band gap (2.2 eV). However, performance is plagued by poor conductivity and crystal anisotropy. [ 21 ] Some researchers have enhanced catalytic activity by forming a layer of co-catalysts on the surface. Co-catalysts include cobalt-phosphate [ 22 ] and iridium oxide, [ 23 ] which is known to be a highly active catalyst for the oxygen evolution reaction. [ 20 ]
Tungsten(VI) oxide (WO 3 ), which exhibits several different polymorphs at various temperatures, is of interest due to its high conductivity but has a relatively wide, indirect band gap (~2.7 eV) which means it cannot absorb most of the solar spectrum. Though many attempts have been made to increase absorption, they result in poor conductivity and thus WO 3 does not appear to be a viable material for PEC water splitting. [ 20 ]
With a narrower, direct band gap (2.4 eV) and proper band alignment with the water oxidation potential, the monoclinic form of BiVO4 has garnered interest from researchers. [ 20 ] Over time, it has been shown that V-rich [ 24 ] and compact films [ 25 ] are associated with higher photocurrent, i.e. higher performance. Bismuth vanadate has also been studied for solar H2 generation from seawater, [ 26 ] which is much more difficult due to the presence of contaminating ions and a harsher, more corrosive environment.
Photoelectrochemical oxidation (PECO) is the process by which light enables a semiconductor to promote a catalytic oxidation reaction. While a photoelectrochemical cell typically involves both a semiconductor (electrode) and a metal (counter-electrode), at sufficiently small scales, pure semiconductor particles can behave as microscopic photoelectrochemical cells. [ clarification needed ] PECO has applications in the detoxification of air and water, hydrogen production , and other applications.
The process by which a photon initiates a chemical reaction directly is known as photolysis ; if this process is aided by a catalyst, it is called photocatalysis . [ 27 ] If a photon has more energy than a material's characteristic band gap, it can free an electron upon absorption by the material. The remaining, positively charged hole and the free electron may recombine, generating heat, or they can take part in photoreactions with nearby species. If the photoreactions with these species result in regeneration of the electron-donating material—i.e., if the material acts as a catalyst for the reactions—then the reactions are deemed photocatalytic. PECO represents a type of photocatalysis whereby semiconductor-based electrochemistry catalyzes an oxidation reaction—for example, the oxidative degradation of an airborne contaminant in air purification systems.
The principal objective of photoelectrocatalysis is to provide low-energy activation pathways for the passage of electronic charge carriers through the electrode electrolyte interface and, in particular, for the photoelectrochemical generation of chemical products. [ 28 ] With regard to photoelectrochemical oxidation, we may consider, for example, the following system of reactions, which constitute TiO 2 -catalyzed oxidation. [ 29 ]
This system shows a number of pathways for the production of oxidative species that facilitate the oxidation of the species, RX, in addition to its direct oxidation by the excited TiO 2 itself. PECO concerns such a process where the electronic charge carriers are able to readily move through the reaction medium, thereby to some extent mitigating recombination reactions that would limit the oxidative process. The “photoelectrochemical cell” in this case could be as simple as a very small particle of the semiconductor catalyst. Here, on the “light” side a species is oxidized, while on the “dark” side a separate species is reduced. [ 30 ]
The classical macroscopic photoelectrochemical system consists of a semiconductor in electric contact with a counter-electrode. For N-type semiconductor particles of sufficiently small dimension, the particles polarize into anodic and cathodic regions, effectively forming microscopic photoelectrochemical cells. [ 28 ] The illuminated surface of a particle catalyzes a photooxidation reaction, while the “dark” side of the particle facilitates a concomitant reduction. [ 31 ]
Photoelectrochemical oxidation may be thought of as a special case of photochemical oxidation (PCO). Photochemical oxidation entails the generation of radical species that enable oxidation reactions, with or without the electrochemical interactions involved in semiconductor-catalyzed systems, which occur in photoelectrochemical oxidation. [ clarification needed ]
PECO may be useful in treating both air and water, as well as producing hydrogen as a source of renewable energy.
PECO has shown promise for water treatment of both stormwater and wastewater . Currently, water treatment methods like the use of biofiltration technologies are widely used. These technologies are effective at filtering out pollutants like suspended solids, nutrients, and heavy metals, but struggle to remove herbicides. Herbicides like diuron and atrazine are commonly used, and often end up in stormwater, posing potential health risks if they are not treated before reuse.
PECO is a useful approach to treating stormwater because of its strong oxidation capacity. Investigating different mechanisms for herbicide degradation in stormwater, including PECO, photocatalytic oxidation (PCO), and electrocatalytic oxidation (ECO), researchers determined that PECO was the best option, demonstrating complete mineralization of diuron in one hour. [ 32 ] Further research into this use of PECO is needed, as it was able to degrade only 35% of atrazine in that time; nonetheless, it remains a promising approach.
PECO has also shown promise as a means of air purification . For people with severe allergies, air purifiers are important to protect them from allergens within their own homes. [ 33 ] However, some allergens are too small to be removed by normal purification methods. Air purifiers using PECO filters are able to remove particles as small as 0.1 nm.
These filters work as photons excite a photocatalyst, creating hydroxyl free radicals , which are extremely reactive and oxidize organic material and the microorganisms that cause allergy symptoms, forming harmless products like carbon dioxide and water. Researchers testing this technology with patients suffering from allergies drew promising conclusions from their studies, observing significant reductions in total symptom scores (TSS) for both nasal (TNSS) and ocular (TOSS) allergies after just 4 weeks of using the PECO filter. [ 34 ] This research demonstrates strong potential for meaningful health improvements for people who suffer from severe allergies and asthma.
Possibly the most exciting potential use for PECO is producing hydrogen as a source of renewable energy . Photoelectrochemical oxidation reactions that take place within PEC cells are the key to water splitting for hydrogen production. While the main concern with this technology is stability, systems that use PECO technology to create hydrogen from vapor rather than liquid water have demonstrated potential for greater stability. Early researchers working on vapor-fed systems developed modules with 14% solar-to-hydrogen (STH) efficiency that remained stable for over 1000 hours. [ 35 ] More recently, further technological developments have been made, demonstrated by the direct air electrolysis (DAE) module developed by Jining Guo and his team, which produces 99% pure hydrogen from air and has demonstrated stability of 8 months thus far. [ 36 ]
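The solar-to-hydrogen (STH) figure quoted above follows the standard definition: chemical power stored in the produced hydrogen divided by incident solar power. A hedged sketch with invented operating numbers:

```python
# STH efficiency bookkeeping (standard definition, not a formula from
# the text). Energy is credited at the Gibbs free energy of water
# splitting, 237.2 kJ per mole of H2.

def sth_efficiency(h2_rate_mol_s, solar_w_m2, area_m2,
                   gibbs_kj_mol=237.2):
    """Fraction of incident solar power stored as hydrogen."""
    stored_w = h2_rate_mol_s * gibbs_kj_mol * 1000.0  # kJ/s -> W
    return stored_w / (solar_w_m2 * area_m2)

# Illustrative operating point: 5.9e-6 mol/s of H2 from a 0.01 m^2
# module under standard 1000 W/m^2 sunlight gives roughly 14% STH,
# in the ballpark of the vapor-fed modules mentioned above.
eta = sth_efficiency(5.9e-6, 1000.0, 0.01)
```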
Promising research and technological advancement using PECO for different applications like water and air treatment and hydrogen production suggests that it is a valuable tool that can be utilized in a variety of ways.
In 1938, Goodeve and Kitchener demonstrated the “photosensitization” of TiO 2 —e.g., as evidenced by the fading of paints incorporating it as a pigment. [ 37 ] In 1969, Kinney and Ivanuski suggested that a variety of metal oxides, including TiO 2 , may catalyze the oxidation of dissolved organic materials (phenol, benzoic acid, acetic acid, sodium stearate, and sucrose) under illumination by sunlamps. [ 38 ] Additional work by Carey et al. suggested that TiO 2 may be useful for the photodechlorination of PCBs. [ 39 ] | https://en.wikipedia.org/wiki/Photoelectrochemical_cell |
A " photoelectrochemical cell " is one of two distinct classes of device. The first produces electrical energy similarly to a dye-sensitized photovoltaic cell , which meets the standard definition of a photovoltaic cell . The second is a photoelectrolytic cell , that is, a device which uses light incident on a photosensitizer , semiconductor , or aqueous metal immersed in an electrolytic solution to directly cause a chemical reaction, for example to produce hydrogen via the electrolysis of water .
Both types of device are varieties of solar cell , in that a photoelectrochemical cell's function is to use the photoelectric effect (or, very similarly, the photovoltaic effect ) to convert electromagnetic radiation (typically sunlight) either directly into electrical power, or into something which can itself be easily used to produce electrical power (hydrogen, for example, can be burned to create electrical power , see photohydrogen ).
The standard photovoltaic effect , as operating in standard photovoltaic cells , involves the excitation of negative charge carriers (electrons) within a semiconductor medium, and it is negative charge carriers (free electrons) which are ultimately extracted to produce power. The classification of photoelectrochemical cells which includes Grätzel cells meets this narrow definition, albeit the charge carriers are often excitonic .
The situation within a photoelectrolytic cell, on the other hand, is quite different. For example, in a water-splitting photoelectrochemical cell, the excitation, by light, of an electron in a semiconductor leaves a hole which "draws" an electron from a neighboring water molecule:
This leaves positive charge carriers (protons, that is, H+ ions) in solution, which must then bond with one other proton and combine with two electrons in order to form hydrogen gas, according to:
A photosynthetic cell is another form of photoelectrolytic cell, with the output in that case being carbohydrates instead of molecular hydrogen.
A (water-splitting) photoelectrolytic cell electrolizes water into hydrogen and oxygen gas by irradiating the anode with electromagnetic radiation , that is, with light. This has been referred to as artificial photosynthesis and has been suggested as a way of storing solar energy in hydrogen for use as fuel. [ 1 ]
Incoming sunlight excites free electrons near the surface of the silicon electrode. These electrons flow through wires to the stainless steel electrode, where four of them react with four water molecules to form two molecules of hydrogen and 4 OH groups. The OH groups flow through the liquid electrolyte to the surface of the silicon electrode. There they react with the four holes associated with the four photoelectrons, the result being two water molecules and an oxygen molecule. Illuminated silicon immediately begins to corrode under contact with the electrolytes. The corrosion consumes material and disrupts the properties of the surfaces and interfaces within the cell. [ 2 ]
Two types of photochemical systems operate via photocatalysis . One uses semiconductor surfaces as catalysts. In these devices the semiconductor surface absorbs solar energy and acts as an electrode for water splitting . The other methodology uses in-solution metal complexes as catalysts. [ 3 ] [ 4 ]
Photoelectrolytic cells have passed the 10 percent economic efficiency barrier. Corrosion of the semiconductors remains an issue, given their direct contact with water. [ 5 ] Research is now ongoing to reach a service life of 10000 hours, a requirement established by the United States Department of Energy . [ 6 ]
The first photovoltaic cell ever designed was also the first photoelectrochemical cell. It was created in 1839, by Alexandre-Edmond Becquerel , at age 19, in his father's laboratory. [ 7 ]
The mostly commonly researched modern photoelectrochemical cell in recent decades has been the Grätzel cell , although much attention has recently shifted away from this topic to perovskite solar cells , due to relatively high efficiency of the latter and the similarity in vapor assisted deposition techniques commonly used in their creation.
Dye-sensitized solar cells or Grätzel cells use dye- adsorbed highly porous nanocrystalline titanium dioxide (nc- TiO 2 ) to produce electrical energy.
Water-splitting photoelectrochemical (PEC) cells use light energy to decompose water into hydrogen and oxygen within a two-electrode cell. In theory, three arrangements of photo-electrodes in the assembly of PECs exist: [ 8 ]
There are several requirements for photoelectrode materials in PEC H 2 {\displaystyle {\ce {H2}}} production: [ 9 ]
In addition to these requirements, materials must be low-cost and earth abundant for the widespread adoption of PEC water splitting to be feasible.
While the listed requirements can be applied generally, photoanodes and photocathodes have slightly different needs. A good photocathode will have early onset of the oxygen evolution reaction (low overpotential), a large photocurrent at saturation, and rapid growth of photocurrent upon onset. Good photoanodes, on the other hand, will have early onset of the hydrogen evolution reaction in addition to high current and rapid photocurrent growth. To maximize current, anode and cathode materials need to be matched together; the best anode for one cathode material may not be the best for another.
In 1967, Akira Fujishima discovered the Honda-Fujishima effect , (the photocatalytic properties of titanium dioxide).
TiO 2 and other metal oxides remain the most prominent catalysts [ 10 ] for efficiency reasons. In these semiconducting titanates , including SrTiO 3 and BaTiO 3 , [ 11 ] the conduction band has mainly titanium 3d character and the valence band mainly oxygen 2p character. The bands are separated by a wide band gap of at least 3 eV, so these materials absorb only UV radiation .
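The band-gap figures above translate directly into an absorption cutoff wavelength via λ = hc/E g. A minimal sketch in Python (the constants are standard; the 3.2 eV value used as a second example, typical of anatase TiO 2 , is an illustrative assumption):

```python
# Band-gap energy to absorption cutoff wavelength: lambda = h*c / E_g.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # electronvolt, J

def cutoff_nm(band_gap_ev):
    """Longest wavelength (nm) a semiconductor with this gap can absorb."""
    return H * C / (band_gap_ev * EV) * 1e9

print(round(cutoff_nm(3.0)))  # ~413 nm: only UV/violet light is absorbed
print(round(cutoff_nm(3.2)))  # ~387 nm
```

A ~3 eV gap thus places the absorption edge at the UV end of the visible spectrum, consistent with the statement that these titanates absorb only UV radiation.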
Change of the TiO 2 microstructure has also been investigated to further improve the performance. In 2002, Guerra (Nanoptek Corporation) discovered that high localized strain could be induced in semiconductor films formed on micro to nano-structured templates, and that this strain shifted the bandgap of the semiconductor, in the case of titanium dioxide, into the visible blue. [ 12 ] It was further found (Thulin and Guerra, 2008) that the strain also favorably shifted the band-edges to overlay the hydrogen evolution potential, and further still that the strain improved hole mobility, for lower charge recombination rate and high quantum efficiency. [ 13 ] Chandekar developed a low-cost scalable manufacturing process to produce both the nano-structured template and the strained titanium dioxide coating. [ 14 ] Other morphological investigations include TiO 2 nanowire arrays or porous nanocrystalline TiO 2 photoelectrochemical cells. [ 15 ]
GaN is another option, because metal nitrides usually have a narrow band gap that could encompass almost the entire solar spectrum. [ 16 ] GaN has a narrower band gap than TiO 2 but one still large enough for water splitting to occur at the surface. GaN nanowires exhibit better performance than GaN thin films because they have a larger surface area and high single crystallinity, which allows longer electron–hole pair lifetimes. [ 17 ] Meanwhile, other non-oxide semiconductors such as GaAs , MoS 2 , WSe 2 and MoSe 2 are used as n-type electrodes, owing to their stability against the chemical and electrochemical steps of photocorrosion reactions. [ 18 ]
In 2013 a cell with 2 nanometers of nickel on a silicon electrode, paired with a stainless steel electrode, immersed in an aqueous electrolyte of potassium borate and lithium borate operated for 80 hours without noticeable corrosion, versus 8 hours for titanium dioxide. In the process, about 150 ml of hydrogen gas was generated, representing the storage of about 2 kilojoules of energy. [ 2 ] [ 19 ]
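The quoted figures can be sanity-checked with the ideal-gas law and the higher heating value of hydrogen (~286 kJ/mol, a standard value assumed here); a rough back-of-envelope sketch:

```python
# Rough check: energy stored in ~150 mL of H2 (ideal gas, 25 C, 1 atm),
# valued at hydrogen's higher heating value. Illustrative only.
R = 8.314        # J/(mol*K)
P = 101325.0     # Pa
T = 298.15       # K
V = 150e-6       # m^3 (150 mL)
HHV_H2 = 286e3   # J/mol (assumed standard higher heating value)

moles = P * V / (R * T)           # ideal-gas law: n = PV/RT
energy_kj = moles * HHV_H2 / 1e3
print(f"{moles*1e3:.1f} mmol -> {energy_kj:.1f} kJ")  # ~6.1 mmol -> ~1.8 kJ
```

The result, just under 2 kJ, is consistent with the "about 2 kilojoules" cited above.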
Structuring of absorbing materials has both positive and negative effects on cell performance. Structuring allows light absorption and carrier collection to occur in different places, which loosens the requirements for material purity and helps with catalysis. This allows the use of non-precious and oxide catalysts that may be stable in more oxidizing conditions. However, these devices have lower open-circuit potentials, which may contribute to lower performance. [ 20 ]
Researchers have extensively investigated the use of hematite (α-Fe 2 O 3 ) in PEC water-splitting devices because of its low cost, its ability to be n-type doped, and its band gap (2.2 eV). However, performance is plagued by poor conductivity and crystal anisotropy. [ 21 ] Some researchers have enhanced catalytic activity by forming a layer of co-catalysts on the surface. Co-catalysts include cobalt-phosphate [ 22 ] and iridium oxide, [ 23 ] which is known to be a highly active catalyst for the oxygen evolution reaction. [ 20 ]
Tungsten(VI) oxide (WO 3 ), which exhibits several different polymorphs at various temperatures, is of interest due to its high conductivity but has a relatively wide, indirect band gap (~2.7 eV) which means it cannot absorb most of the solar spectrum. Though many attempts have been made to increase absorption, they result in poor conductivity and thus WO 3 does not appear to be a viable material for PEC water splitting. [ 20 ]
With a narrower, direct band gap (2.4 eV) and proper band alignment with the water oxidation potential, the monoclinic form of BiVO 4 has garnered interest from researchers. [ 20 ] Over time, it has been shown that V-rich [ 24 ] and compact films [ 25 ] are associated with higher photocurrent, i.e. higher performance. Bismuth vanadate has also been studied for solar H 2 generation from seawater, [ 26 ] which is much more difficult because of the presence of contaminating ions and a harsher, more corrosive environment.
Photoelectrochemical oxidation (PECO) is the process by which light enables a semiconductor to promote a catalytic oxidation reaction. While a photoelectrochemical cell typically involves both a semiconductor (electrode) and a metal (counter-electrode), at sufficiently small scales a pure semiconductor particle can itself behave as a microscopic photoelectrochemical cell, with illuminated and dark regions playing the roles of the two electrodes. PECO has applications in the detoxification of air and water, in hydrogen production , and in other areas.
The process by which a photon initiates a chemical reaction directly is known as photolysis ; if this process is aided by a catalyst, it is called photocatalysis . [ 27 ] If a photon has more energy than a material's characteristic band gap, it can free an electron upon absorption by the material. The remaining, positively charged hole and the free electron may recombine, generating heat, or they can take part in photoreactions with nearby species. If the photoreactions with these species result in regeneration of the electron-donating material—i.e., if the material acts as a catalyst for the reactions—then the reactions are deemed photocatalytic. PECO represents a type of photocatalysis whereby semiconductor-based electrochemistry catalyzes an oxidation reaction—for example, the oxidative degradation of an airborne contaminant in air purification systems.
The principal objective of photoelectrocatalysis is to provide low-energy activation pathways for the passage of electronic charge carriers through the electrode electrolyte interface and, in particular, for the photoelectrochemical generation of chemical products. [ 28 ] With regard to photoelectrochemical oxidation, we may consider, for example, the following system of reactions, which constitute TiO 2 -catalyzed oxidation. [ 29 ]
This system shows a number of pathways for the production of oxidative species that facilitate the oxidation of the species RX, in addition to its direct oxidation by the excited TiO 2 itself. PECO concerns a process in which the electronic charge carriers are able to move readily through the reaction medium, thereby mitigating, to some extent, the recombination reactions that would limit the oxidative process. The “photoelectrochemical cell” in this case could be as simple as a very small particle of the semiconductor catalyst: on the “light” side a species is oxidized, while on the “dark” side a separate species is reduced. [ 30 ]
The classical macroscopic photoelectrochemical system consists of a semiconductor in electric contact with a counter-electrode. For N-type semiconductor particles of sufficiently small dimension, the particles polarize into anodic and cathodic regions, effectively forming microscopic photoelectrochemical cells. [ 28 ] The illuminated surface of a particle catalyzes a photooxidation reaction, while the “dark” side of the particle facilitates a concomitant reduction. [ 31 ]
Photoelectrochemical oxidation may be thought of as a special case of photochemical oxidation (PCO). Photochemical oxidation entails the generation of radical species that enable oxidation reactions, with or without the semiconductor-mediated electrochemical interactions that characterize photoelectrochemical oxidation.
PECO may be useful in treating both air and water, as well as producing hydrogen as a source of renewable energy.
PECO has shown promise for water treatment of both stormwater and wastewater . Currently, water treatment methods like the use of biofiltration technologies are widely used. These technologies are effective at filtering out pollutants like suspended solids, nutrients, and heavy metals, but struggle to remove herbicides. Herbicides like diuron and atrazine are commonly used, and often end up in stormwater, posing potential health risks if they are not treated before reuse.
PECO is a useful solution for treating stormwater because of its strong oxidation capacity. Investigating different mechanisms for herbicide degradation in stormwater, such as PECO, photocatalytic oxidation (PCO), and electro-catalytic oxidation (ECO), researchers determined that PECO was the best option, demonstrating complete mineralization of diuron in one hour. [ 32 ] Further research into this use of PECO is needed, as it degraded only 35% of atrazine in the same time; however, it remains a promising approach.
PECO has also shown promise as a means of air purification . For people with severe allergies, air purifiers are important to protect them from allergens within their own homes. [ 33 ] However, some allergens are too small to be removed by normal purification methods. Air purifiers using PECO filters are able to remove particles as small as 0.1 nm.
These filters work by using photons to excite a photocatalyst, creating hydroxyl free radicals , which are extremely reactive and oxidize organic material and microorganisms that cause allergy symptoms, forming harmless products such as carbon dioxide and water. Researchers testing this technology with patients suffering from allergies drew promising conclusions from their studies, observing significant reductions in total symptom scores (TSS) for both nasal (TNSS) and ocular (TOSS) allergies after just 4 weeks of using the PECO filter. [ 34 ] This research demonstrates strong potential for health improvements for people who suffer from severe allergies and asthma.
Possibly the most exciting potential use for PECO is producing hydrogen as a source of renewable energy . The photoelectrochemical oxidation reactions that take place within PEC cells are the key to water splitting for hydrogen production. While the main concern with this technology is stability, systems that use PECO to generate hydrogen from water vapor rather than liquid water have demonstrated potential for greater stability. Early researchers working on vapor-fed systems developed modules with 14% solar-to-hydrogen (STH) efficiency that remained stable for over 1000 hours. [ 35 ] More recently, further advances have been demonstrated by the direct air electrolysis (DAE) module developed by Jining Guo and his team, which produces 99% pure hydrogen from air and has demonstrated stability over 8 months thus far. [ 36 ]
Promising research and technological advances in applying PECO to water and air treatment and to hydrogen production suggest that it is a valuable tool that can be used in a variety of ways.
In 1938, Goodeve and Kitchener demonstrated the “photosensitization” of TiO 2 —e.g., as evidenced by the fading of paints incorporating it as a pigment. [ 37 ] In 1969, Kinney and Ivanuski suggested that a variety of metal oxides, including TiO 2 , may catalyze the oxidation of dissolved organic materials (phenol, benzoic acid, acetic acid, sodium stearate, and sucrose) under illumination by sunlamps. [ 38 ] Additional work by Carey et al. suggested that TiO 2 may be useful for the photodechlorination of PCBs. [ 39 ]
Photoelectrochemical processes are processes in photoelectrochemistry ; they usually involve transforming light into other forms of energy. [ 1 ] These processes apply to photochemistry, optically pumped lasers , sensitized solar cells , luminescence , and photochromism .
Electron excitation is the movement of an electron to a higher energy state . This can be done either by photoexcitation (PE), where the original electron absorbs a photon and gains all of its energy, or by electrical excitation (EE), where the original electron absorbs the energy of another, energetic electron. Within a semiconductor crystal lattice, thermal excitation is a process in which lattice vibrations provide enough energy to move electrons to a higher energy band . When an excited electron falls back to a lower energy state, this is called electron relaxation; it can occur by radiation of a photon or by transfer of the energy to a third, spectator particle. [ 2 ]
In physics there is a specific technical definition of an energy level , often associated with an atom being excited to an excited state . The excited state is generally defined in relation to the ground state : the excited state lies at a higher energy level than the ground state.
Photoexcitation is the mechanism of electron excitation by photon absorption, when the energy of the photon is too low to cause photoionization. The absorption of the photon takes place in accordance with Planck's quantum theory.
Photoexcitation plays a role in photoisomerization and is exploited in dye-sensitized solar cells , photochemistry , luminescence , optically pumped lasers, and some photochromic applications.
In chemistry , photoisomerization is molecular behavior in which structural change between isomers is caused by photoexcitation. Both reversible and irreversible photoisomerization reactions exist. However, the word "photoisomerization" usually indicates a reversible process. Photoisomerizable molecules are already put to practical use, for instance, in pigments for rewritable CDs , DVDs , and 3D optical data storage solutions. In addition, recent interest in photoisomerizable molecules has been aimed at molecular devices, such as molecular switches, [ 3 ] molecular motors, [ 4 ] and molecular electronics.
Photoisomerization behavior can be roughly categorized into several classes. Two major classes are trans–cis (or 'E'–'Z') conversion and open–closed ring transition. Examples of the former include stilbene and azobenzene . Such compounds have a double bond , and rotation or inversion around the double bond affords isomerization between the two states. Examples of the latter include fulgide and diarylethene . Such compounds undergo bond cleavage and bond creation upon irradiation with particular wavelengths of light. Still another class is the di-π-methane rearrangement .
Photoionization is the physical process in which an incident photon ejects one or more electrons from an atom , ion or molecule . This is essentially the same process that occurs with the photoelectric effect with metals. In the case of a gas or single atoms, the term photoionization is more common. [ 5 ]
The ejected electrons, known as photoelectrons , carry information about their pre-ionized states. For example, a single electron can have a kinetic energy equal to the energy of the incident photon minus the electron binding energy of the state it left. Photons with energies less than the electron binding energy may be absorbed or scattered but will not photoionize the atom or ion. [ 5 ]
For example, to ionize hydrogen , photons need an energy greater than 13.6 electronvolts (the Rydberg energy ), which corresponds to a wavelength of 91.2 nm . [ 6 ] For photons with greater energy than this, the energy of the emitted photoelectron is given by:

E = hν − 13.6 eV
where h is the Planck constant and ν is the frequency of the photon.
This formula defines the photoelectric effect .
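A small sketch of this threshold relation, computing the ionization wavelength of hydrogen and the photoelectron energy for a shorter-wavelength photon (the 80 nm example is an illustrative assumption):

```python
# Photoelectron kinetic energy E_k = h*nu - E_binding, for hydrogen (13.6 eV).
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # electronvolt, J
E_ION = 13.6          # eV, ionization energy of hydrogen

def threshold_nm(e_ion_ev):
    """Longest wavelength (nm) that can still photoionize."""
    return H * C / (e_ion_ev * EV) * 1e9

def photoelectron_ev(wavelength_nm):
    """Kinetic energy (eV) of the ejected electron; negative -> no ionization."""
    return H * C / (wavelength_nm * 1e-9 * EV) - E_ION

print(f"{threshold_nm(E_ION):.1f} nm")     # ~91.2 nm threshold
print(f"{photoelectron_ev(80.0):.2f} eV")  # an 80 nm photon leaves ~1.9 eV
```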
Not every photon which encounters an atom or ion will photoionize it. The probability of photoionization is related to the photoionization cross-section , which depends on the energy of the photon and the target being considered. For photon energies below the ionization threshold, the photoionization cross-section is near zero. But with the development of pulsed lasers it has become possible to create extremely intense, coherent light in which multi-photon ionization may occur. At even higher intensities (around 10^15–10^16 W/cm^2 of infrared or visible light), non-perturbative phenomena such as barrier suppression ionization [ 7 ] and rescattering ionization [ 8 ] are observed.
Several photons of energy below the ionization threshold may actually combine their energies to ionize an atom. This probability decreases rapidly with the number of photons required, but the development of very intense, pulsed lasers still makes it possible. In the perturbative regime (below about 10^14 W/cm^2 at optical frequencies), the probability of absorbing N photons depends on the laser-light intensity I as I^N. [ 9 ]
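The I^N scaling can be illustrated with a toy calculation of relative rates (prefactors and cross-sections are omitted, so only ratios of rates are meaningful):

```python
# Perturbative N-photon ionization rate scales as I**N: doubling the
# intensity multiplies the rate by 2**N. Relative rates only.
def relative_rate(intensity, n_photons):
    return intensity ** n_photons

for n in (1, 3, 6):
    gain = relative_rate(2.0, n) / relative_rate(1.0, n)
    print(f"N={n}: doubling I boosts the rate {gain:.0f}x")  # 2x, 8x, 64x
```

This steep dependence is why multi-photon ionization only becomes observable with very intense pulsed lasers.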
Above-threshold ionization (ATI) [ 10 ] is an extension of multi-photon ionization in which even more photons are absorbed than would be necessary to ionize the atom. The excess energy gives the released electron higher kinetic energy than in just-above-threshold ionization. More precisely, the photoelectron spectrum has multiple peaks separated by the photon energy, indicating that the emitted electron has more kinetic energy than in the normal (lowest possible number of photons) ionization case; the released electrons carry approximately an integer number of photon energies of extra kinetic energy. At intensities between 10^14 W/cm^2 and 10^18 W/cm^2, MPI, ATI, and barrier suppression ionization can occur simultaneously, each contributing to the overall ionization of the atoms involved. [ 11 ]
In semiconductor physics the Photo-Dember effect (named after its discoverer H. Dember) consists in the formation of a charge dipole in the vicinity of a semiconductor surface after ultra-fast photo-generation of charge carriers. The dipole forms owing to the difference in mobilities (or diffusion constants) of holes and electrons, which, combined with the breaking of symmetry at the surface, leads to an effective charge separation in the direction perpendicular to the surface. [ 12 ]
The Grotthuss–Draper law (also called the principle of photochemical activation ) states that only that light which is absorbed by a system can bring about a photochemical change. Materials such as dyes and phosphors must be able to absorb "light" at optical frequencies. This law provides a basis for fluorescence and phosphorescence . The law was first proposed in 1817 by Theodor Grotthuss and in 1842, independently, by John William Draper . [ 5 ]
This is considered to be one of the two basic laws of photochemistry . The second law is the Stark–Einstein law , which says that primary chemical or physical reactions occur with each photon absorbed. [ 5 ]
The Stark–Einstein law is named after German-born physicists Johannes Stark and Albert Einstein , who independently formulated the law between 1908 and 1913. It is also known as the photochemical equivalence law or photoequivalence law . In essence it says that every photon that is absorbed will cause a (primary) chemical or physical reaction. [ 13 ]
The photon is a quantum of radiation, a single unit of EM radiation whose energy is equal to the Planck constant ( h ) times the frequency of light. This quantity is symbolized by γ , hν , or ħω .
The photochemical equivalence law is also restated as follows: for every mole of a substance that reacts, an equivalent mole of quanta of light is absorbed. The formula is: [ 13 ]

E = N A hν
where N A is the Avogadro constant .
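As a worked example of the equivalence law, the energy carried by one mole of photons (an "einstein") at an assumed illustrative wavelength of 500 nm:

```python
# Stark-Einstein equivalence: one mole of photons carries E = N_A * h * nu.
NA = 6.02214076e23    # Avogadro constant, 1/mol
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s

def einstein_kj_per_mol(wavelength_nm):
    """Energy of one mole of photons at the given wavelength, in kJ/mol."""
    return NA * H * C / (wavelength_nm * 1e-9) / 1e3

print(f"{einstein_kj_per_mol(500):.0f} kJ/mol")  # ~239 kJ/mol at 500 nm
```

This is the molar energy available to drive the primary photochemical process at that wavelength.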
The photochemical equivalence law applies to the part of a light-induced reaction that is referred to as the primary process (i.e. absorption or fluorescence ). [ 13 ]
In most photochemical reactions the primary process is usually followed by so-called secondary photochemical processes that are normal interactions between reactants not requiring absorption of light. As a result, such reactions do not appear to obey the one quantum–one molecule reactant relationship. [ 13 ]
The law is further restricted to conventional photochemical processes using light sources with moderate intensities; high-intensity light sources such as those used in flash photolysis and in laser experiments are known to cause so-called biphotonic processes; i.e., the absorption by a molecule of a substance of two photons of light. [ 13 ]
In physics , absorption of electromagnetic radiation is the way by which the energy of a photon is taken up by matter, typically the electrons of an atom. The electromagnetic energy is thereby transformed into other forms of energy, for example heat. The absorption of light during wave propagation is often called attenuation . Usually the absorption of waves does not depend on their intensity (linear absorption), although in certain conditions (usually in optics ) the medium changes its transparency depending on the intensity of the waves passing through it, and saturable absorption (or nonlinear absorption) occurs.
Photosensitization is a process of transferring the energy of absorbed light. After absorption, the energy is transferred to the (chosen) reactants . This is part of the work of photochemistry in general. In particular this process is commonly employed where reactions require light sources of certain wavelengths that are not readily available. [ 14 ]
For example, mercury absorbs radiation at 1849 and 2537 angstroms , and the source is often a high-intensity mercury lamp ; mercury is a commonly used sensitizer. When mercury vapor is mixed with ethylene and the mixture is irradiated with a mercury lamp, the ethylene photodecomposes to acetylene. Absorption of light yields excited-state mercury atoms, which transfer their energy to the ethylene molecules and are in turn deactivated to their initial energy state. [ 14 ]
Cadmium ; some of the noble gases , for example xenon ; zinc ; benzophenone ; and a large number of organic dyes, are also used as sensitizers. [ 14 ]
Photosensitisers are a key component of photodynamic therapy used to treat cancers.
A sensitizer in chemiluminescence is a chemical compound, capable of light emission after it has received energy from a molecule, which became excited previously in the chemical reaction. A good example is this:
When an alkaline solution of sodium hypochlorite and a concentrated solution of hydrogen peroxide are mixed, a reaction occurs:

ClO − + H 2 O 2 → Cl − + H 2 O + O 2 *

O 2 * is excited oxygen – meaning, one or more electrons in the O 2 molecule have been promoted to higher-energy molecular orbitals . Hence, the oxygen produced by this chemical reaction has 'absorbed' the energy released by the reaction and become excited. This energy state is unstable, so it will return to the ground state by lowering its energy. It can do that in more than one way:
The intensity, duration and color of the emitted light depend on quantum and kinetic factors. However, excited molecules are frequently less capable of light emission, in terms of brightness and duration, than sensitizers, because sensitizers can store energy (that is, remain excited) for longer periods of time than other excited molecules. The energy is stored by means of quantized vibration, so sensitizers are usually compounds that include either systems of aromatic rings or many conjugated double and triple bonds in their structure. Hence, if an excited molecule transfers its energy to a sensitizer, thus exciting it, a longer and easier-to-quantify light emission is often observed.
The color (that is, the wavelength ), brightness and duration of emission depend upon the sensitizer used. Usually, for a certain chemical reaction, many different sensitizers can be used.
Fluorescence spectroscopy , also known as fluorometry or spectrofluorometry, is a type of electromagnetic spectroscopy which analyzes fluorescence from a sample. It involves using a beam of light, usually ultraviolet light , that excites the electrons in molecules of certain compounds and causes them to emit light of a lower energy, typically, but not necessarily, visible light . A complementary technique is absorption spectroscopy . [ 15 ] [ 16 ]
Devices that measure fluorescence are called fluorometers or fluorimeters.
Absorption spectroscopy refers to spectroscopic techniques that measure the absorption of radiation, as a function of frequency or wavelength, due to its interaction with a sample. The sample absorbs energy, i.e., photons, from the radiating field. The intensity of the absorption varies as a function of frequency, and this variation is the absorption spectrum . Absorption spectroscopy is performed across the electromagnetic spectrum . [ 15 ] [ 16 ]
Photoelectrochemical reduction of carbon dioxide , also known as photoelectrolysis of carbon dioxide , is a chemical process whereby carbon dioxide is reduced to carbon monoxide or hydrocarbons by the energy of incident light. This process requires catalysts , most of which are semiconducting materials . The feasibility of this chemical reaction was first theorised by Giacomo Luigi Ciamician , an Italian photochemist. Already in 1912 he stated that "[b]y using suitable catalyzers, it should be possible to transform the mixture of water and carbon dioxide into oxygen and methane, or to cause other endo-energetic processes."
Furthermore, the reduced species may prove to be a valuable feedstock for other processes. If the incident light is solar, this process also potentially represents an energy route combining renewable energy with CO 2 reduction.
Thermodynamic potentials for the reduction of CO 2 to various products are given in the following table versus NHE at pH = 7. Single-electron reduction of CO 2 to the CO 2 ●− radical occurs at E° = −1.90 V versus NHE at pH = 7 in an aqueous solution at 25 °C under 1 atm gas pressure . The reason behind this highly negative, thermodynamically unfavorable single-electron reduction potential of CO 2 is the large reorganization energy between the linear molecule and the bent radical anion . Proton-coupled multi-electron steps for CO 2 reduction are generally more favorable than single-electron reductions, as thermodynamically more stable molecules are produced. [ 1 ]
Thermodynamically, proton-coupled multi-electron reduction of CO 2 is easier than single-electron reduction, but managing multiple proton-coupled multi-electron processes is kinetically very challenging. This leads to a high overpotential for the electrochemical heterogeneous reduction of CO 2 to hydrocarbons and alcohols. Further heterogeneous reduction of the singly reduced CO 2 ●− radical anion is also difficult, because of the repulsive interaction between the negatively biased electrode and the negatively charged anion.
Figure 2 shows that, in the case of a p-type semiconductor/liquid junction, photo-generated electrons are available at the semiconductor/liquid interface under illumination. The reduction of redox species happens at a less negative potential on an illuminated p-type semiconductor than on a metal electrode, owing to the band bending at the semiconductor/liquid interface. Figure 3 shows that, thermodynamically, some of the proton-coupled multi-electron CO 2 reductions lie within the semiconductor's band gap . This makes it feasible to photo-reduce CO 2 on p-type semiconductors, and various p-type semiconductors have been successfully employed for CO 2 photo-reduction, including p-GaP, p-CdTe, p-Si, p-GaAs, p-InP, and p-SiC. Kinetically, however, these reactions are extremely slow on the given semiconductor surfaces, which leads to significant overpotential for CO 2 reduction. Apart from the high overpotential , these systems have several advantages, including sustainability (nothing is consumed apart from light energy), direct conversion of solar energy to chemical energy, use of a renewable energy resource for an energy-intensive process, and stability of the process (semiconductors are stable under illumination). A different approach to photo-reduction of CO 2 involves molecular catalysts, photosensitizers and sacrificial electron donors; in this approach the sacrificial electron donors are consumed and the photosensitizers degrade under long exposure to illumination.
The photo-reduction of CO 2 on p-type semiconductor photo-electrodes has been achieved in both aqueous and non-aqueous media. The main difference between aqueous and non-aqueous media is the solubility of CO 2 : around 35 mM in aqueous media under 1 atm of CO 2 , versus around 210 mM in methanol and around 210 mM in acetonitrile.
Photoreduction of CO 2 to formic acid was demonstrated on a p-GaP photocathode in aqueous media. [ 2 ] Apart from several other reports of CO 2 photoreduction on p-GaP, other p-type semiconductors such as p-GaAs, [ 3 ] p-InP, p-CdTe, [ 4 ] and p + /p-Si [ 5 ] have also been successfully used for photoreduction of CO 2 . The lowest potential for CO 2 photoreduction was observed on p-GaP, which may be due to the high photovoltage expected from the wider-band-gap p-GaP (2.2 eV) photocathode. Apart from formic acid, other products observed for CO 2 photoreduction are formaldehyde , methanol and carbon monoxide . On p-GaP, p-GaAs and p + /p-Si photocathodes, the main product is formic acid with small amounts of formaldehyde and methanol; for p-InP and p-CdTe photocathodes, carbon monoxide and formic acid are observed in similar quantities. The mechanism proposed by Hori, [ 6 ] based on CO 2 reduction on metal electrodes, predicts formation in aqueous media of formic acid (when the singly reduced CO 2 ●− radical anion does not adsorb to the surface) and carbon monoxide (when it does). This mechanism can be invoked to explain the formation of mainly formic acid on p-GaP, p-GaAs and p + /p-Si photocathodes, owing to no adsorption of the singly reduced CO 2 ●− radical anion to the surface; in the case of p-InP and p-CdTe photocathodes, partial adsorption of the CO 2 ●− radical anion leads to formation of both carbon monoxide and formic acid. Low catalytic current density for CO 2 photoreduction and competitive hydrogen generation are the two major drawbacks of this system.
The maximum catalytic current density for CO 2 reduction that can be achieved in aqueous media is only about 10 mA cm −2 , based on the solubility of CO 2 and diffusion limitations. [ 7 ] The integrated maximum photocurrent under Air Mass 1.5 illumination, in the conventional Shockley–Queisser limit for solar energy conversion, for p-Si (1.12 eV), p-InP (1.3 eV), p-GaAs (1.4 eV), and p-GaP (2.3 eV) is 44.0 mA cm −2 , 37.0 mA cm −2 , 32.5 mA cm −2 and 9.0 mA cm −2 , respectively. [ 8 ] Therefore, non-aqueous media such as DMF, acetonitrile and methanol have been explored as solvents for CO 2 electrochemical reduction. In addition, methanol has been used industrially as a physical absorber of CO 2 in the Rectisol process. [ 9 ] As in aqueous media, p-Si, p-InP, p-GaAs, p-GaP and p-CdTe have been explored for CO 2 photoelectrochemical reduction. Among these, p-GaP has the lowest overpotential, whereas p-CdTe has a moderate overpotential but high catalytic current density in a DMF with 5% water mixture. [ 10 ] The main product of CO 2 reduction in non-aqueous media is carbon monoxide, and competitive hydrogen generation is minimized. The proposed mechanism for CO 2 reduction to CO in non-aqueous media involves single-electron reduction of CO 2 to the CO 2 ●− radical anion and adsorption of the radical anion to the surface, followed by a disproportionation reaction between unreduced CO 2 and the CO 2 ●− radical anion to form CO 3 2− and CO.
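The diffusion-limited ceiling can be sketched with j_lim = nFDC/δ. The diffusion coefficient and boundary-layer thickness below are assumed illustrative values, chosen only to show that currents of order 10 mA cm −2 follow from the ~35 mM aqueous solubility:

```python
# Diffusion-limited current density j_lim = n*F*D*C/delta for aqueous CO2
# reduction. D_CO2 and DELTA are illustrative assumptions, not measured values.
F = 96485.0      # Faraday constant, C/mol
N_ELECTRONS = 2  # e.g. CO2 -> CO is a two-electron reduction
D_CO2 = 1.9e-5   # cm^2/s, assumed diffusion coefficient of CO2 in water
C_CO2 = 35e-6    # mol/cm^3 (35 mM, saturated under 1 atm CO2)
DELTA = 0.01     # cm, assumed diffusion-layer thickness

j_lim = N_ELECTRONS * F * D_CO2 * C_CO2 / DELTA  # A/cm^2
print(f"{j_lim*1e3:.1f} mA/cm^2")  # ~12.8 mA/cm^2 with these assumptions
```

With these assumed parameters the ceiling lands near the ~10 mA cm −2 figure quoted above; the much higher solubility in non-aqueous solvents raises C and hence j_lim proportionally.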
Photoelectrochemistry is a subfield of study within physical chemistry concerned with the interaction of light with electrochemical systems . [ 1 ] [ 2 ] It is an active domain of investigation. One of the pioneers of this field of electrochemistry was the German electrochemist Heinz Gerischer . The interest in this domain is high in the context of development of renewable energy conversion and storage technology.
Photoelectrochemistry was intensively studied in the 1970s and 1980s in the wake of the first oil crisis . Because fossil fuels are non-renewable, it is necessary to develop processes to obtain renewable resources and use clean energy . Artificial photosynthesis , photoelectrochemical water splitting and regenerative solar cells are of special interest in this context. The photovoltaic effect was discovered by Alexandre Edmond Becquerel .
Heinz Gerischer , H. Tributsch, A. J. Nozik, A. J. Bard, A. Fujishima, K. Honda, P. E. Laibinis, K. Rajeshwar, T. J. Meyer, P. V. Kamat, N. S. Lewis, R. Memming and John Bockris are among the researchers who have contributed substantially to the field of photoelectrochemistry.
Semiconductor materials have energy band gaps and will generate an electron–hole pair for each absorbed photon if the energy of the photon is higher than the band gap energy of the semiconductor. This property of semiconductor materials has been successfully used to convert solar energy into electrical energy by photovoltaic devices .
In photocatalysis the electron–hole pair is immediately used to drive a redox reaction. However, electron–hole pairs suffer from fast recombination. In photoelectrocatalysis, a potential difference is applied to reduce recombination between the electrons and the holes. This increases the yield of conversion of light into chemical energy.
When a semiconductor comes into contact with a liquid ( redox species), to maintain electrostatic equilibrium there will be a charge transfer between the semiconductor and the liquid phase if the formal redox potential of the redox species lies inside the semiconductor band gap. At thermodynamic equilibrium, the Fermi level of the semiconductor and the formal redox potential of the redox species are aligned at the interface between them. This introduces an upward band bending in an n-type semiconductor for an n-type semiconductor/liquid junction (Figure 1(a)) and a downward band bending in a p-type semiconductor for a p-type semiconductor/liquid junction (Figure 1(b)). This characteristic of semiconductor/liquid junctions is similar to that of a rectifying semiconductor/metal junction, or Schottky junction . Ideally, to obtain good rectifying characteristics at the semiconductor/liquid interface, the formal redox potential must be close to the valence band of the semiconductor for an n-type semiconductor and close to the conduction band for a p-type semiconductor. The semiconductor/liquid junction has one advantage over the rectifying semiconductor/metal junction in that light is able to travel through to the semiconductor surface without much reflection, whereas most of the light is reflected back from the metal surface at a semiconductor/metal junction. Therefore, semiconductor/liquid junctions can also be used as photovoltaic devices, similar to solid-state p–n junction devices. Both n-type and p-type semiconductor/liquid junctions can be used as photovoltaic devices to convert solar energy into electrical energy; these are called photoelectrochemical cells . In addition, a semiconductor/liquid junction can be used to convert solar energy directly into chemical energy by virtue of photoelectrolysis at the junction.
Semiconductors are usually studied in a photoelectrochemical cell . Different configurations exist with a three-electrode device. The phenomenon under study takes place at the working electrode (WE), while the potential difference is applied between the WE and a reference electrode (RE; saturated calomel, Ag/AgCl). The current is measured between the WE and the counter electrode (CE; vitreous carbon, platinum gauze). The working electrode is the semiconductor material, and the electrolyte solution is composed of a solvent, a supporting electrolyte and a redox species.
A UV-vis lamp is usually used to illuminate the working electrode. The photoelectrochemical cell is usually made with a quartz window because quartz does not absorb the light. A monochromator can be used to control the wavelength sent to the WE.
Materials studied in photoelectrochemistry include:
- Group IV: C (diamond), Si, Ge, SiC , SiGe
- III–V compounds: BN, BP, BAs, AlN, AlP, AlAs, GaN, GaP, GaAs, InN, InP, InAs...
- II–VI compounds and transition-metal dichalcogenides: CdS, CdSe, CdTe, ZnO, ZnS, ZnSe, ZnTe, MoS 2 , MoSe 2 , MoTe 2 , WS 2 , WSe 2
- Metal oxides: TiO 2 , Fe 2 O 3 , Cu 2 O
- Sensitizing dyes: Methylene blue ...
Very recently, a scalable all-perovskite photoelectrochemical (PEC) solar hydrogen panel with an area of more than 123 cm 2 has been developed. [ 3 ]
Photoelectrochemistry has been intensively studied in the field of hydrogen production from water and solar energy. The photoelectrochemical splitting of water was discovered by Fujishima and Honda in 1972 on TiO 2 electrodes. Recently, many materials have shown promising properties for splitting water efficiently, but TiO 2 remains cheap, abundant and stable against photo-corrosion. The main problem of TiO 2 is its band gap, which is 3.0 or 3.2 eV depending on its crystallinity (rutile or anatase). These values are too high, and only wavelengths in the UV region can be absorbed. To increase the performance of this material in splitting water with solar wavelengths, it is necessary to sensitize the TiO 2 . Currently, quantum dot sensitization is very promising, but more research is needed to find new materials able to absorb light efficiently.
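The UV-only absorption of TiO 2 follows directly from the band gap via λ = hc/E g . A minimal sketch using standard physical constants and the band-gap values quoted above:

```python
# Sketch: absorption-edge wavelength from a band gap, illustrating why
# TiO2 (3.0-3.2 eV) absorbs only UV light. Constants are CODATA values.

H_PLANCK = 6.62607015e-34  # J s
C_LIGHT = 2.99792458e8     # m/s
EV_TO_J = 1.602176634e-19  # J per eV

def absorption_edge_nm(band_gap_ev: float) -> float:
    """Longest wavelength (nm) a semiconductor of this band gap can absorb."""
    return H_PLANCK * C_LIGHT / (band_gap_ev * EV_TO_J) * 1e9

# TiO2 phases: rutile ~3.0 eV, anatase ~3.2 eV
rutile_edge = absorption_edge_nm(3.0)   # ~413 nm, at the violet edge
anatase_edge = absorption_edge_nm(3.2)  # ~387 nm, in the UV
# Visible light spans roughly 400-700 nm, so almost all of the solar
# spectrum passes through unabsorbed; hence the need for sensitization.
```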
Photosynthesis is the natural process that converts CO 2 using light to produce hydrocarbon compounds such as sugars. The depletion of fossil fuels encourages scientists to find alternatives for producing such compounds. Artificial photosynthesis is a promising method that mimics natural photosynthesis to produce these compounds. The photoelectrochemical reduction of CO 2 is much studied because of its potential worldwide impact. Many researchers aim to find new semiconductors to develop stable and efficient photo-anodes and photo-cathodes.
Dye-sensitized solar cells , or DSSCs, use TiO 2 and dyes to absorb light. This absorption induces the formation of electron–hole pairs, which are used to oxidize and reduce the same redox couple, usually I − /I 3 − . Consequently, a potential difference is created, which induces a current. | https://en.wikipedia.org/wiki/Photoelectrochemistry
Photoelectrolysis of water , also known as photoelectrochemical water splitting , occurs in a photoelectrochemical cell when light is used as the energy source for the electrolysis of water, producing dihydrogen which can be used as a fuel. This process is one route to a " hydrogen economy ", in which hydrogen fuel is produced efficiently and inexpensively from natural sources without using fossil fuels . [ 1 ] [ 2 ] In contrast, steam reforming typically uses fossil fuels to obtain hydrogen. Photoelectrolysis is sometimes known colloquially as the hydrogen holy grail for its potential to yield a viable alternative to petroleum as a source of energy ; such an energy source would supposedly come without the sociopolitically undesirable effects of extracting and using petroleum.
Mechanism
The PEC cell primarily consists of three components: the photoelectrode , the electrolyte and a counter electrode . The semiconductor, crucial to this process, absorbs sunlight , initiating electron excitation and the subsequent splitting of water molecules into hydrogen and oxygen .
Photoanode Reaction (Oxygen Evolution): H2O → 2H+ + ½O2 + 2e−
Photocathode Reaction (Hydrogen Evolution): 2H+ + 2e− → H2
These half-reactions show the fundamental chemistry involved in photoelectrolysis, where the photoanode facilitates oxygen evolution and the photocathode supports hydrogen evolution.
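The minimum energy input implied by these half-reactions can be checked from the standard reversible potential of water splitting (1.23 V) and the two electrons transferred per H2 molecule. A minimal sketch using standard constants:

```python
# Sketch: thermodynamic minimum for the water-splitting half-reactions above.
# 1.23 V is the standard reversible cell potential; F is the Faraday constant.

F = 96485.0      # C/mol, Faraday constant
E_REV = 1.23     # V, standard potential for water splitting
N_ELECTRONS = 2  # electrons transferred per H2 molecule

def min_energy_kj_per_mol_h2(cell_voltage: float = E_REV) -> float:
    """Minimum electrical energy (kJ per mole of H2) at a given cell voltage."""
    return N_ELECTRONS * F * cell_voltage / 1000.0

# ~237 kJ/mol, matching the standard Gibbs free energy of water splitting
dg = min_energy_kj_per_mol_h2()
```

Real cells must operate above 1.23 V to overcome overpotentials, which is why cocatalysts that reduce overpotential (discussed below in the source's own terms) matter so much.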
Current Research and Technological Advances
Recent advancements have focused on enhancing the semiconductor materials and cell design to improve the solar-to-hydrogen (STH) conversion efficiency, currently between 8% and 14%, with a theoretical maximum of around 42%. [ 3 ] Innovations include:
Semiconductor Materials: Research emphasizes the importance of semiconductors with smaller band gaps (under 2.1 eV) which are more effective at utilizing broader light spectra, thus improving efficiency. [ 4 ]
Cocatalysts: The use of transition metal-based cocatalysts has been pivotal in enhancing charge separation and reducing overpotential, thereby improving the overall efficiency of the water-splitting reaction. [ 5 ]
Nanoporous Materials: These materials have been utilized to increase the surface area for electron transport, significantly boosting the efficiency of photoelectrochemical systems. [ 6 ]
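The STH figures quoted above follow from the operating photocurrent via the standard definition STH = j × 1.23 V / P_in, assuming all current goes to H2 (100% Faradaic efficiency). A sketch, with the conventional 100 mW/cm² AM1.5G reference input power:

```python
# Sketch: solar-to-hydrogen (STH) efficiency from photocurrent density,
# STH = j * 1.23 V / P_in, assuming 100% Faradaic efficiency toward H2.
# 100 mW/cm^2 is the standard AM1.5G input power.

def sth_efficiency(j_ma_per_cm2: float, p_in_mw_per_cm2: float = 100.0) -> float:
    """Fraction of incident solar power stored as H2 free energy."""
    return j_ma_per_cm2 * 1.23 / p_in_mw_per_cm2

# A photocurrent of ~10 mA/cm^2 corresponds to ~12.3% STH,
# within the 8-14% range reported for current devices.
eff = sth_efficiency(10.0)
```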
Advantages: Utilizing sunlight, photoelectrolysis serves as a renewable method for hydrogen production, offering scalability and adaptability across different geographical conditions.
Challenges: The primary hurdles include the still-developing efficiency of the process and the intermittent nature of solar energy, which can affect consistent hydrogen production. Additionally, finding durable and efficient materials for long-term operation remains a challenge. [ 7 ] [ 8 ]
Role in the Hydrogen Economy
As part of a sustainable hydrogen economy, photoelectrolysis presents a promising avenue for clean hydrogen production. Although currently more expensive than traditional methods like steam methane reforming, the potential for technological advancements could make it more economically viable. [ 9 ]
Conclusion and Future Prospects
The ongoing development in materials science and cell design is likely to enhance the viability of photoelectrolysis , making it a key player in the future landscape of renewable energy technologies. Continued research and investment in overcoming existing challenges will be crucial to harness the full potential of this technology.
Devices based on hydrogenase have also been investigated. [ 10 ]
| https://en.wikipedia.org/wiki/Photoelectrolysis_of_water
Photoelectron photoion coincidence spectroscopy ( PEPICO ) is a combination of photoionization mass spectrometry and photoelectron spectroscopy . [ 1 ] It is largely based on the photoelectric effect . Free molecules from a gas-phase sample are ionized by incident vacuum ultraviolet (VUV) radiation. In the ensuing photoionization , a cation and a photoelectron are formed for each sample molecule. The mass of the photoion is determined by time-of-flight mass spectrometry , whereas, in current setups, photoelectrons are typically detected by velocity map imaging . Electron times-of-flight are three orders of magnitude smaller than those of ions, which allows electron detection to be used as a time stamp for the ionization event, starting the clock for the ion time-of-flight analysis. In contrast with pulsed experiments, such as REMPI , in which the light pulse must act as the time stamp, this allows the use of continuous light sources, e.g. a discharge lamp or a synchrotron light source. No more than several ion–electron pairs are present simultaneously in the instrument, and the electron–ion pairs belonging to a single photoionization event can be identified and detected in delayed coincidence.
Brehm and von Puttkammer published the first PEPICO study on methane in 1967. [ 2 ] In the early works, a fixed energy light source was used, and the electron detection was carried out using retarding grids or hemispherical analyzers : the mass spectra were recorded as a function of electron energy. Tunable vacuum ultraviolet light sources were used in later setups, [ 3 ] [ 4 ] in which fixed, mostly zero kinetic energy electrons were detected, and the mass spectra were recorded as a function of photon energy. Detecting zero kinetic energy or threshold electrons in threshold photoelectron photoion coincidence spectroscopy, TPEPICO, has two major advantages. Firstly, no kinetic energy electrons are produced in energy ranges with poor Franck–Condon factors in the photoelectron spectrum, but threshold electrons can still be emitted via other ionization mechanisms. [ 5 ] Secondly, threshold electrons are stationary and can be detected with higher collection efficiencies, thereby increasing signal levels.
Threshold electron detection was first based on line-of-sight, i.e. a small positive field was applied towards the electron detector, and kinetic energy electrons with perpendicular velocities are stopped by small apertures. [ 6 ] The inherent compromise between resolution and collection efficiency was resolved by applying velocity map imaging [ 7 ] conditions. [ 8 ] Most recent setups offer meV or better (0.1 kJ mol −1 ) resolution both in terms of photon energy and electron kinetic energy. [ 9 ] [ 10 ]
The 5–20 eV (500–2000 kJ mol −1 , λ = 250–60 nm) energy range is of prime interest in valence photoionization. Widely tunable light sources are few and far between in this energy range. The only laboratory based one is the H 2 discharge lamp , which delivers quasi-continuous radiation up to 14 eV. [ 11 ] The few high resolution laser setups for this energy range are not easily tunable over several eV. Currently, VUV beamlines at third generation synchrotron light sources are the brightest and most tunable photon sources for valence ionization. The first high energy resolution PEPICO experiment at a synchrotron was the pulsed-field ionization setup at the Chemical Dynamics Beamline of the Advanced Light Source . [ 12 ]
The primary application of TPEPICO is the production of internal energy selected ions to study their unimolecular dissociation dynamics as a function of internal energy. The electrons are extracted by a continuous electric field and are velocity map imaged depending on their initial kinetic energy. Ions are accelerated in the opposite direction and their mass is determined by time-of-flight mass spectrometry. The data analysis yields dissociation thresholds, which can be used to derive new thermochemistry for the sample. [ 13 ]
The electron imager side can also be used to record photoionization cross sections, photoelectron energy and angular distributions. With the help of circularly polarized light, photoelectron circular dichroism (PECD) can be studied. [ 14 ] A thorough understanding of PECD effects could help explain the homochirality of life. [ 15 ] Flash pyrolysis can also be used to produce free radicals or intermediates, which are then characterized to complement e.g. combustion studies. [ 16 ] [ 17 ] In such cases, the photoion mass analysis is used to confirm the identity of the radical produced.
Photoelectron photoion coincidence spectroscopy can be used to shed light on reaction mechanisms, [ 18 ] and can also be generalized to study double ionization in (photoelectron) photoion photoion coincidence ((PE)PIPICO), [ 19 ] fluorescence using photoelectron photon coincidence (PEFCO), [ 20 ] or photoelectron photoelectron coincidence (PEPECO). [ 21 ] Times-of-flight of photoelectrons and photoions can be combined in a form of a map, which visualizes the dynamics of the dissociative ionization process. [ 22 ] Ion–electron velocity vector correlation functions can be obtained in double imaging setups, in which the ion detector also delivers position information. [ 23 ]
The relatively low intensity of the ionizing VUV radiation guarantees one-photon processes, in other words only one, fixed energy photon will be responsible for photoionization. The energy balance of photoionization comprises the internal energy and the adiabatic ionization energy of the neutral as well as the photon energy, the kinetic energy of the photoelectron and of the photoion. Because only threshold electrons are considered and the conservation of momentum holds, the last two terms vanish, and the internal energy of the photoion is known:

E internal (ion) = E internal (neutral) + hν − IE adiabatic
Scanning the photon energy corresponds to shifting the internal energy distribution of the parent ion. The parent ion sits in a potential energy well, in which the lowest energy exit channel often corresponds to the breaking of the weakest chemical bond , resulting in the formation of a fragment or daughter ion. A mass spectrum is recorded at every photon energy, and the fractional ion abundances are plotted to obtain the breakdown diagram. At low energies no parent ion is energetic enough to dissociate, and the parent ion corresponds to 100% of the ion signal. As the photon energy is increased, a certain fraction of the parent ions (in fact according to the cumulative distribution function of the neutral internal energy distribution) still has too little energy to dissociate, but some do. The parent ion fractional abundances decrease, and the daughter ion signal increases. At the dissociative photoionization threshold, E 0 , all parent ions, even the ones with initially 0 internal energy, can dissociate, and the daughter ion abundance reaches 100% in the breakdown diagram.
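The shape of the breakdown diagram described above can be sketched with a toy model: the neutral's thermal internal energy is approximated by an exponential (Boltzmann-like) distribution, and a parent ion survives when its total internal energy stays below the dissociation barrier. All numerical values below are illustrative assumptions, not data for any real molecule:

```python
# Sketch of a breakdown diagram: parent-ion fractional abundance vs photon
# energy. IE, E0 and KT are hypothetical placeholder values.
import math

IE = 10.0  # eV, adiabatic ionization energy (hypothetical; photon must exceed it)
E0 = 11.0  # eV, dissociative photoionization threshold (hypothetical)
KT = 0.05  # eV, mean thermal internal energy of the neutral (hypothetical)

def parent_fraction(photon_ev: float) -> float:
    """Surviving parent-ion fraction at a given photon energy (IE <= hv).

    A parent ion dissociates when thermal energy + (hv - IE) exceeds the
    barrier E0 - IE, i.e. when its thermal energy exceeds E0 - hv.
    """
    e_surv = E0 - photon_ev  # largest thermal energy that still survives
    if e_surv <= 0.0:
        return 0.0           # at and above E0, even cold parents dissociate
    return 1.0 - math.exp(-e_surv / KT)  # exponential-distribution CDF

def daughter_fraction(photon_ev: float) -> float:
    return 1.0 - parent_fraction(photon_ev)
```

Well below E0 the parent dominates; the crossover to 100% daughter signal at hv = E0 reproduces the threshold behavior described in the text.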
If the potential energy well of the parent ion is shallow and the complete initial thermal energy distribution is broader than the depth of the well, the breakdown diagram can also be used to determine adiabatic ionization energies. [ 24 ]
The data analysis becomes more demanding if there are competing parallel dissociation channels or if the dissociation at threshold is too slow to be observed on the time scale (several μs) of the experiment. In the first case, the slower dissociation channel will appear only at higher energies, an effect called competitive shift, whereas in the second, the resulting kinetic shift means that the fragmentation will only be observed at some excess energy, i.e. only when it is fast enough to take place on the experimental time scale. When several dissociation steps follow sequentially, the second step typically occurs at high excess energies: the system has much more internal energy than needed for breaking the weakest bond in the parent ion. Some of this excess energy is retained as internal energy of the fragment ion, some may be converted into the internal energy of the leaving neutral fragment (invisible to mass spectrometry) and the rest is released as kinetic energy, in that the fragments fly apart at some non-zero velocity.
More often than not, dissociative photoionization processes can be described within a statistical framework, similarly to the approach used in collision-induced dissociation experiments. If the ergodic hypothesis holds, the system will explore each region of the phase space with a probability according to its volume. A transition state (TS) can then be defined in the phase space, which connects the dissociating ion with the dissociation products, and the dissociation rates for the slow or competing dissociations can be expressed in terms of the TS phase space volume vs. the total phase space volume. The total phase space volume is calculated in a microcanonical ensemble using the known energy and the density of states of the dissociating ion. There are several approaches to defining the transition state, the most widely used being RRKM theory . The unimolecular dissociation rate curve as a function of energy, k ( E ), vanishes below the dissociative photoionization energy, E 0 . [ 25 ]
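A minimal illustration of such a statistical rate is the RRKM expression k(E) = W‡(E − E0)/(h ρ(E)), with harmonic-oscillator sums and densities of states obtained by the Beyer–Swinehart direct count. The vibrational frequencies and E0 below are illustrative placeholders, not values for a real ion:

```python
# Sketch of an RRKM microcanonical rate, k(E) = W_ts(E - E0) / (h * rho(E)),
# using the Beyer-Swinehart direct count for harmonic oscillators.
# All frequencies and E0 are illustrative, in cm^-1.

C_CM_PER_S = 2.99792458e10  # speed of light in cm/s
GRAIN = 10                  # energy grid spacing, cm^-1

def beyer_swinehart(freqs, e_max):
    """Per-grain counts of harmonic-oscillator states from 0 to e_max (cm^-1)."""
    n = int(e_max // GRAIN) + 1
    counts = [0.0] * n
    counts[0] = 1.0                       # the zero-point level
    for f in freqs:
        step = max(1, int(round(f / GRAIN)))
        for i in range(step, n):
            counts[i] += counts[i - step]  # stack quanta of mode f
    return counts

def rrkm_rate(e, e0, freqs_ion, freqs_ts):
    """k(E) in s^-1 for one channel (symmetry number 1), energies in cm^-1."""
    if e <= e0:
        return 0.0
    w_ts = sum(beyer_swinehart(freqs_ts, e - e0))  # sum of states at the TS
    ion_counts = beyer_swinehart(freqs_ion, e)
    # density of states near E, averaged over a few grains to smooth noise
    rho = sum(ion_counts[-10:]) / (10 * GRAIN)     # states per cm^-1
    h_cm_s = 1.0 / C_CM_PER_S                      # Planck constant in cm^-1 s
    return w_ts / (h_cm_s * rho)

# A 5-mode model ion; the TS loses one mode (the reaction coordinate):
ion_modes = [500.0, 800.0, 1000.0, 1200.0, 1500.0]
ts_modes = [500.0, 800.0, 1000.0, 1200.0]
k_low = rrkm_rate(16000.0, 15000.0, ion_modes, ts_modes)
k_high = rrkm_rate(20000.0, 15000.0, ion_modes, ts_modes)  # k(E) rises with E
```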
Statistical theory can also be used in the microcanonical formalism to describe the excess energy partitioning in sequential dissociation steps, as proposed by Klots [ 26 ] for a canonical ensemble. Such a statistical approach was used for more than a hundred systems to determine accurate dissociative photoionization onsets, and derive thermochemical information from them. [ 27 ]
Furthermore, algorithms based on probabilistic Bayesian analyses are known to considerably reduce systematic biases induced by false coincidences. The intensity of these false coincidences can be strong enough for them to appear as separate peaks in the signal and complicate the analysis of the spectra. [ 28 ]
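The need to keep only a few ion–electron pairs in flight at once can be quantified with Poisson statistics: for ionization events arriving at rate r, the probability that an unrelated event lands inside a coincidence window T is 1 − e^(−rT). The rate and window below are illustrative values, not parameters of any particular instrument:

```python
# Sketch: false-coincidence estimate for a coincidence experiment, assuming
# Poisson-distributed ionization events. Keeping r*T << 1 keeps false
# coincidences rare.
import math

def false_coincidence_prob(rate_hz: float, window_s: float) -> float:
    """Probability of >= 1 unrelated event inside the coincidence window."""
    return 1.0 - math.exp(-rate_hz * window_s)

# e.g. 1000 ionization events/s and a 30 microsecond ion time-of-flight window
p_false = false_coincidence_prob(1000.0, 30e-6)  # ~3% false coincidences
```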
Dissociative photoionization processes can be generalized as:

AB + hν → A + + B + e −
If the enthalpies of formation of two of the three species are known, the third can be calculated with the help of the dissociative photoionization energy, E 0 , using Hess's law . This approach was used, for instance, to determine the enthalpy of formation of the methyl ion , CH 3 + , [ 29 ] which in turn was used to obtain the enthalpy of formation of iodomethane , CH 3 I, as 15.23 kJ mol −1 , with an uncertainty of only 0.3 kJ mol −1 . [ 30 ]
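For a dissociative photoionization AB + hν → A+ + B + e−, Hess's law gives E0 = ΔfH(A+) + ΔfH(B) − ΔfH(AB), so any one of the three enthalpies of formation follows from the other two. A sketch with purely illustrative numbers (not the literature values quoted above; 0 K vs 298 K conversions are ignored):

```python
# Sketch of the thermochemical use of E0 via Hess's law:
# E0 = dHf(A+) + dHf(B) - dHf(AB). All inputs below are hypothetical.

EV_TO_KJ_MOL = 96.485  # 1 eV = 96.485 kJ/mol

def unknown_dhf_parent(e0_ev: float,
                       dhf_fragment_ion: float,
                       dhf_neutral_fragment: float) -> float:
    """dHf(AB) in kJ/mol from a measured onset E0 (eV) and two known dHf values."""
    return dhf_fragment_ion + dhf_neutral_fragment - e0_ev * EV_TO_KJ_MOL
```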
If different sample molecules produce shared fragment ions, a complete thermochemical chain can be constructed, as was shown for some methyl trihalides, [ 31 ] where the uncertainty in, e.g., the CHCl 2 Br ( Halon-1021 ) heat of formation was reduced from 20 to 2 kJ mol −1 . Furthermore, dissociative photoionization energies can be combined with calculated isodesmic reaction energies to build thermochemical networks. Such an approach was used to revise primary alkylamine enthalpies of formation. [ 32 ] | https://en.wikipedia.org/wiki/Photoelectron_photoion_coincidence_spectroscopy
In chronobiology , photoentrainment refers to the process by which an organism's biological clock, or circadian rhythm , synchronizes to daily cycles of light and dark in the environment. The mechanisms of photoentrainment differ from organism to organism. [ 1 ] Photoentrainment plays a major role in maintaining proper timing of physiological processes and coordinating behavior within the natural environment. [ 2 ] [ 3 ] Studying organisms’ different photoentrainment mechanisms sheds light on how organisms may adapt to anthropogenic changes to the environment. [ 4 ] [ 5 ]
24-hour physiological rhythms, known now as circadian rhythms, were first documented in 1729 by Jean Jacques d'Ortous de Mairan , a French astronomer who observed that mimosa plants ( Mimosa pudica ) would orient themselves toward the position of the sun despite being kept in a dark room. [ 6 ] That observation spawned the field of chronobiology, which seeks to understand the mechanisms that underlie endogenously expressed daily rhythms in organisms from cyanobacteria to mammals , including understanding and modeling the process of photoentrainment.
Two prominent 20th century chronobiologists, Jürgen Aschoff and Colin Pittendrigh , both worked throughout the 1960s to model the process of photoentrainment, and despite examining the same subject, they arrived at different conclusions. Aschoff proposed a parametric model of entrainment, which assumed that organisms entrained to environmental timing cues (often referred to as zeitgebers , or "time givers" in German) gradually, changing their internal "circadian" period to be greater or less than 24 hours until it became aligned with the zeitgeber time. [ 7 ] Conversely, Pittendrigh proposed a non-parametric model of entrainment, which assumed that organisms adjusted their internal clocks instantaneously when confronted with a light signal, or zeitgeber, that was out of sync with when their internal circadian time expected to see light. [ 7 ]
Pittendrigh developed his model based on the phase-response curve , which visualizes the effect of short light pulses on organisms that were free-running (not entrained to a zeitgeber). Pittendrigh determined that an organism’s response to light depended on when the signal was presented. It was determined that exposure to light in the organism’s early subjective night (the early portion of an organism’s “normal” dark period) produced a delay in onset of activity in the following day (phase delay). Additionally, light exposure in the late subjective night resulted in advanced activity in the following day (phase advance). [ 8 ] The phase changes experienced by the organism could be represented by a phase-response curve consisting of portions including the advance zone, delay zone, and dead zone. This model became widely accepted over Aschoff's parametric model, but it is still unclear which model most effectively explains the process of photoentrainment. [ 7 ]
Light intensity in conditions of constant light was found to also modulate an organism’s response. Exposure to higher-intensity light was found to either extend or shorten an organism’s period depending on species, dubbed Aschoff’s rule . [ 8 ]
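Pittendrigh's non-parametric picture can be sketched as an iterated map: each day the clock gains or loses the difference between 24 h and its free-running period τ relative to the zeitgeber, and the daily light pulse applies an instantaneous shift read off the phase-response curve. The sinusoidal PRC and the value of τ below are toy assumptions, not measured curves:

```python
# Sketch of non-parametric (discrete) entrainment: a daily light pulse
# shifts the clock by a phase-response-curve (PRC) amount. Entrainment is
# reached when the daily shift exactly offsets the period mismatch.
import math

TAU = 23.5     # free-running period, h (illustrative)
T_CYCLE = 24.0

def prc(phase_h: float) -> float:
    """Phase shift (h) from a light pulse at circadian phase `phase_h`.

    Positive values advance the clock, negative values delay it; a toy
    sinusoid stands in for a measured phase-response curve.
    """
    return math.sin(2.0 * math.pi * phase_h / T_CYCLE)

def entrain(phase0: float, days: int = 200) -> float:
    """Circadian phase at which the daily pulse hits after `days` cycles."""
    phase = phase0
    for _ in range(days):
        # between pulses the fast clock gains (T_CYCLE - TAU) h on the
        # zeitgeber, then the pulse shifts it by prc(phase)
        phase = (phase + (T_CYCLE - TAU) + prc(phase)) % T_CYCLE
    return phase

locked = entrain(6.0)
# At the stable fixed point the PRC cancels the period mismatch:
# prc(locked) == TAU - T_CYCLE, i.e. a 0.5 h daily delay here.
```

The map converges to the phase where the PRC delay exactly compensates the 0.5 h/day the fast clock would otherwise gain, which is the discrete analogue of stable entrainment.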
The molecular mechanism for photoentrainment in multicellular organisms such as fungi and animals has been linked to the transcription-translation feedback loop (TTFL) , where translated protein products influence gene transcription. [ 9 ] The TTFL is composed of both a positive and a negative arm: the positive arm proteins promote transcription of negative arm genes, while the negative arm proteins inhibit the activity of the positive arm. The TTFL has been found to be autonomous and to have a period of roughly 24 hours. [ 9 ] Components of the positive and negative arms differ by organism, but in mammals positive arm components include CLOCK and BMAL1 , while negative arm components include PER1 , PER2 , CRY1 , and CRY2 . [ 9 ] In many mammals, light signals detected by photoreceptors in the eye are relayed to the mammalian master clock located in the suprachiasmatic nucleus (SCN) , which then affects the timing of the various positive and negative arms. [ 10 ] The resulting changes in the expression of the various clock proteins are what allow the organism to undergo photoentrainment. [ 9 ]
In single-celled organisms, circadian rhythms are believed to be generated without the use of a TTFL, but rather with a three-protein complex called the KaiABC complex . The mechanism of entrainment in this system is known to be controlled by various proteins. [ 11 ]
Entrainment to environmental cycles is an advantageous trait and is thus found in nearly all organisms. Many ecological relationships, such as predator-prey interactions, pollinator behaviors and migration timing, require the synchronization of an organism's biological clock with the 24-hour rhythm of the planet. [ 12 ] Individuals who are not entrained, that is, not synchronized to the cycle of day and night, may miss out on feeding opportunities, mating opportunities, etc., which may impact their chances of survival. The known models of both the circadian clock and the mechanism of entrainment vary across domains and kingdoms, and the behavioral significance of entrainment varies as well.
Mammals, in order to survive, must wake up at specific times to secure meals and avoid becoming prey themselves. In mammals, the external light-dark cycle entrains a master clock, which then synchronizes various circadian oscillators throughout the body known as peripheral clocks. [ 8 ] The photopigment melanopsin is present in certain retinal ganglion cells called intrinsically photosensitive retinal ganglion cells (ipRGCs) , which send signals to the suprachiasmatic nucleus (SCN) , the mammalian master clock that controls circadian rhythms throughout the body. [ 10 ] In addition to melanopsin, studies using melanopsin-knockout mice have determined that rods and cones can also play a role in the photic responses of the SCN. [ 10 ] Enucleation (removal of the eye) in mammals resulted in free-running rhythms, indicating the eye is necessary for photoentrainment. [ 13 ]
Photoautotrophic cyanobacteria depend on sunlight for energy, so a failure to anticipate nighttime would threaten their ability to survive and reproduce. They need sufficient glycogen reserves to last through the night. [ 14 ] Photoentrainment also allows cyanobacteria to respond to light properly so as to prepare their photosynthetic apparatus for dawn, when blue light is prominent. Appropriate synchronization to light also facilitates the temporal separation between oxygen-sensitive nitrogen fixation and oxygen-generating photosynthesis, lest the latter inhibit the former. [ 15 ]
Cyanobacteria can entrain to light pulses at the single-cell level, but not all strains of cyanobacteria entrain to light. While some cyanobacteria show rhythmic photosynthesis under constant light conditions, others exhibit constitutive photosynthetic activity, as measured by the levels of photosynthetic oxygen evolution. [ 16 ]
Fungi, like mammals, use a TTFL-driven clock, and therefore their entrainment involves adjustments to the concentrations of certain clock proteins based on environmental stimuli. More specifically, blue light induces transcription of frequency gene frq via photoreceptor WC‐1 and its partner WC‐2 , and the protein product FRQ subsequently regulates the activity of WC-1 and WC-2 via phosphorylation . [ 17 ] [ 18 ] Ultraviolet radiation and other light wavelengths can cause DNA damage and mutations in fungi. Since DNA replication requires chromosome unwinding and exposes the DNA molecule to UV damage, fungi need to schedule DNA replication during the time of the day with the lowest UV radiation. [ 19 ]
Photoentrainment has numerous clinical implications. Light therapy can be used to treat a number of afflictions, such as jet lag , seasonal affective disorder (SAD) , sleep disorders , dementia , bipolar disorder and so on.
Jet lag occurs when one's circadian rhythm is out of sync with the environment, usually as a result of travel across time zones. People with jet lag experience symptoms such as fatigue, insomnia, headaches, etc. Light therapy has been hypothesized to help mitigate these symptoms. A study has shown that light therapy tailored to the direction of one's travel can be beneficial: [ 20 ] eastward travelers received phase-advancing light therapy before their flight, and westward travelers received phase-delaying light therapy before their flight. [ 21 ]
Disruption of an individual's dopamine activity due to the lack of light in the winter months is thought to be a cause of seasonal affective disorder (SAD) . Thus, it was hypothesized that light therapy could help increase one’s retinal dopamine activity by providing light that is no longer attainable in the environment. [ 22 ] The practice of phototherapy was started in 1984. Traditionally, one receiving phototherapy for SAD will get a morning treatment of 5000 lux per hour. The effect of this treatment is that one’s circadian rhythm will be advanced. This is done in order to counteract the phase delay during winter. [ 21 ]
Light therapy can also be used to treat circadian rhythm sleep disorders. These disorders are caused by discrepancies between one's circadian rhythm and the light/dark cycle of the environment. People with a sleep disorder experience insomnia or hypersomnia . There are a number of sleep disorders that light therapy is effective in treating, such as delayed sleep phase type (DSPT) and advanced sleep phase type (ASPT) . DSPT occurs when one sleeps late and is unable to wake up early, resulting in a lack of entrainment to a typical working schedule. There are a number of methods to help resolve DSPT, including white light exposure in the morning and light restriction after 4:00 p.m., light masks, and blue light exposure in the morning. [ 23 ] ASPT is marked by both sleeping and waking up early and is usually seen in older adults. Light therapy in the evening (administered before one's body temperature reaches its low point) may help in inducing phase delay in these patients. [ 21 ]
Dementia is a decline in mental functioning that results in impairment in memory, thinking, decision-making, etc. Dementia is associated with disruptions in one’s sleep-wake cycle. Thus, light therapy may aid in the improvement of the disrupted sleep-wake cycle. [ 24 ] If true, this will result in better sleep along with improved functioning. Studies have looked into light therapy as a treatment for dementia, however, the results have been conflicting. One study found that morning light therapy helped dementia patients with their sleep, yet functioning did not improve. In other trials, neither sleep nor behavior seemed to improve. Therefore, more research should be done in order to clarify the potential of light therapy as a successful treatment technique for dementia. [ 21 ]
Bipolar disorder is a mental disorder characterized by sudden shifts of behavior, emotions, energy, and so on, and these shifts can be called bipolar episodes. People with bipolar disorder can experience both manic episodes and depressive episodes. Bipolar disorder is difficult to treat, so light therapy was looked at as a potential solution. One relevant study was a meta-analysis of light therapy trials for bipolar disorder. The findings overall were encouraging as well as non-conclusive. The findings indicate that light therapy can limit symptoms and improve clinical response. [ 25 ] Further, a different meta-analysis found that light therapy helped patients with their symptoms and did not cause any negative effects. However, light therapy did not impact bipolar disorder remission rates. [ 26 ] | https://en.wikipedia.org/wiki/Photoentrainment_(chronobiology) |
Photoexcitation is the production of an excited state of a quantum system by photon absorption. The excited state originates from the interaction between a photon and the quantum system . Photons carry an energy determined by the wavelength of the light: the shorter the wavelength, the more energy each photon carries. [ 1 ] Light with longer wavelengths therefore consists of photons carrying less energy, while light with shorter wavelengths consists of photons carrying more energy. When a photon interacts with a quantum system, it is thus important to know the wavelength involved, since a shorter wavelength transfers more energy to the system than a longer one.
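The inverse relation between wavelength and photon energy is E = hc/λ; a minimal sketch of the arithmetic (the function name is illustrative):

```python
# Photon energy E = h*c / wavelength: shorter wavelengths carry more energy.
PLANCK_H = 6.62607015e-34   # J*s (exact, SI definition)
LIGHT_C = 2.99792458e8      # m/s (exact)
EV = 1.602176634e-19        # J per electronvolt (exact)

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon, in eV, for a given wavelength in nm."""
    return PLANCK_H * LIGHT_C / (wavelength_nm * 1e-9) / EV

# A 266 nm UV photon carries twice the energy of a 532 nm visible photon.
assert photon_energy_ev(266.0) > photon_energy_ev(532.0)
print(round(photon_energy_ev(266.0), 2))  # ~4.66 eV
```

Halving the wavelength doubles the photon energy, which is why ultraviolet photons can drive excitations that visible photons cannot.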
On the atomic and molecular scale photoexcitation is the photoelectrochemical process of electron excitation by photon absorption, when the energy of the photon is too low to cause photoionization . The absorption of the photon takes place in accordance with Planck's quantum theory.
Photoexcitation plays a role in photoisomerization and is exploited in different techniques:
On the nuclear scale photoexcitation includes the production of nucleon and delta baryon resonances in nuclei. | https://en.wikipedia.org/wiki/Photoexcitation |
Photofermentation is the fermentative conversion of organic substrate to biohydrogen , carried out by a diverse group of photosynthetic bacteria through a series of biochemical reactions involving three steps similar to anaerobic conversion . Photofermentation differs from dark fermentation in that it only proceeds in the presence of light .
For example, photo-fermentation with Rhodobacter sphaeroides SH2C (or many other purple non-sulfur bacteria [ 1 ] ) can be employed to convert small molecular fatty acids into hydrogen [ 2 ] and other products.
Phototrophic bacteria produce hydrogen gas via photofermentation, where the hydrogen is sourced from organic compounds. [ 4 ]
C6H12O6 + 6 H2O → 6 CO2 + 12 H2 (under light, hv) [ 4 ]
Photolytic producers are similar to phototrophs, but source hydrogen from water molecules that are broken down as the organism interacts with light. [ 4 ] Photolytic producers consist of algae and certain photosynthetic bacteria. [ 4 ]
12 H2O → 12 H2 + 6 O2 (under light, hv; algae) [ 4 ]
CO + H2O → H2 + CO2 (under light, hv; photolytic bacteria) [ 4 ]
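The stoichiometry of these reactions can be verified by counting atoms on each side; a small sketch (the formula parser is deliberately minimal and only handles simple formulas like the ones above):

```python
import re
from collections import Counter

def count_atoms(formula_terms):
    """Sum element counts over terms like ('C6H12O6', 1) or ('H2O', 6)."""
    total = Counter()
    for formula, coeff in formula_terms:
        # Each match is an element symbol followed by an optional count.
        for element, n in re.findall(r'([A-Z][a-z]?)(\d*)', formula):
            total[element] += coeff * (int(n) if n else 1)
    return total

# Photofermentation of glucose: C6H12O6 + 6 H2O -> 6 CO2 + 12 H2
left = count_atoms([('C6H12O6', 1), ('H2O', 6)])
right = count_atoms([('CO2', 6), ('H2', 12)])
assert left == right  # balanced: 6 C, 24 H, 12 O on both sides
```

The same check confirms the algal photolysis equation (12 H2O → 12 H2 + 6 O2) is balanced as written.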
Photofermentation via purple nonsulfur producing bacteria has been explored as a method for the production of biofuel. [ 5 ] The natural fermentation product of these bacteria, hydrogen gas, can be harnessed as a natural gas energy source. [ 6 ] [ 7 ] Photofermentation via algae instead of bacteria is used for bioethanol production, among other liquid fuel alternatives. [ 8 ]
The bacteria and their energy source are held in a bioreactor chamber that is impermeable to air and oxygen free. [ 7 ] The proper temperature for the bacterial species is maintained in the bioreactor. [ 7 ] The bacteria are sustained with a carbohydrate diet consisting of simple saccharide molecules. [ 9 ] The carbohydrates are typically sourced from agricultural or forestry waste. [ 9 ]
In addition to wild type forms of Rhodopseudomonas palustris , scientists have used genetically modified forms to produce hydrogen as well. [ 5 ] Other explorations include expanding the bioreactor system to hold a combination of bacteria, algae or cyanobacteria . [ 7 ] [ 9 ] Ethanol production is performed by the alga Chlamydomonas reinhardtii , among other species, in cycling light and dark environments. [ 8 ] The cycling of light and dark environments has also been explored with bacteria for hydrogen production, increasing hydrogen yield. [ 10 ]
The bacteria are typically fed with broken down agricultural waste or undesired crops, such as water lettuce or sugar beet molasses. [ 11 ] [ 5 ] The high abundance of such waste ensures a stable food source for the bacteria and productively uses human-produced waste. [ 5 ] In comparison with dark fermentation , photofermentation produces more hydrogen per reaction and avoids the acidic end products of dark fermentation. [ 12 ]
The primary limitations of photofermentation as a sustainable energy source stem from the precise requirements of maintaining the bacteria in the bioreactor. [ 7 ] Researchers have found it difficult to maintain a constant temperature for the bacteria within the bioreactor. [ 7 ] Furthermore, the growth media for the bacteria must be rotated and refreshed without introducing air to the bioreactor system, complicating the already expensive bioreactor setup. [ 7 ] [ 9 ] | https://en.wikipedia.org/wiki/Photofermentation |
Photofission is a process in which a nucleus , after absorbing a gamma ray , undergoes nuclear fission and splits into two or more fragments.
The reaction was discovered in 1940 by a small team of engineers and scientists operating the Westinghouse Atom Smasher at the company's Research Laboratories in Forest Hills, Pennsylvania . [ 1 ] They used a 5 MeV proton beam to bombard fluorine and generate high-energy photons , which then irradiated samples of uranium and thorium . [ 2 ]
Gamma radiation of modest energies, in the low tens of MeV, can induce fission in traditionally fissile elements such as the actinides thorium , uranium , [ 3 ] plutonium , and neptunium . [ 4 ] Experiments have been conducted with much higher energy gamma rays, finding that the photofission cross section varies little at energies in the low GeV range. [ 5 ]
Baldwin et al. made measurements of the yields of photo-fission in uranium and thorium, together with a search for photo-fission in other heavy elements, using continuous x-rays from a 100-MeV betatron . Fission was detected in the presence of an intense background of x-rays by a differential ionization chamber and linear amplifier, the substance investigated being coated on an electrode of one chamber. They deduced a maximum cross section of the order of 5×10 −26 cm 2 for uranium and half that for thorium. In the other elements studied, the cross section must be below 10 −29 cm 2 . [ 6 ]
Photodisintegration (also called phototransmutation) is a similar but different physical process, in which an extremely high energy gamma ray interacts with an atomic nucleus and causes it to enter an excited state , which immediately decays by emitting a subatomic particle . | https://en.wikipedia.org/wiki/Photofission |
Photofragment ion imaging or, more generally, product imaging is an experimental technique for making measurements of the velocity of product molecules or particles following a chemical reaction or the photodissociation of a parent molecule. [ 1 ] The method uses a two-dimensional detector, usually a microchannel plate , to record the arrival positions of state-selected ions created by resonantly enhanced multi-photon ionization ( REMPI ). The first experiment using photofragment ion imaging was performed by David W. Chandler and Paul L. Houston in 1987 on the photodissociation dynamics of methyl iodide ( iodomethane , CH 3 I). [ 2 ]
Many problems in molecular reaction dynamics demand the simultaneous measurement of a particle's speed and angular direction; the most demanding require the measurement of this velocity in coincidence with internal energy. Studies of molecular reactions, energy transfer processes and photodissociation can only be understood completely if the internal energies and velocities of all products can be specified. [ 3 ] Product imaging approaches this goal by determining the three-dimensional velocity distribution of one state-selected product of the reaction. For a reaction producing two products, because the speed of the unobserved sibling product is related to that of the measured product through conservation of momentum and energy, the internal state of the sibling can often be inferred.
A simple example illustrates the principle. Ozone (O 3 ) dissociates following ultraviolet excitation to yield an oxygen atom and an oxygen molecule. Although there are (at least) two possible channels, the principal products are O( 1 D) and O 2 ( 1 Δ); that is, both the atom and the molecule are in their first excited electronic state (see atomic term symbol and molecular term symbol for further explanation). At a wavelength of 266 nm, the photon has enough energy to dissociate ozone to these two products, to excite the O 2 ( 1 Δ) vibrationally to a maximum level of v = 3, and to provide some energy to the recoil velocity between the two fragments. Of course, the more energy that is used to excite the O 2 vibrations, the less will be available for the recoil. REMPI of the O( 1 D) atom, combined with the product imaging technique, yields an image that can be used to calculate the O( 1 D) three-dimensional velocity distribution. A slice through this cylindrically symmetric distribution is shown in the figure, where an O( 1 D) atom that has zero velocity in the center-of-mass frame would arrive at the center of the figure.
Note that there are four rings, corresponding to four main groups of O( 1 D) speeds. These correspond to O 2 ( 1 Δ) production in vibrational levels v = 0, 1, 2, and 3. The ring corresponding to v = 0 is the outer one, since production of the O 2 ( 1 Δ) in this level leaves the most energy for recoil between the O( 1 D) and O 2 ( 1 Δ). Thus, the product imaging technique immediately shows the vibrational distribution of the O 2 ( 1 Δ).
Note that the angular distribution of the O( 1 D) is not uniform – more of the atoms fly toward the north or south pole than to the equator. In this case, the north-south axis is parallel to the polarization direction of the light that dissociated the ozone. Ozone molecules that absorb the polarized light are those in a particular alignment distribution, with a line connecting the end oxygen atoms in O 3 roughly parallel to the polarization. Because the ozone dissociates more rapidly than it rotates, the O and O 2 products recoil predominantly along this polarization axis. But there is more detail as well. A close examination shows that the peak in the angular distribution is not actually exactly at the north or south pole, but rather at an angle of about 45 degrees. This has to do with the polarization of the laser that ionizes the O( 1 D), and can be analyzed to show that the angular momentum of this atom (which has 2 units) is aligned relative to the velocity of recoil. More detail can be found elsewhere. [ 4 ]
There are other dissociation channels available to ozone following excitation at this wavelength. One produces O( 3 P) and O 2 ( 3 Σ), indicating that both the atom and molecule are in their ground electronic state. The image above has no information on this channel, since only the O( 1 D) is probed. However, by tuning the ionization laser to the REMPI wavelength of O( 3 P) one finds a completely different image that provides information about the internal energy distribution of O 2 ( 3 Σ). [ 5 ]
In the original product imaging paper, the positions of the ions are imaged onto a two-dimensional detector. A photolysis laser dissociates methyl iodide (CH 3 I), while an ionization laser uses REMPI to ionize a particular vibrational level of the CH 3 product. Both lasers are pulsed, and the ionization laser is fired at a delay short enough that the products have not moved appreciably. Because ejection of an electron by the ionization laser does not change the recoil velocity of the CH 3 fragment, its position at any time following the photolysis is nearly the same as it would have been as a neutral. The advantage of converting it to an ion is that, by repelling it with a set of grids (represented by the vertical solid lines in the figure), one can project it onto a two-dimensional detector. The detector is a double microchannel plate consisting of two glass discs with closely packed open channels (several micrometres in diameter). A high voltage is placed across the plates. As an ion hits inside a channel, it ejects secondary electrons that are then accelerated into the walls of the channel. Since multiple electrons are ejected for each one that hits the wall, the channels act as individual particle multipliers. At the far end of the plates approximately 10 7 electrons leave the channel for each ion that entered. Importantly, they exit from a spot right behind where the ion entered. The electrons are then accelerated to a phosphor screen, and the spots of light are recorded with a gated charge-coupled device (CCD) camera. The image collected from each pulse of the lasers is then sent to a computer, and the results of many thousands of laser pulses are accumulated to provide an image such as the one for ozone shown previously.
In this position-sensing version of product imaging, the position of the ions as they hit the detector is recorded. One can imagine the ions produced by the dissociation and ionization lasers as expanding outward from the center-of-mass with a particular distribution of velocities. It is this three-dimensional object that we wish to detect. Since the ions created should be of the same mass, they will all be accelerated uniformly toward the detector. It takes very little time for the whole three-dimensional object to be crushed into the detector, so the position of an ion on the detector relative to the center position is given simply by v Δt, where v is its velocity and Δt is the time between when the ions were made and when they hit the detector. The image is thus a two-dimensional projection of the desired three-dimensional velocity distribution. Fortunately, for systems with an axis of cylindrical symmetry parallel to the surface of the detector, the three-dimensional distribution may be recovered from the two-dimensional projection by the use of the inverse Abel transform . The cylindrical axis is the axis containing the polarization direction of the dissociating light. It is important to note that the image is taken in the center-of-mass frame; no transformation, other than from time to speed, is needed.
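The projection geometry described above can be illustrated with a small Monte Carlo sketch (all names and numbers here are illustrative, not taken from the original experiment): isotropic velocities of a single speed, a Newton sphere, are crushed onto the detector plane, and the outer edge of the resulting 2D image falls at the true speed.

```python
import math
import random

def project_newton_sphere(speed, n_ions, seed=0):
    """Project n_ions isotropic velocities of fixed |v| onto the (x, y) detector plane.

    Returns the projected radii sqrt(vx^2 + vy^2); the z component (toward the
    detector) is collapsed by the extraction field, just as the three-dimensional
    ion cloud is 'crushed' onto the two-dimensional detector.
    """
    rng = random.Random(seed)
    radii = []
    for _ in range(n_ions):
        # Uniform sampling on a sphere: cos(theta) uniform on [-1, 1].
        cos_t = rng.uniform(-1.0, 1.0)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        vx, vy = speed * sin_t * math.cos(phi), speed * sin_t * math.sin(phi)
        radii.append(math.hypot(vx, vy))
    return radii

radii = project_newton_sphere(speed=1000.0, n_ions=20000)
# No projected radius exceeds the true speed, and the edge of the image
# (the largest radii) sits at the sphere radius -- the basis for recovering
# speeds from the 2D image.
assert max(radii) <= 1000.0 and max(radii) > 990.0
```

For a real image, the inverse Abel transform then recovers the full three-dimensional distribution from the accumulated projection, provided the cylindrical symmetry axis lies parallel to the detector.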
A final advantage of the technique should also be mentioned: ions of different masses arrive at the detector at different times. This differential arises because each ion is accelerated to the same total energy, E, as it traverses the electric field, but the acceleration speed, v z , varies as E = ½ mv z 2 . Thus, v z varies as the reciprocal of the square root of the ion mass, or the arrival time is proportional to the square root of the ion mass. In a perfect experiment, the ionization laser would ionize only the products of the dissociation, and those only in a particular internal energy state. But the ionization laser, and perhaps the photolysis laser, can create ions from other material, such as pump oil or other impurities. The ability to selectively detect a single mass by gating the detector electronically is thus an important advantage in reducing noise.
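The mass dependence of arrival time follows directly from E = ½mv_z²; a quick numerical check (the function name, flight path, and energy are illustrative):

```python
import math

def flight_time(mass_amu, energy_ev, path_m=0.5):
    """Time of flight for an ion accelerated to a fixed total energy E.

    Since E = 0.5 * m * v_z**2, v_z = sqrt(2E/m), and t = L / v_z
    grows as the square root of the ion mass.
    """
    AMU = 1.66053906660e-27  # kg per atomic mass unit
    EV = 1.602176634e-19     # J per electronvolt
    v_z = math.sqrt(2.0 * energy_ev * EV / (mass_amu * AMU))
    return path_m / v_z

# Quadrupling the mass doubles the arrival time at equal total energy,
# which is what allows gating the detector on a single mass.
t1 = flight_time(mass_amu=15.0, energy_ev=2000.0)   # e.g. CH3+
t4 = flight_time(mass_amu=60.0, energy_ev=2000.0)
assert abs(t4 / t1 - 2.0) < 1e-9
```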
A major improvement to the product imaging technique was achieved by Eppink and Parker. [ 6 ] A difficulty that limits the resolution in the position-sensing version is that the spot on the detector is no smaller than the cross-sectional area of the ions excited. For example, if the volume of interaction of the molecular beam, photolysis laser, and ionization laser is, say, 1 mm x 1 mm x 1 mm, then the spot for an ion moving with a single velocity would still span 1 mm x 1 mm at the detector. This dimension is much larger than the limit of a channel width (10 μm) and is substantial compared to the radius of a typical detector (25 mm). Without some further improvement, the velocity resolution for a position-sensing apparatus would be limited to about one part in twenty-five. Eppink and Parker found a way around this limit. Their version of the product imaging technique is called velocity map imaging.
Velocity map imaging is based on the use of an electrostatic lens to accelerate the ions toward the detector. When the voltages are properly adjusted, this lens has the advantage that it focuses ions with the same velocity to a single spot on the detector regardless where the ion was created. This technique thus overcomes the blurring caused by the finite overlap of the laser and molecular beams.
In addition to ion imaging, velocity map imaging is also used for electron kinetic energy analysis in photoelectron photoion coincidence spectroscopy .
Chichinin, Einfeld, Maul, and Gericke [ 7 ] replaced the phosphor screen by a time-resolving delay line anode in order to be able to measure all three components of the initial product momentum vector simultaneously for each individual product particle arriving at the detector. This technique allows one to measure the three-dimensional product momentum vector distribution without having to rely on mathematical reconstruction methods which require the investigated systems to be cylindrically symmetric. Later, velocity mapping was added to 3D imaging. [ 8 ] 3D techniques have been used to characterize several elementary photodissociation processes and bimolecular chemical reactions. [ 9 ]
Chang et al. [ 10 ] realized that further increase in resolution could be gained if one carefully analyzed each spot detected by the CCD camera. Under the microchannel plate amplification typical in most laboratories, each such spot was 5-10 pixels in diameter. By programming a microprocessor to examine each of up to 200 spots per laser shot and determine the center of the distribution of each spot, Chang et al. were able to increase the velocity resolution to the equivalent of one pixel out of the 256-pixel radius of the CCD chip.
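The centroiding step described above amounts to computing an intensity-weighted center for each spot; a minimal sketch (no attempt to reproduce the exact algorithm of Chang et al.):

```python
def centroid(spot):
    """Intensity-weighted centroid (row, col) of a 2D list of pixel counts."""
    total = wr = wc = 0.0
    for r, row in enumerate(spot):
        for c, intensity in enumerate(row):
            total += intensity
            wr += r * intensity
            wc += c * intensity
    return wr / total, wc / total

# A spot blurred over several pixels still yields a single sub-pixel center,
# which is how per-spot analysis beats the raw pixel resolution.
spot = [[0, 1, 2, 1, 0],
        [1, 4, 8, 4, 1],
        [2, 8, 16, 8, 2],
        [1, 4, 8, 4, 1],
        [0, 1, 2, 1, 0]]
assert centroid(spot) == (2.0, 2.0)
```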
DC slice imaging is a refinement of the traditional velocity map imaging technique, developed in the Suits group. In DC slicing, the ion cloud is allowed to expand under a weaker field in the ionization region, which stretches the arrival time to several hundred ns. With a fast transistor switch, one can then select only the central part of the ion cloud (the Newton sphere). This central slice carries the full velocity and angular distribution, so reconstruction by mathematical methods is not necessary. (D. Townsend, S. K. Lee and A. G. Suits, “Orbital polarization from DC slice imaging: S(1D) alignment in the photodissociation of ethylene sulfide,” Chem. Phys., 301, 197 (2004).)
Product imaging of positive ions formed by REMPI detection is only one of the areas where charged particle imaging has become useful. Another is the detection of electrons, an idea with a surprisingly early history. Demkov et al. were perhaps the first to propose a "photoionization microscope". [ 11 ] They realized that trajectories of an electron emitted from an atom in different directions may intersect again at a large distance from the atom and create an interference pattern. They proposed building an apparatus to observe the predicted rings. Blondel et al. eventually realized such a "microscope" and used it to study the photodetachment of Br − . [ 12 ] [ 13 ] It was Helm and co-workers, however, who were the first to create an electron imaging apparatus. [ 14 ] The instrument is an improvement on previous photoelectron spectrometers in that it provides information on all energies and all angles of the photoelectrons for each shot of the laser. Helm and his co-workers have now used this technique to investigate the ionization of Xe, Ne, H 2 , and Ar. In more recent examples, Suzuki, [ 15 ] Hayden, [ 16 ] and Stolow [ 17 ] have pioneered the use of femtosecond excitation and ionization to follow excited state dynamics in larger molecules. | https://en.wikipedia.org/wiki/Photofragment-ion_imaging |
Photogeochemistry merges photochemistry and geochemistry into the study of light-induced chemical reactions that occur or may occur among natural components of Earth's surface. The first comprehensive review on the subject was published in 2017 by the chemist and soil scientist Timothy A. Doane, [ 1 ] but the term photogeochemistry appeared a few years earlier as a keyword in studies that described the role of light-induced mineral transformations in shaping the biogeochemistry of Earth; [ 2 ] this indeed describes the core of photogeochemical study, although other facets may be admitted into the definition.
The context of a photogeochemical reaction is implicitly the surface of Earth, since that is where sunlight is available (although other sources of light such as chemiluminescence would not be strictly excluded from photogeochemical study). Reactions may occur among components of land such as rocks , soil and detritus ; components of surface water such as sediment and dissolved organic matter; and components of the atmospheric boundary layer directly influenced by contact with land or water, such as mineral aerosols and gases. Visible and medium- to long-wave ultraviolet radiation is the main source of energy for photogeochemical reactions; wavelengths of light shorter than about 290 nm are completely absorbed by the present atmosphere, [ 3 ] [ 4 ] [ 5 ] and are therefore practically irrelevant, except in consideration of atmospheres different from that of Earth today.
Photogeochemical reactions are limited to chemical reactions not facilitated by living organisms. The reactions comprising photosynthesis in plants and other organisms, for example, are not considered photogeochemistry, since the physicochemical context for these reactions is installed by the organism, and must be maintained in order for these reactions to continue (i.e. the reactions cease if the organism dies). In contrast, if a certain compound is produced by an organism, and the organism dies but the compound remains, this compound may still participate independently in a photogeochemical reaction even though its origin is biological (e.g. biogenic mineral precipitates [ 6 ] [ 7 ] or organic compounds released from plants into water [ 8 ] ).
The study of photogeochemistry is primarily concerned with naturally occurring materials, but may extend to include other materials, inasmuch as they are representative of, or bear some relation to, those found on Earth. For example, many inorganic compounds have been synthesized in the laboratory to study photocatalytic reactions. Although these studies are usually not undertaken in the context of environmental or Earth sciences , the study of such reactions is relevant to photogeochemistry if there is a geochemical implication (i.e. similar reactants or reaction mechanisms occur naturally). Similarly, photogeochemistry may also include photochemical reactions of naturally occurring materials that are not touched by sunlight, if there is the possibility that these materials may become exposed (e.g. deep soil layers uncovered by mining).
Except for several isolated instances, [ 2 ] [ 9 ] [ 10 ] studies that fit the definition of photogeochemistry have not been explicitly specified as such, but have been traditionally categorized as photochemistry, especially at the time when photochemistry was an emerging field or new facets of photochemistry were being explored. Photogeochemical research, however, may be set apart in light of its specific context and implications, thereby bringing more exposure to this "poorly explored area of experimental geochemistry". [ 2 ] Past studies that fit the definition of photogeochemistry may be designated retroactively as such.
The first efforts that can be considered photogeochemical research can be traced to the "formaldehyde hypothesis" of Adolf von Baeyer in 1870, [ 11 ] in which formaldehyde was proposed to be the initial product of plant photosynthesis, formed from carbon dioxide and water through the action of light on a green leaf. This suggestion inspired numerous attempts to obtain formaldehyde in vitro , which can retroactively be considered photogeochemical studies. Detection of organic compounds such as formaldehyde and sugars was reported by many workers, usually by exposure of a solution of carbon dioxide to light, typically a mercury lamp or sunlight itself. At the same time, many other workers reported negative results. [ 12 ] [ 13 ] One of the pioneer experiments was that of Bach in 1893, [ 14 ] who observed the formation of lower uranium oxides upon irradiation of a solution of uranium acetate and carbon dioxide, implying the formation of formaldehyde. Some experiments included reducing agents such as hydrogen gas, [ 15 ] and others detected formaldehyde or other products in the absence of any additives, [ 16 ] [ 17 ] although the possibility was admitted that reducing power may have been produced from the decomposition of water during the experiment. [ 16 ] In addition to the main focus on synthesis of formaldehyde and simple sugars, other light-assisted reactions were occasionally reported, such as the decomposition of formaldehyde and subsequent release of methane , or the formation of formamide from carbon monoxide and ammonia. [ 15 ]
In 1912 Benjamin Moore summarized the main facet of photogeochemistry, that of inorganic photocatalysis : "the inorganic colloid must possess the property of transforming sunlight, or some other form of radiant energy , into chemical energy." [ 18 ] Many experiments, still focused on how plants assimilate carbon, did indeed explore the effect of a "transformer" (catalyst); some effective "transformers" were similar to naturally occurring minerals, including iron(III) oxide or colloidal iron hydroxide; [ 17 ] [ 19 ] [ 20 ] cobalt carbonate, copper carbonate, nickel carbonate; [ 17 ] and iron(II) carbonate. [ 21 ] Working with an iron oxide catalyst, Baly [ 20 ] concluded in 1930 that "the analogy between the laboratory process and that in the living plant seems therefore to be complete," referring to his observation that in both cases, a photochemical reaction takes place on a surface, the activation energy is supplied in part by the surface and in part by light, efficiency decreases when the light intensity is too great, the optimal temperature of the reaction is similar to that of living plants, and efficiency increases from the blue to the red end of the light spectrum.
At this time, however, the intricate details of plant photosynthesis were still obscure, and the nature of photocatalysis in general was still actively being discovered; Mackinney in 1932 stated that "the status of this problem [photochemical CO 2 reduction] is extraordinarily involved." [ 13 ] As in many emerging fields, experiments were largely empirical, but the enthusiasm surrounding this early work did lead to significant advances in photochemistry. The simple but challenging principle of transforming solar energy into chemical energy capable of performing a desired reaction remains the basis of application-based photocatalysis, most notably artificial photosynthesis (production of solar fuels ).
After several decades of experiments centered around the reduction of carbon dioxide, interest began to spread to other light-induced reactions involving naturally occurring materials. These experiments usually focused on reactions analogous to known biological processes, such as soil nitrification , [ 22 ] for which the photochemical counterpart "photonitrification" was first reported in 1930. [ 23 ]
Photogeochemical reactions may be classified based on thermodynamics and/or the nature of the materials involved. In addition, when ambiguity exists regarding an analogous reaction involving light and living organisms ( phototrophy ), the term "photochemical" may be used to distinguish a particular abiotic reaction from the corresponding photobiological reaction. For example, "photooxidation of iron(II)" can refer to either a biological process driven by light (phototrophic or photobiological iron oxidation) [ 24 ] or a strictly chemical, abiotic process (photochemical iron oxidation). Similarly, an abiotic process that converts water to O 2 under the action of light may be designated "photochemical oxidation of water" rather than simply "photooxidation of water", in order to distinguish it from photobiological oxidation of water potentially occurring in the same environment (by algae, for example).
Photogeochemical reactions are described by the same principles used to describe photochemical reactions in general, and may be classified similarly:
Any reaction in the domain of photogeochemistry, either observed in the environment or studied in the laboratory, may be broadly classified according to the nature of the materials involved.
Direct photogeochemical catalysts act by absorbing light and subsequently transferring energy to reactants.
The majority of observed photogeochemical reactions involve a mineral catalyst. Many naturally occurring minerals are semiconductors that absorb some portion of solar radiation. [ 31 ] These semiconducting minerals are frequently transition metal oxides and sulfides and include abundant, well-known minerals such as hematite (Fe 2 O 3 ), magnetite (Fe 3 O 4 ), goethite and lepidocrocite (FeOOH), and pyrolusite (MnO 2 ). Radiation of energy equal to or greater than the band gap of a semiconductor is sufficient to excite an electron from the valence band to a higher energy level in the conduction band, leaving behind an electron hole (h + ); the resulting electron-hole pair is called an exciton . The excited electron and hole can reduce and oxidize, respectively, species having suitable redox potentials relative to the potentials of the valence and conduction bands. Semiconducting minerals with appropriate band gaps and appropriate band energy levels can catalyze a vast array of reactions, [ 32 ] most commonly at mineral-water or mineral-gas interfaces.
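A semiconductor absorbs photons with energy at or above its band gap, i.e. wavelengths below λ = hc/E_g; a small sketch of that threshold (the band gap values quoted are approximate literature figures, used here as assumptions):

```python
def bandgap_threshold_nm(band_gap_ev):
    """Longest wavelength (nm) that a semiconductor of given band gap can absorb."""
    HC_EV_NM = 1239.84  # h*c expressed in eV*nm
    return HC_EV_NM / band_gap_ev

# Approximate band gaps in eV -- assumed, illustrative values.
minerals = {'hematite (Fe2O3)': 2.2, 'anatase (TiO2)': 3.2}
for name, gap in sorted(minerals.items()):
    print(f'{name}: absorbs below ~{bandgap_threshold_nm(gap):.0f} nm')

# A ~2.2 eV gap puts hematite's absorption edge in the visible (~560 nm),
# so it can harvest a sizeable fraction of the solar spectrum.
assert 500 < bandgap_threshold_nm(2.2) < 600
```

Minerals with wider gaps absorb only in the ultraviolet, which limits the fraction of sunlight available to drive their photocatalysis.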
Organic compounds such as "bio-organic substances" [ 33 ] and humic substances [ 34 ] [ 35 ] are also able to absorb light and act as catalysts or sensitizers, accelerating photoreactions that normally occur slowly or facilitating reactions that might not normally occur at all.
Some materials, such as certain silicate minerals , absorb little or no solar radiation, but may still participate in light-driven reactions by mechanisms other than direct transfer of energy to reactants.
Indirect photocatalysis may occur via the production of a reactive species which then participates in another reaction. For example, photodegradation of certain compounds has been observed in the presence of kaolinite and montmorillonite, and this may proceed via the formation of reactive oxygen species at the surface of these clay minerals. [ 27 ] Indeed, reactive oxygen species have been observed when soil surfaces are exposed to sunlight. [ 26 ] [ 36 ] The ability of irradiated soil to generate singlet oxygen was found to be independent of the organic matter content, and both the mineral and organic components of soil appear to contribute to this process. [ 37 ] Indirect photolysis in soil has been observed to occur at depths of up to 2 mm due to migration of reactive species; in contrast, direct photolysis (in which the degraded compound itself absorbs light) was restricted to a "photic depth" of 0.2 to 0.4 mm. [ 38 ] Like certain minerals, organic matter in solution, [ 39 ] [ 40 ] as well as particulate organic matter, [ 41 ] may act as an indirect catalyst via formation of singlet oxygen which then reacts with other compounds.
Indirect catalysts may also act through surface sensitization of reactants, by which species sorbed to a surface become more susceptible to photodegradation. [ 42 ]
Strictly speaking, the term "catalysis" should not be used unless it can be shown that the number of product molecules produced per number of active sites is greater than one; this is difficult to do in practice, although it is often assumed to be true if there is no loss in the photoactivity of the catalyst for an extended period of time. [ 25 ] Reactions that are not strictly catalytic may be designated "assisted photoreactions". [ 25 ] Furthermore, phenomena that involve complex mixtures of compounds (e.g. soil) may be hard to classify unless complete reactions (not just individual reactants or products) can be identified.
The great majority of photogeochemical research is performed in the laboratory, as it is easier to demonstrate and observe a particular reaction under controlled conditions. This includes confirming the identity of materials, designing reaction vessels, controlling light sources, and adjusting the reaction atmosphere. However, observation of natural phenomena often provides the initial inspiration for further study. For example, during the 1970s it was generally agreed that nitrous oxide (N 2 O) has a short residence time in the troposphere, although the actual explanation for its removal was unknown. Since N 2 O does not absorb light at wavelengths greater than 280 nm, direct photolysis had been ruled out as a possible explanation. It was then observed that light would decompose chloromethanes adsorbed on silica sand, [ 42 ] and that this occurred at wavelengths well beyond the absorption bands of these compounds. The same phenomenon was observed for N 2 O, leading to the conclusion that particulate matter in the atmosphere is responsible for the destruction of N 2 O via surface-sensitized photolysis. [ 43 ] Indeed, the idea of such a sink for atmospheric N 2 O was supported by several reports of low concentrations of N 2 O in the air above deserts, where there is a high amount of suspended particulate matter. [ 44 ] As another example, the observation that the amount of nitrous acid in the atmosphere greatly increases during the day led to insight into the surface photochemistry of humic acids and soils and an explanation for the original observation. [ 45 ]
The following table lists some reported reactions that are relevant to photogeochemical study, including reactions that involve only naturally occurring compounds as well as complementary reactions that involve synthetic but related compounds. The selection of reactions and references given is merely illustrative and may not exhaustively reflect current knowledge, especially in the case of popular reactions such as nitrogen photofixation, for which there is a large body of literature. Furthermore, although these reactions have natural counterparts, the probability of encountering optimal reaction conditions may be low in some cases; for example, most experimental work concerning CO 2 photoreduction is intentionally performed in the absence of O 2 , since O 2 almost always suppresses the reduction of CO 2 . In natural systems, however, it is uncommon to find an analogous context in which light reaches CO 2 and a catalyst but no O 2 is present.
NH₃ → NO₃⁻
CO₂ → HCOOH
CO₂ → CH₂O
CO₂ → CH₃OH
CO₂ → CH₄
CO₂ → C₂H₄, C₂H₆
CO₂ → tartaric, glyoxylic, oxalic acids (e.g. over ZnS [ 73 ])
CH₄ → CO₂ (photocatalytic degradation; photochemical mineralization, with CO and CO₂ as products)
photochemical oxidation of Fe(II)
ZnS → Zn⁰ + SO₄²⁻ (in the presence of air) | https://en.wikipedia.org/wiki/Photogeochemistry |
Photoglottography or photo-electric glottography is a laboratory technique for investigating the opening and closing of the glottis in the larynx . It detects variations in the amount of light that can pass through the glottis as it opens and closes. [ 1 ]
It was observed by Czermak in 1861 that the inside of the trachea could be illuminated from outside the neck, in what he called illumination by transparency , and the resulting light passing through the glottis observed with a laryngoscopic mirror . [ 2 ] Electronic techniques making use of this observation began to be used in the mid-twentieth century. [ 3 ] [ 4 ] Instruments such as that designed and manufactured by B. Frøkjaer-Jensen, [ 5 ] have used the combination of a light source illuminating the trachea from below, and a light-sensitive cell positioned above the glottis in the pharynx to detect light passing through the glottis. This cell is fixed near the end of a thin tube inserted through the nose and nasal passages (which leaves the articulators relatively free to move in speech); in the Frøkjaer-Jensen instrument the tube is extended so that a few centimetres can be swallowed into the oesophagus in order to anchor the light-sensitive cell securely in place. The light from a light source is carried to the neck by a tapered perspex rod pressed against the neck immediately below the thyroid cartilage; alternatively, a cold light source may be applied directly to the neck.
Two main areas have been explored with this technique.
A number of researchers have attempted to compare the photoglottograph output with measurements of glottal opening based on high-speed or stroboscopic film during phonation . If the two were closely similar, the photoglottograph would represent a quicker and cheaper method of analysis of phonation. However, Baken reports variable results: a study by Coleman and Wendahl concluded that "relating photoglottographic waveforms ... to glottal area is not only hazardous but invalid in many cases", [ 6 ] while a later study by Harden found that the photoglottograph provided "essentially the same information on glottal area function as that provided by ultrahigh-speed photography". [ 7 ]
In addition to the study of vocal fold vibratory patterns, the technique may be used to detect the opening of the glottis for voiceless consonants or the closure of the glottis for glottalic consonants and glottal stop . [ 8 ]
Photoglottography has been evaluated for usefulness in the study of dysphonic patients in the clinic. [ 9 ] The technique is thought to be useful in reflecting the phonatory effect of Parkinson's disease . [ 10 ] | https://en.wikipedia.org/wiki/Photoglottography |
Photografting is a technique used in the study of polymers and, more specifically, polymeric biomaterials . Technically, it is the covalent incorporation of functional additives into a polymer matrix or polymer surface using a light-induced mechanism. It is an important technique for the modification of biomaterial surfaces. For example, by grafting with polar monomers, an inert polymer surface can be made more biocompatible. [ 1 ]
| https://en.wikipedia.org/wiki/Photografting |
Photographic processing or photographic development is the chemical means by which photographic film or paper is treated after photographic exposure to produce a negative or positive image . Photographic processing transforms the latent image into a visible image, makes this permanent and renders it insensitive to light. [ 1 ]
All processes based upon the gelatin silver process are similar, regardless of the film or paper's manufacturer. Exceptional variations include instant films such as those made by Polaroid and thermally developed films. Kodachrome required Kodak 's proprietary K-14 process . Kodachrome film production ceased in 2009, and K-14 processing is no longer available as of December 30, 2010. [ 2 ] Ilfochrome materials use the dye destruction process. Deliberately using the wrong process for a film is known as cross processing .
All photographic processing uses a series of chemical baths. Processing, especially the development stages, requires very close control of temperature, agitation and time.
The washing time can be reduced and the fixer more completely removed if a hypo clearing agent is used after the fixer.
Once the film is processed, it is then referred to as a negative .
The negative may now be printed ; the negative is placed in an enlarger and projected onto a sheet of photographic paper. Many different techniques can be used during the enlargement process. Two examples of enlargement techniques are dodging and burning .
Alternatively (or as well), the negative may be scanned for digital printing or web viewing after adjustment, retouching, and/or manipulation .
From a chemical standpoint, conventional black and white negative film is processed by a developer that reduces silver halide to silver metal; exposed silver halide is reduced faster than unexposed silver halide, which leaves a silver metal image. The film is then fixed by converting all remaining silver halide into a soluble silver complex, which is then washed away with water. [ 6 ] An example of a black and white developer is Kodak D-76 , which contains bis(4-hydroxy-N-methylanilinium) sulfate (metol) with hydroquinone and sodium sulfite.
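The development and fixation steps just described can be summarized as simplified, textbook-style net reactions (an illustrative sketch, not a complete description of any particular bath chemistry):

```latex
% Development: exposed AgBr is reduced to metallic silver;
% the developing agent (here hydroquinone) is oxidized to quinone.
2\,\mathrm{AgBr} + \mathrm{C_6H_4(OH)_2} \longrightarrow
    2\,\mathrm{Ag} + \mathrm{C_6H_4O_2} + 2\,\mathrm{HBr}

% Fixation: remaining silver halide is converted to a soluble
% thiosulfate complex, which washes out in water.
\mathrm{AgBr} + 2\,\mathrm{S_2O_3^{2-}} \longrightarrow
    [\mathrm{Ag(S_2O_3)_2}]^{3-} + \mathrm{Br^-}
```

In practice the developer bath also contains preservatives and buffers (as in the D-76 formula above), but the net redox chemistry follows this pattern.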
In graphic arts film, also called lithographic film, a special type of black and white film used for converting images into halftone images for offset printing, a developer containing metol-hydroquinone and sulfite stabilizers may be used. Exposed silver halide oxidizes the hydroquinone; the oxidized developer is then attacked by a hydroxide ion and converted via hydrolysis into a nucleating agent for silver metal, which then forms on unexposed silver halide, creating a silver image. The film is then fixed by converting all remaining silver halide into soluble silver complexes. [ 6 ]
This process has three additional stages:
Chromogenic materials use dye couplers to form colour images.
Modern colour negative film is developed with the C-41 process and colour negative print materials with the RA-4 process . These processes are very similar, with differences in the first chemical developer.
The C-41 and RA-4 processes consist of the following steps:
In the RA-4 process, the bleach and fix are combined. This is optional, and reduces the number of processing steps. [ 12 ]
Transparency films, except Kodachrome , are developed using the E-6 process , which has the following stages:
The Kodachrome process is called K-14 . It is very involved, requiring 4 separate developers, one for black and white and 3 for color, reexposing the film in between development stages, 8 or more tanks of processing chemicals, each with precise concentration, temperature and agitation, resulting in very complex processing equipment with precise chemical control. [ 8 ]
In some old processes, the film emulsion was hardened during the process, typically before the bleach. Such a hardening bath often used aldehydes, such as formaldehyde and glutaraldehyde . In modern processing, these hardening steps are unnecessary because the film emulsion is sufficiently hardened to withstand the processing chemicals.
A typical chromogenic color film development process can be described from a chemical standpoint as follows: exposed silver halide oxidizes the developer, and the oxidized developer then reacts with color couplers (molecules located near the exposed silver halide crystals) to create color dyes, which ultimately form a negative image. [ 6 ] After this the film is bleached, fixed, washed, stabilized and dried. The dye is only created where the couplers are; thus the development chemical must travel a short distance from the exposed silver halide to the coupler and create a dye there. The amount of dye created is small, and the reaction only occurs near the exposed silver halide [ 10 ] and thus doesn't spread throughout the entire layer. The developer diffuses into the film emulsion to react with its layers. [ 10 ] This process happens simultaneously for all three colors of couplers in the film: cyan (in the red-sensitive layer), magenta (in the green-sensitive layer), and yellow (in the blue-sensitive layer). [ 6 ] Color film has these three layers in order to perform subtractive color mixing and replicate the colors of the original scene.
Black and white emulsions both negative and positive, may be further processed. The image silver may be reacted with elements such as selenium or sulphur to increase image permanence and for aesthetic reasons. This process is known as toning .
In selenium toning, the image silver is changed to silver selenide ; in sepia toning , the image is converted to silver sulphide . These chemicals are more resistant to atmospheric oxidising agents than silver.
If colour negative film is processed in conventional black and white developer, and fixed and then bleached with a bath containing hydrochloric acid and potassium dichromate solution, the resultant film, once exposed to light, can be redeveloped in colour developer to produce an unusual pastel colour effect. [ citation needed ]
Before processing, the film must be removed from the camera and from its cassette , spool or holder in a light-proof room or container.
In amateur processing, the film is removed from the camera and wound onto a reel in complete darkness (usually inside a darkroom with the safelight turned off or a lightproof bag with arm holes). The reel holds the film in a spiral shape, with space between each successive loop so the chemicals may flow freely across the film's surfaces. The reel is placed in a specially designed light-proof tank (called a daylight processing tank or a light-trap tank) where it is retained until final washing is complete.
Sheet films can be processed in trays, in hangers (which are used in deep tanks), or rotary processing drums. Each sheet can be developed individually for special requirements. Stand development , long development in dilute developer without agitation, is occasionally used.
In commercial, central processing, the film is removed automatically or by an operator handling the film in a light proof bag from which it is fed into the processing machine. The processing machinery is generally run on a continuous basis with films spliced together in a continuous line. All the processing steps are carried out within a single processing machine with automatically controlled time, temperature and solution replenishment rate. The film or prints emerge washed and dry and ready to be cut by hand. Some modern machines also cut films and prints automatically, sometimes resulting in negatives cut across the middle of the frame where the space between frames is very thin or the frame edge is indistinct, as in an image taken in low light. Alternatively stores may use minilabs to develop films and make prints on the spot automatically without needing to send film to a remote, central facility for processing and printing.
Some processing chemistries used in minilabs require a minimum amount of processing per given amount of time to remain stable and usable. Once rendered unstable due to low use, the chemistry needs to be completely replaced, or replenishers can be added to restore the chemistry to a usable state. Some chemistries have been designed with this in mind given the declining demand for film processing in minilabs, often requiring specific handling. Often chemistries become damaged by oxidation. Also, development chemicals need to be thoroughly agitated constantly to ensure consistent results. The effectiveness (activity) of the chemistry is determined through pre-exposed film control strips. [ 13 ]
Many photographic solutions have high chemical and biological oxygen demand (COD and BOD). These chemical wastes are often treated with ozone , peroxide or aeration to reduce the COD in commercial laboratories.
Exhausted fixer and to some extent rinse water contain silver thiosulfate complex ions. They are far less toxic than free silver ion, and they become silver sulfide sludge in the sewer pipes or treatment plant. However, the maximum silver concentration in discharge is very often tightly regulated. Silver is also a somewhat precious resource. Therefore, in most large scale processing establishments, exhausted fixer is collected for silver recovery and disposal.
Many photographic chemicals use non-biodegradable compounds, such as EDTA , DTPA , NTA and borate . EDTA, DTPA, and NTA are very often used as chelating agents in all processing solutions, particularly in developers and washing aid solutions. EDTA and other polyamine polycarboxylic acids are used as iron ligands in colour bleach solutions. These are relatively nontoxic, and in particular EDTA is approved as a food additive. However, due to poor biodegradability , these chelating agents are found in alarmingly high concentrations in some water sources from which municipal tap water is taken. [ 14 ] [ 15 ] Water containing these chelating agents can leach metal from water treatment equipment as well as pipes. This is becoming an issue in Europe and some parts of the world. [ citation needed ]
Another non-biodegradable compound in common use is surfactant . A common wetting agent for even drying of processed film uses Union Carbide/Dow Triton X-100 or octylphenol ethoxylate. This surfactant is also found to have estrogenic effect and possibly other harms to organisms including mammals. [ citation needed ]
Development of more biodegradable alternatives to the EDTA and other bleaching agent constituents were sought by major manufacturers, until the industry became less profitable when the digital era began.
In most amateur darkrooms, a popular bleach is potassium ferricyanide . This compound decomposes in the waste water stream to liberate cyanide gas. [ citation needed ] Other popular bleach solutions use potassium dichromate (a hexavalent chromium ) or permanganate . Both ferricyanide and dichromate are tightly regulated for sewer disposal from commercial premises in some areas.
Borates , such as borax (sodium tetraborate), boric acid and sodium metaborate, are toxic to plants, even at a concentration of 100 ppm. Many film developers and fixers contain 1 to 20 g/L of these compounds at working strength. Most non-hardening fixers from major manufacturers are now borate-free, but many film developers still use borate as the buffering agent. Also, some, but not all, alkaline fixer formulae and products contain a large amount of borate. New products should phase out borates, because for most photographic purposes, except in acid hardening fixers, borates can be substituted with a suitable biodegradable compound.
Developing agents are commonly hydroxylated benzene compounds or aminated benzene compounds, and they are harmful to humans and experimental animals. Some are mutagens . They also have a large chemical oxygen demand (COD). Ascorbic acid and its isomers, and other similar sugar derived reductone reducing agents are a viable substitute for many developing agents. Developers using these compounds were actively patented in the US, Europe and Japan, until the 1990s but the number of such patents is very low since the late-1990s, when the digital era began.
Development chemicals may be recycled by up to 70% using an absorber resin, only requiring periodic chemical analysis on pH, density and bromide levels. Other developers need ion-exchange columns and chemical analysis, allowing for up to 80% of the developer to be reused. Some bleaches are claimed to be fully bio-degradable while others can be regenerated by adding bleach concentrate to overflow (waste). Used fixers can have 60 to 90% of their silver content removed through electrolysis, in a closed loop where the fixer is continually recycled (regenerated). Stabilizers may or may not contain formaldehyde . [ 16 ] | https://en.wikipedia.org/wiki/Photographic_laboratory |
Photographic magnitude ( m ph or m p ) is a measure of the relative brightness of a star or other astronomical object as imaged on a photographic film emulsion with a camera attached to a telescope . An object's apparent photographic magnitude depends on its intrinsic luminosity , its distance and any extinction of light by interstellar matter existing along the line of sight to the observer.
Photographic observations have now been superseded by electronic photometry, such as charge-coupled device (CCD) cameras that convert the incoming light into an electric current by the photoelectric effect. Determination of magnitude is made using a photometer .
Prior to photographic methods of determining magnitude, the brightness of celestial objects was determined by visual photometric methods . This was achieved simply with the human eye by comparing the brightness of an astronomical object with other nearby objects of known or fixed magnitude: especially regarding stars , planets and other planetary objects in the Solar System , variable stars [ 1 ] and deep-sky objects .
By the late 19th century, an improved measure of the apparent magnitude of astronomical objects was obtained by photography, often with a dedicated plate camera attached at the prime focus of the telescope. Images were made on orthochromatic photoemulsive film or plates . These photographs were created by exposing the film over a short or long period of time; the total exposure accumulates photons and reveals fainter stars or astronomical objects invisible to the human eye . Although stars viewed in the sky are approximately point sources, the process of collecting their light causes each star to appear as a small round disk, whose brightness is approximately proportional to the disk's diameter or its area. Simple measurement of the disk size can be judged optically by either a microscope or by a specially designed astronomical microdensitometer .
Early black and white photographic plates used silver halide emulsions that were more sensitive to the blue end of the visual spectrum . This caused bluer stars to have a brighter photographic magnitude than the equivalent visual magnitude : appearing brighter on the photograph than to the human eye or modern electronic photometers. Conversely, redder stars appear dimmer and have a fainter photographic magnitude than their visual magnitude. For example, the red supergiant star KW Sagittarii has a photographic magnitude range of 11.0p to 13.2p but a visual magnitude range of about 8.5 to 11.0. It is also common for variable star charts to feature several blue magnitude (B) comparison stars, e.g. for S Doradus and WZ Sagittae . [ clarification needed ]
Photographic photometric methods define magnitudes and colours of astronomical objects using astronomical photographic images as viewed through selected or standard coloured bandpass filters. This differs from other expressions of apparent visual magnitude [ 2 ] observed by the human eye or obtained by photography: [ 1 ] that usually appear in older astronomical texts and catalogues. Early photographic images initially employed inconsistent quality or unstable yellow coloured filters, though later filter systems adopted more standardised bandpass filters which are still used with today's CCD photometers.
Apparent photographic magnitude is usually given as m pg or m p , and photovisual magnitudes as m p or m pv . [ 3 ] [ 1 ] Absolute photographic magnitude is M pg . [ 3 ] These are different from the commonplace photometric systems (UBV, UBVRI or JHK) that are expressed with a capital letter, e.g. "V" (m V ), "B" (m B ), etc. Other visual magnitudes estimated by the human eye are expressed using lower case letters, e.g. "v" or "b", etc., [ 4 ] such as visual magnitudes given as m v . [ 3 ] Hence, a 6th magnitude star might be stated as 6.0V, 6.0B, 6.0v or 6.0p. Because starlight is measured over different ranges of wavelengths across the electromagnetic spectrum and is affected by different instrumental photometric sensitivities to light, these magnitudes are not necessarily equivalent in numerical value. [ 4 ] | https://en.wikipedia.org/wiki/Photographic_magnitude |
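Whatever the bandpass, magnitude differences map to flux ratios through the standard Pogson relation, m₁ − m₂ = −2.5 log₁₀(F₁/F₂). The short sketch below is a generic illustration of that relation (an addition for clarity, not something specific to photographic plates):

```python
import math

def magnitude_difference(flux1, flux2):
    """Pogson relation: magnitude difference for a given flux ratio."""
    return -2.5 * math.log10(flux1 / flux2)

# A source delivering 100x the flux of a reference is brighter by
# 5 magnitudes (its magnitude is 5 *lower*, since the scale is inverted).
print(magnitude_difference(100.0, 1.0))  # → -5.0
```

The same relation underlies a colour index such as m pg − m v : because early plates were blue-sensitive, the photographic and visual magnitudes of a star differ by an amount that tracks its colour.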
Photoheterotrophs ( Gk : photo = light, hetero = (an)other, troph = nourishment) are heterotrophic phototrophs —that is, they are organisms that use light for energy, but cannot use carbon dioxide as their sole carbon source. Consequently, they use organic compounds from the environment to satisfy their carbon requirements; these compounds include carbohydrates , fatty acids , and alcohols . Examples of photoheterotrophic organisms include purple non-sulfur bacteria , green non-sulfur bacteria , and heliobacteria . [ 1 ] These microorganisms are ubiquitous in aquatic habitats, occupy unique niche spaces, and contribute to global biogeochemical cycling. Recent research has also indicated that the oriental hornet and some aphids may be able to use light to supplement their energy supply. [ 2 ] Some recent research has even found hints of photoheterotrophy in a few eukaryotes , though this remains under investigation.
Studies have shown that mammalian mitochondria can also capture light and synthesize ATP when mixed with pheophorbide , a light-capturing metabolite of chlorophyll. [ 3 ] Research demonstrated that the same metabolite when fed to the worm Caenorhabditis elegans leads to increase in ATP synthesis upon light exposure, along with an increase in life span. [ 4 ]
Furthermore, inoculation experiments suggest that mixotrophic Ochromonas danica (i.e., Golden algae)—and comparable eukaryotes—favor photoheterotrophy in oligotrophic (i.e., nutrient-limited) aquatic habitats. [ 5 ] This preference may increase energy-use efficiency and growth by reducing investment in inorganic carbon fixation (e.g., production of autotrophic machineries such as RuBisCo and PSII).
Photoheterotrophs get energy from light and carbon from organic substances like carbohydrates , fatty acids , or alcohols .
They differ from photoautotrophs, which use carbon dioxide as their carbon source, and from chemoheterotrophs , which obtain both energy and carbon from organic compounds. Photoheterotrophy tends to be useful in places where light is available but carbon dioxide is in short supply, such as some parts of the ocean or shallow-water environments.
Photoheterotrophs generate ATP using light, in one of two ways: [ 6 ] [ 7 ] they use a bacteriochlorophyll -based reaction center, or they use a bacteriorhodopsin . The chlorophyll -based mechanism is similar to that used in photosynthesis, where light excites the molecules in a reaction center and causes a flow of electrons through an electron transport chain (ETS). This flow of electrons through the proteins causes hydrogen ions to be pumped across a membrane. The energy stored in this proton gradient is used to drive ATP synthesis. Unlike in photoautotrophs , the electrons flow only in a cyclic pathway: electrons released from the reaction center flow through the ETS and return to the reaction center. They are not utilized to reduce any organic compounds. Purple non-sulfur bacteria , green non-sulfur bacteria , and heliobacteria are examples of bacteria that carry out this scheme of photoheterotrophy.
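The "energy stored in this proton gradient" can be made quantitative with the standard chemiosmotic expression for the proton motive force. The formula and the example numbers below are illustrative additions (typical bacterial values), not figures taken from the text:

```python
def proton_motive_force(delta_psi_mv, delta_ph, temp_k=298.15):
    """PMF (mV) = delta_psi - (2.303*R*T/F) * delta_pH,
    where delta_pH = pH(inside) - pH(outside)."""
    R = 8.314     # gas constant, J/(mol*K)
    F = 96485.0   # Faraday constant, C/mol
    mv_per_ph = 2.303 * R * temp_k / F * 1000.0  # ~59 mV per pH unit at 25 C
    return delta_psi_mv - mv_per_ph * delta_ph

# Hypothetical values: -120 mV membrane potential and a cytoplasm
# 0.5 pH units more alkaline than the medium -> PMF of roughly -150 mV.
print(proton_motive_force(-120.0, 0.5))
```

A more negative PMF means more free energy available per proton translocated back through ATP synthase.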
Other organisms, including halobacteria and flavobacteria [ 8 ] and vibrios [ 9 ] have purple-rhodopsin-based proton pumps that supplement their energy supply. The archaeal version is called bacteriorhodopsin , while the eubacterial version is called proteorhodopsin . The pump consists of a single protein bound to a Vitamin A derivative, retinal . The pump may have accessory pigments (e.g., carotenoids ) associated with the protein. When light is absorbed by the retinal molecule, the molecule isomerises. This drives the protein to change shape and pump a proton across the membrane. The hydrogen ion gradient can then be used to generate ATP, transport solutes across the membrane, or drive a flagellar motor . One particular flavobacterium cannot reduce carbon dioxide using light, but uses the energy from its rhodopsin system to fix carbon dioxide through anaplerotic fixation. [ 8 ] The flavobacterium is still a heterotroph as it needs reduced carbon compounds to live and cannot subsist on only light and CO 2 . It cannot carry out reactions in the form of
CO 2 + 2 H 2 D + photons → [CH 2 O] + 2 D + H 2 O,

where H 2 D may be water, H 2 S or another compound/compounds providing the reducing electrons and protons; the 2D + H 2 O pair represents an oxidized form.
However, it can fix carbon in reactions like:

CO 2 + pyruvate + ATP (from light energy) → malate + ADP + P i ,
where malate or other useful molecules are otherwise obtained by breaking down other compounds, by

carbohydrates + O 2 → malate + CO 2 + energy.
This method of carbon fixation is useful when reduced carbon compounds are scarce and cannot be wasted as CO 2 during interconversions, but energy is plentiful in the form of sunlight.
Organisms that are known to be photoheterotrophic include:
Some other organisms, though not true photoheterotrophs, show related light-harvesting abilities. For example, the Oriental hornet can absorb light with pigments in its body and may use that light for energy. Certain aphids have also been shown to make light-sensitive carotenoids that could help them obtain energy from sunlight. A few recent studies even suggest that yeast cells can be modified to respond to light by inserting genes that allow them to use rhodopsin . [ 10 ] [ 11 ]
Photoheterotrophs are found in many different aquatic environments, such as oceans, lakes, and even rice paddies. They tend to live near the surface of the water, where light is plentiful but carbon dioxide may be scarce.
Photoheterotrophs—either 1) cyanobacteria (i.e. facultative heterotrophs in nutrient-limited environments like Synechococcus and Prochlorococcus), 2) aerobic anoxygenic photoheterotrophic bacteria (AAP; employing bacteriochlorophyll-based reaction centers), 3) proteorhodopsin (PR)-containing bacteria and archaea, and 4) heliobacteria (i.e., the only phototroph with bacteriochlorophyll g pigments, or Gram-positive membrane) are found in various aquatic habitats including oceans, stratified lakes, rice fields, and environmental extremes. [ 12 ] [ 13 ] [ 14 ] [ 15 ]
In oceans' photic zones, up to 10% of bacterial cells are capable of AAP, whereas greater than 50% of marine microorganisms house PR, reaching up to 90% in coastal biomes. [ 16 ] As demonstrated in inoculation experiments, photoheterotrophy may provide these planktonic microbes competitive advantages 1) relative to chemoheterotrophs in oligotrophic (i.e., nutrient-poor) environments via increased nutrient use-efficiency (i.e., organic carbon fuels biosynthesis rather than energy production) and 2) by eliminating investment in physiologically costly autotrophic enzymes/complexes (RuBisCo and PSII). [ 17 ] [ 18 ] Furthermore, in Arctic oceans, AAP and PR photoheterotrophs are prominent in ice-covered regions during wintertime despite light scarcity. [ 19 ] Lastly, seasonal turnover has been observed in marine AAPs as ecotypes (i.e., genetically similar taxa with differing functional trait and/or environmental preferences) segregate into temporal niches. [ 20 ]
In stratified (i.e., euxinic) lakes, photoheterotrophs—alongside other anoxygenic phototrophs (e.g., purple/green sulfur bacteria fixing carbon dioxide via electron donors such as ferrous iron, sulfide, and hydrogen gas)—often occupy the chemocline in the water column and/or sediments. [ 21 ] In this zone, dissolved oxygen is reduced, light is limited to long wavelengths (e.g., red and infrared) left-over by oxygenic phototrophs (e.g., cyanobacteria), and anaerobic metabolisms (i.e., those occurring in the absence of oxygen) begin introducing sulfide and bioavailable nutrients (e.g., organic carbon, phosphate, and ammonia) through upward diffusion. [ 22 ]
Heliobacteria are obligate anaerobes primarily located in rice fields, where low sulfide concentrations prevent competitive exclusion of purple/green sulfur bacteria. [ 23 ] These waterlogged environments may facilitate symbiotic relationships between heliobacteria and rice plants as fixed nitrogen—from the former—is exchanged for carbon-rich root exudates.
Observation studies have characterized photoheterotrophs (e.g., Green non-sulfur bacteria such as Chloroflexi and AAPs) within photosynthetic mats at environmental extremes (e.g., hot springs and hypersaline lagoons). [ 14 ] [ 24 ] Notably, temperature and pH drive anoxygenic phototroph community composition in Yellowstone National Park 's geothermal features. [ 14 ] In addition, various, light-dependent niches in the Great Salt Lake 's hypersaline mats support phototrophic diversity as microbes optimize energy production and combat osmotic stress. [ 24 ]
Photoheterotrophs influence global carbon cycling by assimilating dissolved organic carbon (DOC). [ 25 ] [ 22 ] Therefore, when harvesting light energy, carbon is maintained in the microbial loop without corresponding respiration (i.e., carbon dioxide release to the atmosphere as DOC is oxidized to fuel energy production). This disconnect, the discovery of facultative photoheterotrophs (e.g., AAPs with flexible energy sources), and previous measurements taken in the dark (i.e., to avoid skewed oxygen consumption values due to photooxidation , UV light, and oxygenic photosynthesis) lead to overestimated aquatic CO 2 emissions. For example, a 15.2% decrease in community respiration observed in Cep Lake, Czechia—alongside preferential glucose and pyruvate uptake—is attributed to facultative photoheterotrophs preferring light energy during the daytime, given the fitness benefits mentioned previously. [ 25 ]
"Microbiology Online" (textbook). University of Wisconsin, Madison. [ dead link ] | https://en.wikipedia.org/wiki/Photoheterotroph |
In photochemistry , photohydrogen is hydrogen produced with the help of artificial or natural light . This is how the leaf of a tree splits water molecules into protons (hydrogen ions), electrons (to make carbohydrates ) and oxygen (released into the air as a waste product). [ 1 ] Photohydrogen may also be produced by the photodissociation of water by ultraviolet light.
Photohydrogen is sometimes discussed in the context of obtaining renewable energy from sunlight , by using microscopic organisms such as bacteria or algae . These organisms create hydrogen with the help of hydrogenase enzymes which convert protons derived from the water splitting reaction into hydrogen gas which can then be collected and used as a biofuel . [ 2 ] [ 3 ]
| https://en.wikipedia.org/wiki/Photohydrogen
Photoinduced charge separation is the process of an electron in an atom or molecule being excited to a higher energy level by the absorption of a photon and then leaving the atom or molecule, either to free space or to a nearby electron acceptor . [ 1 ]
An atom consists of a positively-charged nucleus surrounded by bound electrons. The nucleus consists of uncharged neutrons and positively charged protons. Electrons are negatively charged. In the early part of the twentieth century Ernest Rutherford suggested that the electrons orbited the dense central nucleus in a manner analogous to planets orbiting the Sun. The centripetal force required to keep the electrons in orbit was provided by the Coulomb force of the protons in the nucleus acting upon the electrons; just like the gravitational force of the Sun acting on a planet provides the centripetal force necessary to keep the planet in orbit.
This model, although appealing, does not hold true in the real world. An accelerating charge emits electromagnetic radiation ( synchrotron radiation ), so the orbiting electron would continuously lose orbital energy and spiral inward toward the nucleus.
Once the electron spiralled into the nucleus the electron would combine with a proton to form a neutron, and the atom would cease to exist. This model is clearly wrong.
In 1913, Niels Bohr refined the Rutherford model by stating that the electrons existed in discrete quantized states called energy levels . This meant that the electrons could only occupy orbits at certain energies. The laws of quantum physics apply here, and they do not comply with the laws of classical Newtonian mechanics .
An electron which is stationary and completely free from the atom has an energy of 0 joules (or 0 electronvolts). An electron which is described as being at the "ground state" has a negative energy whose magnitude is equal to the ionization energy of the atom. The electron will reside in this energy level under normal circumstances, unless the ground state is full, in which case additional electrons will reside in higher energy states.
If a photon of light hits the atom, it will be absorbed if, and only if, the energy of that photon is equal to the difference between the ground state and another energy level in that atom. This raises the electron to a higher energy level.
If a photon of light hitting the atom has energy greater than the ionization energy, it will be absorbed and the electron absorbing the energy will be ejected from the atom with an energy equal to the photon energy minus the ionization energy.
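The two cases above, resonant absorption between levels and ionization with the excess energy carried off by the electron, can be sketched numerically. A minimal Python sketch, using hydrogen's 13.6 eV ground-state ionization energy (the function names are illustrative):

```python
# Bohr-model photon absorption sketch for a hydrogen-like atom.
# Convention from the text: a free, stationary electron has 0 eV,
# so bound levels have negative energy.

H_IONIZATION_EV = 13.6  # hydrogen ground-state ionization energy, eV

def level_energy(n):
    """Energy of the n-th Bohr level in eV."""
    return -H_IONIZATION_EV / n ** 2

def ejected_electron_energy(photon_ev, ionization_ev=H_IONIZATION_EV):
    """Kinetic energy of the photoelectron, or None if the photon
    energy is below the ionization energy."""
    if photon_ev < ionization_ev:
        return None
    return photon_ev - ionization_ev

# A 10.2 eV photon matches the n=1 -> n=2 gap (resonant absorption,
# no ionization); a 15.0 eV photon ionizes the atom and leaves the
# electron with 15.0 - 13.6 = 1.4 eV of kinetic energy.
```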
| https://en.wikipedia.org/wiki/Photoinduced_charge_separation
Photoinduced electron transfer ( PET ) is an excited state electron transfer process by which an excited electron is transferred from donor to acceptor. [ 1 ] [ 2 ] Due to PET, a charge separation is generated, i.e. , a redox reaction takes place in the excited state (this phenomenon is not observed in Dexter electron transfer ).
Such materials include semiconductors that can be photoactivated like many solar cells , biological systems such as those used in photosynthesis , and small molecules with suitable absorptions and redox states.
It is common to describe where electrons reside as electron bands in bulk materials and as electron orbitals in molecules . For the sake of expedience, the following description is given in molecular terms. When a photon excites a molecule, an electron in a ground state orbital can be excited to a higher energy orbital. This excited state leaves a vacancy in a ground state orbital that can be filled by an electron donor , and it places an electron in a high energy orbital that can be donated to an electron acceptor . In these respects a photoexcited molecule can act as a good oxidizing agent or a good reducing agent .
The end result of both reactions is that an electron is delivered to an orbital that is higher in energy than where it previously resided. This is often described as a charge separated electron-hole pair when working with semiconductors .
In the absence of a proper electron donor or acceptor it is possible for such molecules to undergo ordinary fluorescence emission . The electron transfer is one form of photoquenching .
In many photo-productive systems this charge separation is kinetically isolated by delivery of the electron to a lower energy conductor attached to the p/n junction or into an electron transport chain . In this case some of the energy can be captured to do work. If the electron is not kinetically isolated thermodynamics will take over and the products will react with each other to regenerate the ground state starting material. This process is called recombination and the photon's energy is released as heat.
The reverse process to photoinduced electron transfer is displayed by light emitting diodes (LED) and chemiluminescence , where potential gradients are used to create excited states that decay by light emission. | https://en.wikipedia.org/wiki/Photoinduced_electron_transfer |
Photoinduced phase transition is a technique used in solid-state physics . It is a process in which a nonequilibrium phase is generated from an equilibrium phase by irradiation with high-energy photons ; the nonequilibrium phase is a macroscopic excited domain that has new structural and electronic orders quite different from the starting ground state (equilibrium phase). [ 1 ] [ 2 ]
| https://en.wikipedia.org/wiki/Photoinduced_phase_transitions
Photoinhibition is light-induced reduction in the photosynthetic capacity of a plant , alga , or cyanobacterium . Photosystem II (PSII) is more sensitive to light than the rest of the photosynthetic machinery, and most researchers define the term as light-induced damage to PSII. In living organisms, photoinhibited PSII centres are continuously repaired via degradation and synthesis of the D1 protein of the photosynthetic reaction center of PSII. Photoinhibition is also used in a wider sense, as dynamic photoinhibition, to describe all reactions that decrease the efficiency of photosynthesis when plants are exposed to light.
The first measurements of photoinhibition were published in 1956 by Bessel Kok. [ 1 ] Even in the very first studies, it was obvious that plants have a repair mechanism that continuously repairs photoinhibitory damage. In 1966, Jones and Kok measured the action spectrum of photoinhibition and found that ultraviolet light is highly photoinhibitory. [ 2 ] The visible-light part of the action spectrum was found to have a peak in the red-light region, suggesting that chlorophylls act as photoreceptors of photoinhibition. In the 1980s, photoinhibition became a popular topic in photosynthesis research, and the concept of a damaging reaction counteracted by a repair process was re-invented. Research was stimulated by a paper by Kyle, Ohad and Arntzen in 1984, showing that photoinhibition is accompanied by selective loss of a 32-kDa protein, later identified as the PSII reaction center protein D1. [ 3 ] The photosensitivity of PSII preparations in which the oxygen-evolving complex had been inactivated by chemical treatment was studied in the 1980s and early 1990s. [ 4 ] [ 5 ] A paper by Imre Vass and colleagues in 1992 described the acceptor-side mechanism of photoinhibition. [ 6 ] Measurements of the production of singlet oxygen by photoinhibited PSII provided further evidence for an acceptor-side-type mechanism. [ 7 ] The concept of a repair cycle that continuously repairs photoinhibitory damage evolved and was reviewed by Aro et al. in 1993. [ 8 ] Many details of the repair cycle, including the finding that the FtsH protease plays an important role in the degradation of the D1 protein, have been discovered since. [ 9 ] In 1996, a paper by Tyystjärvi and Aro showed that the rate constant of photoinhibition is directly proportional to light intensity, a result that opposed the former assumption that photoinhibition is caused by the fraction of light energy that exceeds the maximum capability of photosynthesis.
[ 10 ] The following year, laser pulse photoinhibition experiments done by Itzhak Ohad's group led to the suggestion that charge recombination reactions may be damaging because they can lead to production of singlet oxygen. [ 11 ] The molecular mechanism(s) of photoinhibition are constantly under discussion. The newest candidate is the manganese mechanism suggested in 2005 by the group of Esa Tyystjärvi. [ 12 ] A similar mechanism was suggested by the group of Norio Murata, also in 2005. [ 13 ]
Photoinhibition occurs in all organisms capable of oxygenic photosynthesis, from vascular plants to cyanobacteria . [ 14 ] [ 15 ] In both plants and cyanobacteria, blue light causes photoinhibition more efficiently than other wavelengths of visible light, and all wavelengths of ultraviolet light are more efficient than wavelengths of visible light. [ 14 ] Photoinhibition is a series of reactions that inhibit different activities of PSII, but there is no consensus on what these steps are. The activity of the oxygen-evolving complex of PSII is often found to be lost before the rest of the reaction centre loses activity. [ 12 ] [ 13 ] [ 16 ] [ 17 ] However, inhibition of PSII membranes under anaerobic conditions leads primarily to inhibition of electron transfer on the acceptor side of PSII. [ 6 ] Ultraviolet light causes inhibition of the oxygen-evolving complex before the rest of PSII becomes inhibited. Photosystem I (PSI) is less susceptible to light-induced damage than PSII, but slow inhibition of this photosystem has been observed. [ 18 ] Photoinhibition of PSI occurs in chilling-sensitive plants and the reaction depends on electron flow from PSII to PSI.
Photosystem II is damaged by light irrespective of light intensity. [ 16 ] The quantum yield of the damaging reaction in typical leaves of higher plants exposed to visible light, as well as in isolated thylakoid membrane preparations, is in the range of 10 −8 to 10 −7 and independent of the intensity of light. [ 10 ] [ 19 ] This means that one PSII complex is damaged for every 10-100 million photons that are intercepted. Therefore, photoinhibition occurs at all light intensities and the rate constant of photoinhibition is directly proportional to light intensity. Some measurements suggest that dim light causes damage more efficiently than strong light. [ 11 ]
The mechanism(s) of photoinhibition are under debate; several mechanisms have been suggested. [ 16 ] Reactive oxygen species , especially singlet oxygen, have a role in the acceptor-side, singlet-oxygen, and low-light mechanisms. In the manganese mechanism and the donor-side mechanism, reactive oxygen species do not play a direct role. Photoinhibited PSII produces singlet oxygen, [ 7 ] and reactive oxygen species inhibit the repair cycle of PSII by inhibiting protein synthesis in the chloroplast . [ 20 ]
Strong light causes the reduction of the plastoquinone pool, which leads to protonation and double reduction (and double protonation) of the Q A electron acceptor of Photosystem II. The protonated and double-reduced forms of Q A do not function in electron transport. Furthermore, charge recombination reactions in inhibited Photosystem II are expected to lead to the triplet state of the primary donor (P 680 ) more probably than the same reactions in active PSII. Triplet P 680 may react with oxygen to produce harmful singlet oxygen. [ 6 ]
If the oxygen-evolving complex is chemically inactivated, the remaining electron transfer activity of PSII becomes very sensitive to light. [ 4 ] [ 19 ] It has been suggested that even in a healthy leaf, the oxygen-evolving complex does not always function in all PSII centers, and those centers are prone to rapid irreversible photoinhibition. [ 21 ]
A photon absorbed by the manganese ions of the oxygen-evolving complex triggers inactivation of the oxygen-evolving complex. Further inhibition of the remaining electron transport reactions occurs like in the donor-side mechanism. The mechanism is supported by the action spectrum of photoinhibition. [ 12 ]
Inhibition of PSII is caused by singlet oxygen produced either by weakly coupled chlorophyll molecules [ 22 ] or by cytochromes or iron–sulfur centers. [ 23 ]
Charge recombination reactions of PSII cause the production of triplet P 680 and, as a consequence, singlet oxygen. Charge recombination is more probable under dim light than under higher light intensities. [ 11 ]
Photoinhibition follows simple first-order kinetics if measured from a lincomycin -treated leaf, cyanobacterial or algal cells, or isolated thylakoid membranes, in which concurrent repair does not disturb the kinetics. Data from the group of W. S. Chow indicate that in leaves of pepper ( Capsicum annuum ), the first-order pattern is replaced by a pseudo-equilibrium even if the repair reaction is blocked. The deviation has been explained by assuming that photoinhibited PSII centers protect the remaining active ones. [ 24 ] Both visible and ultraviolet light cause photoinhibition, ultraviolet wavelengths being much more damaging. [ 12 ] [ 23 ] [ 25 ] Some researchers consider ultraviolet- and visible-light-induced photoinhibition as two different reactions, [ 26 ] while others stress the similarities between the inhibition reactions occurring under different wavelength ranges. [ 12 ] [ 13 ]
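When repair is blocked as described above, the loss of active PSII can be modeled as simple exponential decay with a rate constant proportional to photon flux. A minimal sketch, assuming a damage quantum yield at the upper end of the 10⁻⁸ to 10⁻⁷ range quoted earlier; the photon flux values are illustrative:

```python
import math

# First-order photoinhibition when repair is blocked (e.g. by lincomycin):
#   dA/dt = -k_PI * A, with k_PI = phi * (photon interception rate).

PHI_DAMAGE = 1e-7  # assumed quantum yield of damage per intercepted photon

def rate_constant(photon_flux, quantum_yield=PHI_DAMAGE):
    """k_PI in 1/s for a given photon interception rate per PSII (photons/s)."""
    return quantum_yield * photon_flux

def active_fraction(t_seconds, k_pi):
    """Fraction of PSII centres still active after t_seconds of illumination."""
    return math.exp(-k_pi * t_seconds)

# Doubling the light intensity doubles k_PI, reproducing the direct
# proportionality between rate constant and light intensity noted above.
```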
Photoinhibition occurs continuously when plants or cyanobacteria are exposed to light, and the photosynthesizing organism must, therefore, continuously repair the damage. [ 8 ] The PSII repair cycle, occurring in chloroplasts and in cyanobacteria, consists of degradation and synthesis of the D1 protein of the PSII reaction centre, followed by activation of the reaction center. Due to the rapid repair, most PSII reaction centers are not photoinhibited even if a plant is grown in strong light. However, environmental stresses, for example, extreme temperatures, salinity , and drought, limit the supply of carbon dioxide for use in carbon fixation , which decreases the rate of repair of PSII. [ 27 ]
In photoinhibition studies, repair is often stopped by applying an antibiotic (lincomycin or chloramphenicol ) to plants or cyanobacteria, which blocks protein synthesis in the chloroplast . Protein synthesis occurs only in an intact sample, so lincomycin is not needed when photoinhibition is measured from isolated membranes. [ 27 ] The repair cycle of PSII recirculates other subunits of PSII (except for the D1 protein) from the inhibited unit to the repaired one.
Plants have mechanisms that protect against adverse effects of strong light. The most studied biochemical protective mechanism is non-photochemical quenching of excitation energy. [ 28 ] Visible-light-induced photoinhibition is ~25% faster in an Arabidopsis thaliana mutant lacking non-photochemical quenching than in the wild type . It is also apparent that turning or folding of leaves, as occurs, e.g., in Oxalis species in response to exposure to high light, protects against photoinhibition.
Because there are a limited number of photosystems in the electron transport chain , photosynthetic organisms must find a way to combat excess light and prevent photo-oxidative stress, and likewise photoinhibition, at all costs. In an effort to avoid damage to the D1 subunit of PSII and subsequent formation of ROS , the plant cell employs accessory proteins to carry away the excess excitation energy from incoming sunlight; namely, the PsbS protein. Elicited by a relatively low luminal pH, this rapid response to excess energy allows it to be given off as heat, reducing damage.
The studies of Tibiletti et al. (2016) found that PsbS is the main protein involved in sensing changes in pH and can therefore rapidly accumulate in the presence of high light. This was determined by performing SDS-PAGE and immunoblot assays that located PsbS in the green alga Chlamydomonas reinhardtii . Their data indicated that the PsbS protein belongs to a multigene family termed LhcSR proteins, which includes the proteins that catalyze the conversion of violaxanthin to zeaxanthin , as previously mentioned. PsbS is involved in changing the orientation of the photosystems at times of high light to prompt the arrangement of a quenching site in the light-harvesting complex.
Additionally, studies conducted by Glowacka et al. (2018) show that a higher concentration of PsbS is directly correlated with inhibited stomatal aperture , but without affecting CO 2 intake, thereby increasing the water use efficiency of the plant. This was determined by controlling the expression of PsbS in Nicotiana tabacum through a series of genetic modifications designed to test PsbS levels and activity, including DNA transformation and transcription followed by protein expression. Research shows that stomatal conductance is heavily dependent on the presence of the PsbS protein. Thus, when PsbS was overexpressed in a plant, water-use efficiency was seen to significantly improve, suggesting new methods for prompting higher, more productive crop yields.
These recent discoveries tie together two of the largest mechanisms in phytobiology: the influence of the light reactions upon stomatal aperture via the Calvin-Benson cycle . To elaborate, the Calvin-Benson cycle, occurring in the stroma of the chloroplast , obtains its CO 2 from the atmosphere, which enters upon stomatal opening. The energy to drive the Calvin-Benson cycle is a product of the light reactions. Thus, the relationship is as follows: when PsbS is silenced, as expected, the excitation pressure at PSII increases. This in turn alters the redox state of quinone A (Q A ), with no change in the concentration of carbon dioxide in the intracellular airspaces of the leaf, ultimately increasing stomatal conductance . The inverse relationship also holds true: when PsbS is overexpressed, the excitation pressure at PSII decreases, the redox state of Q A shifts accordingly, and there is again no change in the concentration of carbon dioxide in the intracellular airspaces of the leaf. Together these factors produce a net decrease in stomatal conductance.
Photoinhibition can be measured from isolated thylakoid membranes or their subfractions, or from intact cyanobacterial cells by measuring the light-saturated rate of oxygen evolution in the presence of an artificial electron acceptor ( quinones and dichlorophenol-indophenol have been used).
The degree of photoinhibition in intact leaves can be measured using a fluorimeter to measure the ratio of variable to maximum value of chlorophyll a fluorescence (F V /F M ). [ 16 ] This ratio can be used as a proxy of photoinhibition because more energy is emitted as fluorescence from Chlorophyll a when many excited electrons from PSII are not captured by the acceptor and decay back to their ground state.
When measuring F V /F M , the leaf must be incubated in the dark for at least 10 minutes, preferably longer, before the measurement, in order to let non-photochemical quenching relax.
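The F V /F M ratio described above is computed from two dark-adapted fluorescence readings, the minimal level F 0 and the maximal level F M . A short Python sketch; the sample readings are illustrative, though a healthy dark-adapted leaf does typically show a ratio near 0.83:

```python
def fv_fm(f0, fm):
    """Maximum quantum yield of PSII photochemistry from dark-adapted
    chlorophyll fluorescence: F_V / F_M = (F_M - F_0) / F_M."""
    if fm <= 0 or not (0 <= f0 <= fm):
        raise ValueError("need 0 <= F0 <= Fm and Fm > 0")
    return (fm - f0) / fm

# Illustrative readings (arbitrary fluorescence units):
healthy = fv_fm(f0=300.0, fm=1765.0)    # ~0.83, typical unstressed leaf
inhibited = fv_fm(f0=350.0, fm=1000.0)  # lower ratio after photoinhibition
```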
Photoinhibition can also be induced with short flashes of light using either a pulsed laser or a xenon flash lamp . When very short flashes are used, the photoinhibitory efficiency of the flashes depends on the time difference between the flashes. [ 11 ] This dependence has been interpreted to indicate that the flashes cause photoinhibition by inducing recombination reactions in PSII, with subsequent production of singlet oxygen. The interpretation has been criticized by noting that the photoinhibitory efficiency of xenon flashes depends on the energy of the flashes even if such strong flashes are used that they would saturate the formation of the substrate of the recombination reactions. [ 12 ]
Some researchers prefer to define the term “photoinhibition” so that it contains all reactions that lower the quantum yield of photosynthesis when a plant is exposed to light. [ 29 ] [ 30 ] In this case, the term "dynamic photoinhibition" comprises phenomena that reversibly down-regulate photosynthesis in the light and the term "photodamage" or "irreversible photoinhibition" covers the concept of photoinhibition used by other researchers. The main mechanism of dynamic photoinhibition is non-photochemical quenching of excitation energy absorbed by PSII. Dynamic photoinhibition is acclimation to strong light rather than light-induced damage, and therefore "dynamic photoinhibition" may actually protect the plant against "photoinhibition".
Photoinhibition may cause coral bleaching . [ 27 ] | https://en.wikipedia.org/wiki/Photoinhibition |
In chemistry , a photoinitiator is a molecule that creates reactive species ( free radicals , cations or anions ) when exposed to radiation ( UV or visible ). Synthetic photoinitiators are key components in photopolymers (for example, photo-curable coatings, adhesives and dental restoratives).
Some small molecules in the atmosphere can also act as photoinitiators by decomposing to give free radicals (in photochemical smog ). For instance, nitrogen dioxide ( NO 2 ) is produced in large quantities by gasoline -burning internal combustion engines . NO 2 in the troposphere gives smog its brown coloration and catalyzes production of toxic ground-level ozone ( O 3 ). Molecular oxygen ( O 2 ) also serves as a photoinitiator in the stratosphere , breaking down into atomic oxygen and combining with O 2 in order to form the ozone in the ozone layer .
Photoinitiators can create reactive species by different pathways including photodissociation and electron transfer . As an example of dissociation, hydrogen peroxide can undergo homolytic cleavage, with the O−O bond cleaving to form two hydroxyl radicals .
Certain azo compounds, such as azobisisobutyronitrile , can also photolytically cleave, forming two alkyl radicals and nitrogen gas.
These free radicals can now promote other reactions.
Since molecular oxygen can abstract H atoms from certain radicals, the HOO· radical is easily created. This particular radical can further abstract H atoms, creating H 2 O 2 , or hydrogen peroxide; peroxides can further cleave photolytically into two hydroxyl radicals. More commonly, HOO· can react with free oxygen atoms to yield a hydroxyl radical (·OH) and oxygen gas. In both cases, the ·OH radicals formed can serve to oxidize organic compounds in the atmosphere. [ 1 ]
Nitrogen dioxide can also be photolytically cleaved by photons of wavelength less than 400 nm [ 2 ] producing atomic oxygen and nitric oxide .
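The ~400 nm threshold quoted above can be checked against photon energetics: a mole of 400 nm photons carries roughly the energy needed to break the O−NO bond. A quick Python check (physical constants are standard values; the ~305 kJ/mol bond energy is an approximate literature value):

```python
# Photon energy per mole at a given wavelength: E = N_A * h * c / lambda.
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
N_A = 6.022e23  # Avogadro constant, 1/mol

def photon_energy_kj_per_mol(wavelength_nm):
    """Energy of one mole of photons at the given wavelength, in kJ/mol."""
    return H * C / (wavelength_nm * 1e-9) * N_A / 1000.0

# At 400 nm this gives ~299 kJ/mol, close to the ~305 kJ/mol O-NO bond
# dissociation energy (approximate value), so only shorter wavelengths
# carry enough energy per photon to dissociate NO2.
```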
Atomic oxygen is a highly reactive species, and can abstract a H atom from anything, including water.
Nitrogen dioxide can be regenerated through a reaction between certain peroxy-containing radicals and NO.
In the stratosphere, molecular oxygen (O 2 ) is an important photoinitiator that begins the ozone -production process in the ozone layer . Oxygen can be photolyzed into atomic oxygen by light with wavelength less than 240 nm. [ 3 ]
Atomic oxygen can then combine with more molecular oxygen to form ozone.
However, ozone can also be photolyzed back into O and O 2 .
Furthermore, atomic oxygen and ozone can combine into O 2 .
This set of reactions governs the production of ozone, and the reactions can be combined to calculate the equilibrium ozone concentration.
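Under the steady-state (odd-oxygen) approximation, the four reactions above combine into a closed-form estimate of the equilibrium ozone concentration. A sketch, with rate coefficients left as parameters since realistic values vary strongly with altitude:

```python
import math

# Chapman mechanism (reactions as in the text):
#   (1) O2 + hv -> 2 O           photolysis rate j1
#   (2) O + O2 + M -> O3 + M     rate constant k2
#   (3) O3 + hv -> O + O2        photolysis rate j3
#   (4) O + O3 -> 2 O2           rate constant k4
# Balancing production and loss of odd oxygen (O + O3) gives:
#   [O3] = [O2] * sqrt(j1 * k2 * [M] / (j3 * k4))

def ozone_steady_state(o2, m, j1, k2, j3, k4):
    """Equilibrium ozone concentration from the Chapman cycle."""
    return o2 * math.sqrt(j1 * k2 * m / (j3 * k4))
```

The square-root form makes the qualitative behaviour clear: faster O2 photolysis (j1) raises the ozone level, while faster ozone photolysis (j3) or O + O3 recombination (k4) lowers it.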
Azobisisobutyronitrile is a white powder often used as a photoinitiator for vinyl-based polymers such as polyvinyl chloride , also known as PVC. Because this particular photoinitiator produces nitrogen gas (N 2 ) upon decomposition, it is often used as a blowing agent to change the shape and/or texture of plastics.
Benzoyl peroxide, much like azobisisobutyronitrile, is a white powder used as a photoinitiator in various commercial and industrial processes, including plastics production. Unlike AIBN, however, benzoyl peroxide produces oxygen gas upon decomposing, giving this compound a host of medical uses as well. [ 4 ]
Upon contact with the skin, benzoyl peroxide breaks down, producing oxygen gas, among other things. The oxygen gas is absorbed into the pores of the skin, where it kills off the acne-causing bacterium Cutibacterium acnes .
In addition, the free radicals produced can break down dead skin cells. Clearing out these dead cells prevents pore blockage and, by extension, acne breakouts. [ 5 ]
Camphorquinone (CQ) is a photosensitiser used with an amine system that generates primary radicals upon light irradiation. These free radicals then attack the double bonds of resin monomers, resulting in polymerization. The physical properties of the cured resins are affected by the generation of primary radicals during the initial stage of polymerization.
Irgacure 819 (BAPO; bis(2,4,6-trimethylbenzoyl)phenylphosphine oxide) is a Norrish type I photoinitiator used in polymerization processes such as two-photon polymerization . [ 7 ] When exposed to light it forms four radicals (2, 3, 5) per decomposed molecule (1), making it highly efficient at initiating polymerization. The second set of radicals forms through abstraction or chain transfer, further driving the reaction. [ 8 ] | https://en.wikipedia.org/wiki/Photoinitiator
Photoionisation cross section , in the context of condensed matter physics , refers to the probability of a particle (usually an electron ) being emitted from its electronic state upon photon absorption.
Photoemission is a useful experimental method for the determination and study of electronic states. Sometimes a small amount of material deposited on a surface makes only a weak contribution to the photoemission spectra , which makes its identification very difficult.
Knowledge of the cross section of a material can help to detect thin layers or 1D nanowires on a substrate . The right choice of photon energy can enhance the signal from a small amount of material deposited on a surface; without it, resolving the different spectra will not be possible. [ 1 ]
| https://en.wikipedia.org/wiki/Photoionisation_cross_section
Photoionization is the physical process in which an ion is formed from the interaction of a photon with an atom or molecule . [ 2 ]
Not every interaction between a photon and an atom, or molecule, will result in photoionization. The probability of photoionization is related to the photoionization cross section of the species – the probability of an ionization event conceptualized as a hypothetical cross-sectional area. This cross section depends on the energy of the photon (proportional to its wavenumber) and the species being considered i.e. it depends on the structure of the molecular species. In the case of molecules, the photoionization cross-section can be estimated by examination of Franck-Condon factors between a ground-state molecule and the target ion. This can be initialized by computing the vibrations of a molecule and associated cation (post ionization) using quantum chemical software e.g. QChem. For photon energies below the ionization threshold, the photoionization cross-section is near zero. But with the development of pulsed lasers it has become possible to create extremely intense, coherent light where multi-photon ionization may occur via sequences of excitations and relaxations. At even higher intensities (around 10 15 – 10 16 W/cm 2 of infrared or visible light), non-perturbative phenomena such as barrier suppression ionization [ 3 ] and rescattering ionization [ 4 ] are observed.
Several photons of energy below the ionization threshold may actually combine their energies to ionize an atom. This probability decreases rapidly with the number of photons required, but the development of very intense, pulsed lasers still makes it possible. In the perturbative regime (below about 10 14 W/cm 2 at optical frequencies), the probability of absorbing N photons depends on the laser-light intensity I as I N . [ 5 ] For higher intensities, this dependence becomes invalid due to the then occurring AC Stark effect . [ 6 ]
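The I^N dependence in the perturbative regime means the ionization rate is extraordinarily steep in intensity. A one-line sketch (the generalized cross section below is a placeholder, not a measured value):

```python
def n_photon_rate(intensity, n_photons, sigma_n=1.0):
    """Perturbative N-photon ionization rate, proportional to I**N.
    sigma_n stands in for the generalized cross section (placeholder)."""
    return sigma_n * intensity ** n_photons

# Doubling the intensity raises a 3-photon rate by a factor of 2**3 = 8,
# which is why multiphoton ionization practically requires intense
# pulsed lasers.
```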
Resonance-enhanced multiphoton ionization (REMPI) is a technique applied to the spectroscopy of atoms and small molecules in which a tunable laser can be used to access an excited intermediate state . [ citation needed ]
Above-threshold ionization (ATI) [ 7 ] is an extension of multi-photon ionization where even more photons are absorbed than actually would be necessary to ionize the atom. The excess energy gives the released electron higher kinetic energy than the usual case of just-above threshold ionization. More precisely, the system will have multiple peaks in its photoelectron spectrum which are separated by the photon energies, indicating that the emitted electron has more kinetic energy than in the normal (lowest possible number of photons) ionization case. The electrons released from the target will have approximately an integer number of photon-energies more kinetic energy. [ citation needed ]
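The comb of ATI photoelectron peaks, separated by one photon energy, follows from energy conservation: E_s = (n0 + s)·E_photon − E_ionization, where n0 is the minimum photon number. A sketch; the 1.55 eV photon (roughly an 800 nm laser) and 13.6 eV ionization energy used in the comment are illustrative choices:

```python
import math

def ati_peak_energies(photon_ev, ionization_ev, extra_photons=5):
    """Photoelectron peak energies (eV) for above-threshold ionization:
    E_s = (n0 + s) * photon_ev - ionization_ev for s = 0..extra_photons,
    where n0 is the minimum number of photons needed to ionize."""
    n0 = math.ceil(ionization_ev / photon_ev)
    return [(n0 + s) * photon_ev - ionization_ev
            for s in range(extra_photons + 1)]

# For 1.55 eV photons and a 13.6 eV ionization energy, nine photons are
# needed; the peaks start near 0.35 eV and are spaced 1.55 eV apart.
```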
When either the laser intensity is further increased or a longer wavelength is applied as compared with the regime in which multi-photon ionization takes place, a quasi-stationary approach can be used and results in the distortion of the atomic potential in such a way that only a relatively low and narrow barrier between a bound state and the continuum states remains. Then, the electron can tunnel through or for larger distortions even overcome this barrier. These phenomena are called tunnel ionization and over-the-barrier ionization , respectively. [ citation needed ] | https://en.wikipedia.org/wiki/Photoionization |
A photoionization detector or PID is a type of gas detector .
Typical photoionization detectors measure volatile organic compounds and other gases in concentrations from sub parts per billion to 10 000 parts per million (ppm). The photoionization detector is an efficient and inexpensive detector for many gas and vapor analytes. PIDs produce instantaneous readings, operate continuously, and are commonly used as detectors for gas chromatography or as hand-held portable instruments. Hand-held, battery-operated versions are widely used in military, industrial, and confined working facilities for health and safety. Their primary use is for monitoring possible worker exposure to volatile organic compounds (VOCs) such as solvents, fuels, degreasers, plastics and their precursors, heat transfer fluids, lubricants, etc. during manufacturing processes and waste handling.
Portable PIDs are used for monitoring:
In a photoionization detector, high-energy photons , typically in the vacuum ultraviolet (VUV) range, ionize molecules into positively charged ions . [ 2 ] As compounds enter the detector they are bombarded by high-energy UV photons and are ionized when they absorb the UV light, resulting in ejection of electrons and the formation of positively charged ions. The ions produce an electric current , which is the signal output of the detector . The greater the concentration of the component, the more ions are produced, and the greater the current. The current is amplified and displayed on an ammeter or digital concentration display. The ions can undergo numerous reactions, including reaction with oxygen or water vapor, rearrangement, and fragmentation. A few of them may recapture an electron within the detector to reform their original molecules; however, only a small portion of the airborne analytes are ionized to begin with, so the practical impact of this (if it occurs) is usually negligible. Thus, PIDs are non-destructive and can be used before other sensors in multiple-detector configurations.
The PID will only respond to components that have ionization energies similar to or lower than the energy of the photons produced by the PID lamp. [ 3 ] As stand-alone detectors, PIDs are broad-band and not selective, as they ionize everything with an ionization energy less than or equal to the lamp photon energy. The more common commercial lamps have photon energy upper limits of approximately 8.4 eV, 10.0 eV, 10.6 eV, and 11.7 eV. The major and minor components of clean air all have ionization energies above 12.0 eV and thus do not interfere significantly in the measurement of VOCs, which typically have ionization energies below 12.0 eV. [ 4 ]
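The lamp-selection rule above can be sketched as a simple filter: a compound responds only if its ionization energy is at or below the lamp's photon energy. The ionization energies below are approximate literature values and the function is purely illustrative:

```python
# Sketch: which compounds a PID lamp can detect, assuming the simple
# rule that a compound responds when its ionization energy (IE) is at
# or below the lamp's photon energy. IE values are approximate
# literature figures in eV.
IONIZATION_ENERGY_EV = {
    "benzene": 9.24,
    "toluene": 8.83,
    "isobutylene": 9.22,   # common PID calibration gas
    "hexane": 10.13,
    "methane": 12.61,      # above all common lamps -> no response
    "carbon monoxide": 14.01,
}

def detectable(lamp_ev, compounds=IONIZATION_ENERGY_EV):
    """Return the compounds a lamp of the given photon energy can ionize."""
    return sorted(name for name, ie in compounds.items() if ie <= lamp_ev)

print(detectable(10.6))  # ['benzene', 'hexane', 'isobutylene', 'toluene']
print(detectable(9.0))   # ['toluene']
```

Note that methane and carbon monoxide fall outside even the 11.7 eV lamp, consistent with the text's observation that clean-air components do not interfere.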
PID lamp photon emissions depend on the type of fill gas (which defines the light energy produced) and the lamp window, which affects the energy of photons that can exit the lamp:
The 10.6 eV lamp is the most common because it has strong output, has the longest life and responds to many compounds. In approximate order from most sensitive to least sensitive, these compounds include:
The first commercial application of photoionization detection was in 1973 as a hand-held instrument for the purpose of detecting leaks of VOCs, specifically vinyl chloride monomer (VCM), at a chemical manufacturing facility. The photoionization detector was applied to gas chromatography (GC) three years later, in 1976. [ 5 ] A PID is highly selective when coupled with a chromatographic technique or a pre-treatment tube such as a benzene-specific tube. Broader cuts of selectivity for easily ionized compounds can be obtained by using a lower energy UV lamp. This selectivity can be useful when analyzing mixtures in which only some of the components are of interest.
The PID is usually calibrated using isobutylene , and other analytes may produce a relatively greater or lesser response on a concentration basis. Although many PID manufacturers provide the ability to program an instrument with a correction factor for quantitative detection of a specific chemical, the broad selectivity of the PID means that the user must know the identity of the gas or vapor species to be measured with high certainty. [ 4 ] If a correction factor for benzene is entered into the instrument, but hexane vapor is measured instead, the lower relative detector response (higher correction factor) for hexane would lead to underestimation of the actual airborne concentration of hexane.
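The correction-factor arithmetic described above can be sketched as follows; the factor values are illustrative placeholders (real factors are manufacturer- and lamp-specific), but the direction of the benzene/hexane error matches the example in the text:

```python
# Sketch of PID correction-factor arithmetic. A PID calibrated on
# isobutylene reads in isobutylene-equivalent ppm; multiplying by the
# compound's correction factor (CF) estimates the true concentration.
# The CF values below are illustrative only; real values depend on the
# manufacturer and lamp energy.
CORRECTION_FACTOR = {
    "benzene": 0.5,  # PID is roughly twice as sensitive to benzene as to isobutylene
    "hexane": 4.3,   # PID is much less sensitive to hexane
}

def true_concentration_ppm(reading_ppm, compound):
    """Estimate the actual concentration from an isobutylene-calibrated reading."""
    return reading_ppm * CORRECTION_FACTOR[compound]

reading = 10.0  # ppm, isobutylene-equivalent
print(true_concentration_ppm(reading, "benzene"))  # 5.0
print(true_concentration_ppm(reading, "hexane"))   # 43.0
```

The hazard described in the text is visible here: a 10 ppm reading interpreted with benzene's factor reports 5 ppm, while the same reading caused by hexane actually corresponds to 43 ppm, a severe underestimate.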
With a gas chromatograph, filter tube, or other separation technique upstream of the PID, matrix effects are generally avoided because the analyte enters the detector isolated from interfering compounds.
Response of stand-alone PIDs is generally linear from the ppb range up to at least a few thousand ppm. In this range, response to mixtures of components is also linearly additive. [ 4 ] At higher concentrations, response gradually deviates from linearity because of 1) recombination of oppositely charged ions formed in close proximity and/or 2) absorption of UV light without ionization. [ 4 ] The signal produced by a PID may be quenched when measuring in high-humidity environments, [ 6 ] or when a compound such as methane is present in high concentrations of ≥1% by volume. [ 7 ] This attenuation is due to the ability of water, methane, and other compounds with high ionization energies to absorb the photons emitted by the UV lamp without leading to the production of an ion current. This reduces the number of energetic photons available to ionize target analytes.
In chemistry , photoisomerization is a form of isomerization induced by photoexcitation. [ 2 ] Both reversible and irreversible photoisomerizations are known for photoswitchable compounds. The term "photoisomerization" usually, however, refers to a reversible process.
Photoisomerization of the compound retinal in the eye allows for vision.
Photoisomerizable substrates have been put to practical use, for instance, in pigments for rewritable CDs , DVDs , and 3D optical data storage solutions. In addition, interest in photoisomerizable molecules has been aimed at molecular devices, such as molecular switches , [ 3 ] [ 4 ] molecular motors , [ 5 ] and molecular electronics .
Another class of device that uses the photoisomerization process is as an additive in liquid crystals to change their linear and nonlinear properties. [ 6 ] Through photoisomerization it is possible to induce a molecular reorientation in the liquid crystal bulk, which is used in holography , [ 7 ] as a spatial filter [ 8 ] or for optical switching. [ 9 ]
Azobenzenes, [ 1 ] stilbenes , [ 10 ] spiropyrans , [ 11 ] are prominent classes of compounds subject to photoisomerism.
In the presence of a catalyst, norbornadiene converts to quadricyclane under ~300 nm UV radiation . When converted back to norbornadiene, quadricyclane's ring strain energy is liberated in the form of heat ( ΔH = −89 kJ/mol). This reaction has been proposed as a means of storing solar energy ( photoswitches ). [ 12 ]
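A back-of-envelope calculation illustrates the storage capacity implied by the quoted ΔH. Both isomers are C7H8, with a molar mass of about 92.14 g/mol; that value is assumed here rather than taken from the text:

```python
# Back-of-envelope gravimetric energy density of the quadricyclane
# solar store, using the DeltaH = -89 kJ/mol quoted in the text.
# Norbornadiene and quadricyclane are both C7H8.
delta_h_kj_per_mol = 89.0
molar_mass_g_per_mol = 7 * 12.011 + 8 * 1.008  # C7H8, about 92.14 g/mol

energy_density_kj_per_g = delta_h_kj_per_mol / molar_mass_g_per_mol
print(f"{energy_density_kj_per_g:.2f} kJ/g")  # 0.97 kJ/g (about 0.97 MJ/kg)
```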
Photoisomerization behavior can be roughly categorized into several classes. Two major classes are trans – cis (or E – Z ) conversion, and open–closed ring transition. Examples of the former include stilbene and azobenzene . Such compounds have a double bond , and rotation or inversion around the double bond affords isomerization between the two states. [ 13 ] Examples of the latter include fulgide and diarylethene . Such compounds undergo bond cleavage and bond formation upon irradiation with particular wavelengths of light. Still another class is the di-π-methane rearrangement .
Many coordination complexes are photosensitive, and many of these undergo photoisomerization. [ 14 ] One case is the conversion of the colorless cis- bis(triphenylphosphine)platinum chloride to the yellow trans isomer.
Some coordination complexes undergo change in their spin state upon illumination, i.e. these are photosensitive spin crossover complexes. [ 15 ]
Photokinesis is a change in the velocity of movement of an organism as a result of changes in light intensity. [ 1 ] The alteration in speed is independent of the direction from which the light is shining. Photokinesis is described as positive if the velocity of travel is greater with an increase in light intensity and negative if the velocity is slower. [ 2 ] If a group of organisms with a positive photokinetic response is swimming in a partially shaded environment, there will be fewer organisms per unit of volume in the sunlit portion than in the shaded parts. [ 3 ] This may be beneficial for the organisms if it is unfavourable to their predators , or it may be propitious to them in their quest for prey. [ 4 ]
In photosynthetic prokaryotes , the mechanism for photokinesis appears to be an energetic process. In cyanobacteria , for example, an increase in illumination results in an increase of photophosphorylation which enables an increase in metabolic activity. However the behaviour is also found among eukaryotic microorganisms, including those like Astasia longa which are not photosynthetic, and in these, the mechanism is not fully understood. [ 2 ] In Euglena gracilis , the rate of swimming has been shown to speed up with increased light intensity until the light reaches a certain saturation level, beyond which the swimming rate declines. [ 5 ]
The sea slug Discodoris boholiensis also displays positive photokinesis; it is nocturnal and moves slowly at night, but much faster when caught in the open during daylight hours. Moving faster in the exposed environment should reduce predation and enable it to conceal itself as soon as possible. [ 6 ] Photokinesis is common in tunicate larvae, which accumulate in areas with low light intensity just before settlement, [ 7 ] and the behaviour is also present in juvenile fish such as sockeye salmon smolts. [ 8 ]
A photolabile protecting group ( PPG ; also known as: photoremovable, photosensitive, or photocleavable protecting group ) is a chemical modification to a molecule that can be removed with light . PPGs enable high degrees of chemoselectivity as they allow researchers to control spatial, temporal and concentration variables with light. Control of these variables is valuable as it enables multiple PPG applications, including orthogonality in systems with multiple protecting groups. As the removal of a PPG does not require chemical reagents , the photocleavage of a PPG is often referred to as "traceless reagent processes", and is often used in biological model systems and multistep organic syntheses . [ 1 ] [ 2 ] [ 3 ] Since their introduction in 1962, [ 4 ] numerous PPGs have been developed and utilized in a variety of wide-ranging applications from protein science [ 5 ] to photoresists. Due to the large number of reported protecting groups, PPGs are often categorized by their major functional group(s); three of the most common classifications are detailed below.
The first reported use of a PPG in the scientific literature was by Barltrop and Schofield, who in 1962 used 253.7 nm light to release glycine from N-benzylglycine . [ 4 ] Following this initial report, the field rapidly expanded throughout the 1970s as Kaplan [ 6 ] and Epstein [ 7 ] studied PPGs in a variety of biochemical systems. During this time, a series of standards for evaluating PPG performance was compiled. An abbreviated list of these standards, which are commonly called the Lester rules, [ 8 ] or Sheehan criteria [ 9 ] are summarized below:
Nitrobenzyl-based PPGs are often considered the most commonly used PPGs. [ 2 ] [ 3 ] These PPGs are traditionally classified as undergoing a Norrish Type II reaction , as their mechanism was first described by Norrish in 1935. [ 10 ] Norrish elucidated that an incident photon (200 nm < λ < 320 nm) breaks the N=O π-bond in the nitro group , bringing the protected substrate into a diradical excited state . Subsequently, the nitrogen radical abstracts a proton from the benzylic carbon, forming the aci -nitro compound. Depending on pH, solvent and the extent of substitution, the aci -nitro intermediate decays at a rate of roughly 10 2 –10 4 s −1 . [ 2 ] Following resonance of the π-electrons, a five-membered ring is formed before the PPG is cleaved, yielding 2-nitrosobenzaldehyde and a carboxylic acid .
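Assuming simple first-order kinetics (an assumption, since the text quotes only a rate range), the decay rates of 10 2 –10 4 s −1 correspond to half-lives between roughly 7 ms and 70 μs:

```python
import math

# The aci-nitro intermediate decays at roughly 1e2 to 1e4 per second
# (see text). Assuming first-order kinetics, the half-life is
# t_1/2 = ln(2) / k.
def half_life_s(k_per_s):
    return math.log(2) / k_per_s

for k in (1e2, 1e4):  # s^-1
    print(f"k = {k:.0e} /s  ->  t_1/2 = {half_life_s(k) * 1e3:.3f} ms")
```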
Overall, nitrobenzyl-based PPGs are highly general. The list of functional groups that can be protected include, but are not limited to, phosphates , carboxylates , carbonates , carbamates , thiolates , phenolates and alkoxides . [ 2 ] Additionally, while the rate varies with a number of variables, including choice of solvent and pH , the photodeprotection has been exhibited in both solution and in the solid-state . Under optimal conditions, the photorelease can proceed with >95% yield. [ 2 ] Nevertheless, the photoproducts of this PPG are known to undergo imine formation when irradiated at wavelengths above 300 nm. [ 11 ] [ 12 ] [ 13 ] This side product often competes for incident radiation, which may lead to decreased chemical and quantum yields.
In attempts to raise the chemical and quantum yields of nitrobenzyl-based PPGs, several beneficial modifications have been identified. The largest increase in quantum yield and reaction rate can be achieved through substitution at the benzylic carbon. [ 14 ] However, potential substitutions must leave one hydrogen atom so the photodegradation can proceed uninhibited.
Additional modifications have targeted the aromatic chromophore. Specifically, multiple studies have confirmed that the use of a 2,6-dinitrobenzyl PPG increases reaction yield. [ 15 ] [ 16 ] [ 17 ] [ 18 ] Additionally, depending on the leaving group, the presence of a second nitro-group may nearly quadruple the quantum yield (e.g. Φ = 0.033 to Φ = 0.12 when releasing a carbonate at 365 nm). [ 2 ] [ 19 ] While one may credit the increase in efficiency to the electronic effects of the second nitro group, this is not the case. Analogous systems with a 2-cyano-6-nitrobenzyl PPG exhibit similar electron-withdrawing effects, but do not provide such a large increase in efficiency. Therefore, the increase in efficiency is likely due to the increased probability of achieving the aci- nitro state; with two nitro groups, an incoming photon will be twice as likely to promote the compound into an excited state.
Finally, changing the excitation wavelength of the PPG may be advantageous. For example, if two PPGs have different excitation wavelengths one group may be removed while the other is left in place. To this end, several nitrobenzyl based PPGs display additional functionality. Common modifications include the use of 2-nitroveratryl (NV) [ 20 ] or 6-nitropiperonylmethyl (NP). [ 21 ] Both of these modifications induced red-shifting in the compounds' absorption spectra. [ 20 ]
The phenacyl PPG is the archetypal example of a carbonyl-based PPG. [ 2 ] Under this motif, the PPG is attached to the protected substrate at the αβ-carbon , and can exhibit varied photodeprotection mechanisms based on the phenacyl skeleton, substrate identity and reaction conditions. [ 22 ] [ 23 ] [ 24 ] [ 25 ] Overall, phenacyl PPGs can be used to protect sulfonates , phosphates, carboxylates and carbamates.
As with nitrobenzyl-based PPGs, several modifications are known. For example, the 3',5'-dimethoxybenzoin PPG (DMB) contains a 3,5-dimethoxyphenyl substituent on the carbonyl's α-carbon. [ 19 ] Under certain conditions, DMB has exhibited quantum yields as high as 0.64. [ 2 ] Additionally, the p -hydroxyphenacyl PPG ( p HP) has been designed to react through a photo-Favorskii rearrangement . [ 26 ] [ 27 ] This mechanism yields the carboxylic acid as the exclusive photoproduct; the key benefits of the p HP PPG are the lack of secondary photoreactions and the significantly different UV absorption profiles of the products and reactants. While the quantum yield of the p -hydroxyphenacyl PPG is generally in the 0.1–0.4 range, it can increase to near unity when releasing a good leaving group such as a tosylate . The photoextrusion of the leaving group from the p HP PPG is so effective that it also releases even poor nucleofuges such as amines (with the quantum yield in the 0.01–0.5 range, dependent on solution pH ). [ 28 ] Additionally, photorelease occurs on the nanosecond timescale, with k release > 10 8 s −1 . [ 2 ] The o -hydroxyphenacyl PPG has been introduced as an alternative with its absorption band shifted closer to the visible region; however, it has slightly lower quantum yields of deprotection (generally 0.1–0.3) due to excited-state proton transfer being available as an alternative deactivation pathway. [ 29 ]
The phenacyl moiety itself contains one chiral carbon atom in the backbone. Although the protected group ( leaving group ) is not directly attached to this chiral carbon atom, it has been shown to work as a chiral auxiliary , directing the approach of a diene to a dienophile in a stereoselective thermal Diels–Alder reaction . [ 30 ] The auxiliary is then removed simply upon irradiation with UV light .
Another family of carbonyl-based PPGs exists that is structurally like the phenacyl motif, but which reacts through a separate mechanism. [ 31 ] [ 32 ] [ 33 ] As the name suggests, these PPGs react through abstraction of the carbonyl's γ-hydrogen. The compound is then able to undergo a photoenolization, which is mechanistically like a keto-enol tautomerization . From the enol form, the compound can finally undergo a ground-state transformation that releases the substrate. The quantum yield of this mechanism directly corresponds to the ability of the protected substrate to be a good leaving group . For good leaving groups, the rate-determining step is either hydrogen abstraction or isomerization ; however, if the substrate is a poor leaving group, release is the rate-determining step.
Since Barltrop and Schofield first demonstrated the use of a benzyl-based PPG, [ 4 ] structural variations have focused on substitution to the benzene ring, as well as extension of the aromatic core. For example, insertion of a m,m’-dimethoxy substituent was shown to increase the chemical yield ~75% due to what has been termed the “excited state meta effect.” [ 2 ] [ 34 ] [ 35 ] However, this substitution is only able to release good leaving groups such as carbamates and carboxylates. Additionally, the addition of an o -hydroxy group enables the release of alcohols , phenols and carboxylic acids due to the proximity of the phenolic hydroxy to the benzylic leaving group. [ 36 ] [ 37 ] Finally, the carbon skeleton has been expanded to include PPGs based on naphthalene , [ 38 ] anthracene , [ 39 ] phenanthrene , [ 40 ] pyrene [ 41 ] and perylene [ 42 ] cores, resulting in varied chemical and quantum yields, as well as irradiation wavelengths and times.
Despite their many advantages, the use of PPGs in total syntheses are relatively rare. [ 43 ] Nevertheless, PPGs’ "orthogonality" to common synthetic reagents, as well as the possibility of conducting a "traceless reagent process", has proven useful in natural product synthesis. Two examples include the syntheses of ent-Fumiquinazoline [ 44 ] and (-)- diazonamide A . [ 45 ] The syntheses required irradiation at 254 and 300 nm, respectively.
Protecting a substrate with a PPG is commonly referred to as "photocaging." This term is especially popular in biological systems. For example, Ly et al. developed a p -iodobenzoate-based photocaged reagent, which undergoes homolytic photocleavage of the C–I bond. [ 46 ] They found that the reaction could occur with excellent yields, and with a half-life of 2.5 minutes when a 15 W 254 nm light source was used. The resulting biomolecular radicals are necessary in many enzymatic processes. As a second example, researchers synthesized a cyclopropene-modified glutamate photocaged with a 2-nitroveratrol-based PPG. As glutamate is an excitatory amino acid neurotransmitter , the aim was to develop a bioorthogonal probe for glutamate in vivo . [ 47 ] In a final example, Venkatesh et al. demonstrated the use of a PPG-based photocaged therapeutic. [ 48 ] Their prodrug , which released one equivalent of caffeic acid and chlorambucil upon phototriggering, showed reasonable biocompatibility , cellular uptake and photoregulated drug release in vitro .
During the 1980s, AT&T Bell Laboratories explored the use of nitrobenzyl-based PPGs as photoresists . [ 49 ] [ 50 ] [ 51 ] [ 52 ] Over the course of the decade, they developed a deep UV positive-tone photoresist where the protected substrate was added to a copolymer of poly(methyl methacrylate) and poly(methacrylic acid) . Initially, the blend was insoluble. However, upon exposure to 260 ± 20 nm light, the PPG would be removed yielding 2-nitrosobenzaldehyde and a carboxylic acid that was soluble in aqueous base.
When covalently attached to a surface, PPGs do not exhibit any surface-induced properties (i.e. they behave as they do in solution, and do not gain new properties from their proximity to a surface). [ 53 ] Consequently, PPGs can be patterned on a surface and removed in a manner analogous to lithography to create a multifunctionalized surface. [ 54 ] This process was first reported by Solas in 1991; [ 55 ] protected nucleotides were attached to a surface and spatially resolved single-stranded polynucleotides were generated in a step-wise “grafting from” method. In separate studies, there have been multiple reports of using PPGs to enable the selective separation of blocks within block copolymers to expose fresh surfaces. [ 56 ] [ 57 ] [ 58 ] Furthermore, this surface patterning method has since been extended to proteins. [ 59 ] [ 60 ] Caged etching agents (such as hydrogen fluoride protected with 4-hydroxyphenacyl) allow only surfaces exposed to light to be etched. [ 61 ]
Various PPGs, often featuring the 2-nitrobenzyl motif, have been used to generate numerous gels. [ 54 ] In one example, researchers incorporated PPGs into a silica -based sol-gel . [ 62 ] In a second example, a hydrogel was synthesized to include protected Ca 2+ ions. [ 63 ] [ 64 ] Finally, PPGs have been utilized to cross-link numerous photodegradable polymers , which have featured linear, multi-dimensional network, dendrimer, and branched structures. [ 58 ] [ 65 ] [ 66 ] [ 67 ] [ 68 ]
Photolithography (also known as optical lithography ) is a process used in the manufacturing of integrated circuits . It involves using light to transfer a pattern onto a substrate, typically a silicon wafer .
The process begins with a photosensitive material, called a photoresist , being applied to the substrate. A photomask that contains the desired pattern is then placed over the photoresist. Light is shone through the photomask, exposing the photoresist in certain areas. The exposed areas undergo a chemical change, making them either soluble or insoluble in a developer solution. After development, the pattern is transferred onto the substrate through etching , chemical vapor deposition , or ion implantation processes.
Ultraviolet (UV) light is typically used. [ 1 ]
Photolithography processes can be classified according to the type of light used, including ultraviolet lithography, deep ultraviolet lithography, extreme ultraviolet lithography (EUVL) , and X-ray lithography . The wavelength of light used determines the minimum feature size that can be formed in the photoresist.
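The relationship between wavelength and minimum feature size is conventionally summarized by the Rayleigh criterion, CD = k1·λ/NA, where NA is the numerical aperture of the projection optics and k1 is a process-dependent factor. The k1 and NA values below are representative assumptions, not figures from this article:

```python
# Rayleigh criterion for the minimum printable feature (critical
# dimension, CD): CD = k1 * wavelength / NA. The k1 and NA values
# below are representative of production systems and are assumed
# for illustration.
def critical_dimension_nm(wavelength_nm, numerical_aperture, k1):
    return k1 * wavelength_nm / numerical_aperture

# ArF immersion scanner: 193 nm light, NA ~1.35, aggressive k1 ~0.28
print(round(critical_dimension_nm(193, 1.35, 0.28), 1))  # ~40.0 nm

# EUV scanner: 13.5 nm light, NA ~0.33, relaxed k1 ~0.4
print(round(critical_dimension_nm(13.5, 0.33, 0.4), 1))  # ~16.4 nm
```

The comparison shows why the industry moved to shorter wavelengths: even with much less aggressive process factors, EUV prints far smaller features than deep-UV immersion lithography.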
Photolithography is the most common method for the semiconductor fabrication of integrated circuits ("ICs" or "chips"), such as solid-state memories and microprocessors . It can create extremely small patterns, down to a few nanometers in size. It provides precise control of the shape and size of the objects it creates. It can create patterns over an entire wafer in a single step, quickly and with relatively low cost. In complex integrated circuits, a wafer may go through the photolithographic cycle as many as 50 times. It is also an important technique for microfabrication in general, such as the fabrication of microelectromechanical systems . However, photolithography cannot be used to produce masks on surfaces that are not perfectly flat. And, like all chip manufacturing processes, it requires extremely clean operating conditions.
Photolithography is a subclass of microlithography , the general term for processes that generate patterned thin films. Other technologies in this broader class include the use of steerable electron beams , or more rarely, nanoimprinting , interference , magnetic fields , or scanning probes . On a broader level, it may compete with directed self-assembly of micro- and nanostructures. [ 2 ]
Photolithography shares some fundamental principles with photography in that the pattern in the photoresist is created by exposing it to light — either directly by projection through a lens , or by illuminating a mask placed directly over the substrate, as in contact printing . The technique can also be seen as a high precision version of the method used to make printed circuit boards . The name originated from a loose analogy with the traditional photographic method of producing plates for lithographic printing on paper; [ 3 ] however, subsequent stages in the process have more in common with etching than with traditional lithography.
Conventional photoresists typically consist of three components: resin, sensitizer, and solvent.
The root words photo , litho , and graphy all have Greek origins, with the meanings 'light', 'stone' and 'writing' respectively. As suggested by the name compounded from them, photolithography is a printing method (originally based on the use of limestone printing plates) in which light plays an essential role.
In the 1820s, Nicephore Niepce invented a photographic process that used Bitumen of Judea , a natural asphalt, as the first photoresist . A thin coating of the bitumen on a sheet of metal, glass or stone became less soluble where it was exposed to light; the unexposed parts could then be rinsed away with a suitable solvent, baring the material beneath, which was then chemically etched in an acid bath to produce a printing plate. The light-sensitivity of bitumen was very poor and very long exposures were required, but despite the later introduction of more sensitive alternatives, its low cost and superb resistance to strong acids prolonged its commercial life into the early 20th century.
In 1940, Oskar Süß created a positive photoresist by using diazonaphthoquinone , which worked in the opposite manner: the coating was initially insoluble and was rendered soluble where it was exposed to light. [ 4 ] In 1954, Louis Plambeck Jr. developed the Dycryl polymeric letterpress plate, which made the platemaking process faster. [ 5 ] Development of photoresists used to be carried out in batches of wafers (batch processing) dipped into a bath of developer, but modern process offerings do development one wafer at a time (single wafer processing) to improve process control. [ 6 ]
In 1957 Jules Andrus patented a photolithographic process for semiconductor fabrication while working at Bell Labs. [ 7 ] [ 8 ] At the same time, Moe Abramson and Stanislaus Danko of the US Army Signal Corps developed a technique for printing circuits. [ 8 ]
In 1952, the U.S. military assigned Jay W. Lathrop and James R. Nall at the National Bureau of Standards (later the U.S. Army Diamond Ordnance Fuze Laboratory , which eventually merged to form the now-present Army Research Laboratory ) with the task of finding a way to reduce the size of electronic circuits in order to better fit the necessary circuitry in the limited space available inside a proximity fuze . [ 9 ] Inspired by the application of photoresist, a photosensitive liquid used to mark the boundaries of rivet holes in metal aircraft wings, Nall determined that a similar process can be used to protect the germanium in the transistors and even pattern the surface with light. [ 10 ] During development, Lathrop and Nall were successful in creating a 2D miniaturized hybrid integrated circuit with transistors using this technique. [ 9 ] In 1958, during the IRE Professional Group on Electron Devices (PGED) conference in Washington, D.C., they presented the first paper to describe the fabrication of transistors using photographic techniques and adopted the term "photolithography" to describe the process, marking the first published use of the term to describe semiconductor device patterning. [ 10 ] [ 3 ]
Despite the fact that photolithography of electronic components concerns etching metal duplicates, rather than etching stone to produce a "master" as in conventional lithographic printing, Lathrop and Nall chose the term "photolithography" over "photoetching" because the former sounded "high tech." [ 9 ] A year after the conference, Lathrop and Nall's patent on photolithography was formally approved on June 9, 1959. [ 11 ] Photolithography would later contribute to the development of the first semiconductor ICs as well as the first microchips. [ 9 ]
A single iteration of photolithography combines several steps in sequence. Modern cleanrooms use automated, robotic wafer track systems to coordinate the process. [ 12 ] The procedure described here omits some advanced treatments, such as thinning agents. [ 13 ] The photolithography process is carried out by the wafer track and stepper/scanner, and the wafer track system and the stepper/scanner are installed side by side. Wafer track systems are also known as wafer coater/developer systems, which perform the same functions. [ 14 ] [ 15 ] Wafer tracks are named after the "tracks" used to carry wafers inside the machine, [ 16 ] but modern machines do not use tracks. [ 15 ]
If organic or inorganic contaminations are present on the wafer surface, they are usually removed by wet chemical treatment, e.g. the RCA clean procedure based on solutions containing hydrogen peroxide . Other solutions made with trichloroethylene, acetone or methanol can also be used to clean. [ 17 ]
The wafer is initially heated to a temperature sufficient to drive off any moisture that may be present on the wafer surface; 150 °C for ten minutes is sufficient. Wafers that have been in storage must be chemically cleaned to remove contamination . A liquid or gaseous "adhesion promoter", such as bis(trimethylsilyl)amine ("hexamethyldisilazane", HMDS) , is applied to promote adhesion of the photoresist to the wafer. The surface layer of silicon dioxide on the wafer reacts with HMDS to form tri-methylated silicon dioxide, a highly water-repellent layer not unlike the layer of wax on a car's paint. This water-repellent layer prevents the aqueous developer from penetrating between the photoresist layer and the wafer's surface, thus preventing so-called lifting of small photoresist structures in the (developing) pattern. To ensure development of the image, the wafer is best covered and dried on a hot plate stabilized at 120 °C. [ 18 ]
The wafer is covered with liquid photoresist by spin coating , in which the wafer is spun at high speed. The top layer of resist is quickly ejected from the wafer's edge while the bottom layer still creeps slowly radially along the wafer. In this way, any 'bump' or 'ridge' of resist is removed, leaving a very flat layer. However, viscous films may result in large edge beads, which are areas at the edges of the wafer or photomask [ 19 ] with increased resist thickness, and whose planarization has physical limits. [ 20 ] Often, edge bead removal (EBR) is carried out, usually with a nozzle, to remove this extra resist, as it could otherwise cause particulate contamination. [ 21 ] [ 22 ] [ 23 ] Final thickness is also determined by the evaporation of liquid solvents from the resist. For very small, dense features (< 125 nm or so), lower resist thicknesses (< 0.5 microns) are needed to overcome collapse effects at high aspect ratios; typical aspect ratios are < 4:1.
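The dependence of final resist thickness on spin speed is often captured by an empirical inverse-square-root model; both the exponent and the constant below are assumptions chosen for illustration (real values come from the resist datasheet):

```python
# Common empirical model for spin-coated resist thickness:
# thickness = K * omega**(-1/2), where omega is the spin speed (rpm)
# and K depends on resist viscosity and solvent content. K here is an
# illustrative constant chosen to give 1000 nm at 3000 rpm.
K_NM_SQRT_RPM = 1.0e3 * 3000 ** 0.5

def resist_thickness_nm(spin_speed_rpm):
    return K_NM_SQRT_RPM * spin_speed_rpm ** -0.5

print(round(resist_thickness_nm(3000)))  # 1000 nm
print(round(resist_thickness_nm(6000)))  # ~707 nm: doubling the speed thins by 1/sqrt(2)
```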
The photoresist-coated wafer is then prebaked to drive off excess photoresist solvent, typically at 90 to 100 °C for 30 to 60 seconds on a hotplate. [ 24 ] A BARC coating (Bottom Anti-Reflectant Coating) may be applied before the photoresist is applied, to avoid reflections from occurring under the photoresist and to improve the photoresist's performance at smaller semiconductor nodes such as 45 nm and below. [ 25 ] [ 26 ] [ 27 ] Top Anti-Reflectant Coatings (TARCs) also exist. [ 28 ] EUV lithography is unique in the sense it allows for the use of photoresists with metal oxides. [ 29 ]
After prebaking, the photoresist is exposed to a pattern of intense light. The exposure to light causes a chemical change that allows some of the photoresist to be removed by a special solution, called "developer" by analogy with photographic developer . Positive photoresist, the most common type, becomes soluble in the developer when exposed; with negative photoresist, unexposed regions are soluble in the developer.
A post-exposure bake (PEB) is performed before developing, typically to help reduce standing wave phenomena caused by the destructive and constructive interference patterns of the incident light. In deep ultraviolet lithography, chemically amplified resist (CAR) chemistry is used. This resist is much more sensitive to PEB time, temperature, and delay, as the resist works by creating acid when it is hit by photons, and then undergoes an "exposure" reaction (creating acid, making the polymer soluble in the basic developer, and performing a chemical reaction catalyzed by acid) which mostly occurs in the PEB. [ 30 ] [ 31 ]
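The standing-wave fringes mentioned above have a period of λ/(2n) in the resist, where n is the resist's refractive index at the exposure wavelength; the n ≈ 1.7 used below is a typical assumed value, not one from the article:

```python
# Standing waves in the resist form intensity fringes with a period of
# wavelength / (2 * n), where n is the resist's refractive index at the
# exposure wavelength. n = 1.7 is an assumed typical value.
def standing_wave_period_nm(wavelength_nm, n_resist=1.7):
    return wavelength_nm / (2 * n_resist)

print(round(standing_wave_period_nm(193), 1))  # ~56.8 nm fringes for ArF exposure
print(round(standing_wave_period_nm(365), 1))  # ~107.4 nm fringes for i-line exposure
```

Fringes of a few tens of nanometers are comparable to modern feature sizes, which is why the PEB's diffusion-driven smoothing of the acid profile matters.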
The develop chemistry is delivered on a spinner, much like photoresist. Developers originally often contained sodium hydroxide (NaOH). However, sodium is considered an extremely undesirable contaminant in MOSFET fabrication because it degrades the insulating properties of gate oxides (specifically, sodium ions can migrate in and out of the gate, changing the threshold voltage of the transistor and making it harder or easier to turn the transistor on over time). Metal-ion-free developers such as tetramethylammonium hydroxide (TMAH) are now used. The temperature of the developer might be tightly controlled using jacketed (dual walled) hoses to within 0.2 °C. [ 6 ] The nozzle that coats the wafer with developer may influence the amount of developer that is necessary. [ 32 ] [ 15 ]
The resulting wafer is then "hard-baked" if a non-chemically amplified resist was used, typically at 120 to 180 °C [ 33 ] for 20 to 30 minutes. The hard bake solidifies the remaining photoresist, to make a more durable protecting layer in future ion implantation , wet chemical etching , or plasma etching .
From preparation until this step, the photolithography procedure has been carried out by two machines: the photolithography stepper or scanner, and the coater/developer. The two machines are usually installed side by side, and are "linked" together. [ 34 ] [ 27 ] [ 35 ]
In etching, a liquid ("wet") or plasma ("dry") chemical agent removes the uppermost layer of the substrate in the areas that are not protected by photoresist. In semiconductor fabrication , dry etching techniques are generally used, as they can be made anisotropic , in order to avoid significant undercutting of the photoresist pattern. This is essential when the width of the features to be defined is similar to or less than the thickness of the material being etched (i.e. when the aspect ratio approaches unity). Wet etch processes are generally isotropic in nature, which is often indispensable for microelectromechanical systems , where suspended structures must be "released" from the underlying layer.
The development of low-defectivity anisotropic dry-etch processes has enabled the ever-smaller features defined photolithographically in the resist to be transferred to the substrate material.
After a photoresist is no longer needed, it must be removed from the substrate. This usually requires a liquid "resist stripper", which chemically alters the resist so that it no longer adheres to the substrate. Alternatively, the photoresist may be removed by a plasma containing oxygen , which oxidizes it. This process is called plasma ashing and resembles dry etching. Another removal method uses 1-methyl-2-pyrrolidone (NMP) as a solvent to dissolve the photoresist. Once the resist has dissolved, the solvent can be removed by heating to 80 °C without leaving any residue. [ 36 ]
Exposure systems typically produce an image on the wafer using a photomask . The photomask blocks light in some areas and lets it pass in others. ( Maskless lithography projects a precise beam directly onto the wafer without using a mask, but it is not widely used in commercial processes.) Exposure systems may be classified by the optics that transfer the image from the mask to the wafer.
Photolithography produces better thin film transistor structures than printed electronics , due to smoother printed layers, less wavy patterns, and more accurate drain-source electrode registration. [ 37 ]
A contact aligner, the simplest exposure system, puts a photomask in direct contact with the wafer [ 38 ] and exposes it to a uniform light. A proximity aligner puts a small gap of around 5 microns between the photomask and wafer. [ 38 ] In both cases, the mask covers the entire wafer, and simultaneously patterns every die.
Contact printing/lithography is liable to damage both the mask and the wafer, [ 38 ] and this was the primary reason it was abandoned for high volume production. Both contact and proximity lithography require the light intensity to be uniform across an entire wafer, and the mask to align precisely to features already on the wafer. As modern processes use increasingly large wafers, these conditions become increasingly difficult.
Research and prototyping processes often use contact or proximity lithography, because it uses inexpensive hardware and can achieve high optical resolution. The resolution in proximity lithography is approximately the square root of the product of the wavelength and the gap distance. Hence, except for projection lithography (see below), contact printing offers the best resolution, because its gap distance is approximately zero (neglecting the thickness of the photoresist itself). In addition, nanoimprint lithography may revive interest in this familiar technique, especially since the cost of ownership is expected to be low; however, the shortcomings of contact printing discussed above remain as challenges.
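The proximity-resolution relation above can be sketched numerically. This is an illustrative estimate, not a tool calibration; the function name and the choice of an i-line source with the 5-micron gap quoted earlier are assumptions for the example.

```python
import math

def proximity_resolution(wavelength_m: float, gap_m: float) -> float:
    """Approximate minimum resolvable feature in proximity printing:
    roughly the square root of (wavelength * mask-to-wafer gap)."""
    return math.sqrt(wavelength_m * gap_m)

# i-line mercury lamp (365 nm) with a typical 5-micron proximity gap
res = proximity_resolution(365e-9, 5e-6)
print(f"proximity resolution ~ {res * 1e6:.2f} um")  # ~1.35 um
```

For contact printing the gap collapses toward the resist thickness, which is why contact printing resolves much finer features than proximity printing.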
Very-large-scale integration (VLSI) lithography uses projection systems. Unlike contact or proximity masks, which cover an entire wafer, projection masks (known as "reticles") contain only one die or an array of dies (known as a "field"). Projection exposure systems (steppers or scanners) project the mask onto the wafer many times, changing the position of the wafer with each exposure, until the wafer is fully patterned. The difference between steppers and scanners is that, during exposure, a scanner moves the photomask and the wafer simultaneously, while a stepper moves only the wafer. Mask aligners, which preceded steppers, [ 39 ] [ 40 ] move neither the photomask nor the wafer during exposure and use masks that cover the entire wafer. Immersion lithography scanners use a layer of ultrapure water between the lens and the wafer to increase resolution. An alternative to photolithography is nanoimprint lithography . The maximum size of the image that can be projected onto a wafer is known as the reticle limit.
The image for the mask originates from a computerized data file. This data file is converted to a series of polygons and written onto a square of fused quartz substrate covered with a layer of chromium using a photolithographic process. A laser beam (laser writer) or a beam of electrons (e-beam writer) is used to expose the pattern defined by the data file and travels over the surface of the substrate in either a vector or raster scan manner. Where the photoresist on the mask is exposed, the chrome can be etched away, leaving a clear path for the illumination light in the stepper/scanner system to travel through.
The ability to project a clear image of a small feature onto the wafer is limited by the wavelength of the light that is used, and the ability of the reduction lens system to capture enough diffraction orders from the illuminated mask. Current state-of-the-art photolithography tools use deep ultraviolet (DUV) light from excimer lasers with wavelengths of 248 nm (KrF) and 193 nm (ArF) (the dominant lithography technology today is thus also called " excimer laser lithography "), which allow minimum feature sizes down to 50 nm. Excimer laser lithography has thus played a critical role in the continued advance of Moore's law for the last 20 years (see below [ 41 ] ).
The minimum feature size that a projection system can print is given approximately by:

CD = k1 · λ / NA
where CD is the minimum feature size (also called the critical dimension , target design rule , or " half-pitch "), λ is the wavelength of light used, and NA is the numerical aperture of the lens as seen from the wafer.

k1 (commonly called the k1 factor ) is a coefficient that encapsulates process-related factors and typically equals 0.4 for production. (k1 is actually a function of process factors such as the angle of incident light on a reticle and the incident light intensity distribution; it is fixed per process.) The minimum feature size can be reduced by decreasing this coefficient through computational lithography .
According to this equation, minimum feature sizes can be decreased by decreasing the wavelength and increasing the numerical aperture (to achieve a tighter focused beam and a smaller spot size). However, this design method runs into a competing constraint. In modern systems, the depth of focus is also a concern:

DOF = k2 · λ / NA²
Here, k2 is another process-related coefficient. The depth of focus restricts the thickness of the photoresist and the depth of the topography on the wafer. Chemical mechanical polishing is often used to flatten topography before high-resolution lithographic steps.
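The two Rayleigh relations, CD = k1·λ/NA and DOF = k2·λ/NA², can be evaluated together to see the tradeoff. A minimal sketch, using the k1 = 0.4 production value quoted above; the k2 = 0.5 value and the ArF immersion parameters (193 nm, NA = 1.35) are illustrative assumptions.

```python
def critical_dimension(k1: float, wavelength_nm: float, na: float) -> float:
    """Minimum printable feature size: CD = k1 * lambda / NA."""
    return k1 * wavelength_nm / na

def depth_of_focus(k2: float, wavelength_nm: float, na: float) -> float:
    """Depth of focus: DOF = k2 * lambda / NA^2."""
    return k2 * wavelength_nm / na ** 2

# ArF immersion scanner: 193 nm, NA = 1.35 (water immersion); k2 = 0.5 assumed
cd = critical_dimension(0.4, 193.0, 1.35)
dof = depth_of_focus(0.5, 193.0, 1.35)
print(f"CD  ~ {cd:.0f} nm")   # ~57 nm
print(f"DOF ~ {dof:.0f} nm")  # ~53 nm
```

Raising NA shrinks CD only linearly while shrinking DOF quadratically, which is one reason flattening the wafer topography becomes critical at high numerical apertures.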
From classical optics, k1 = 0.61 by the Rayleigh criterion . [ 42 ] The image of two points separated by less than 1.22 λ/NA will not maintain that separation but will appear larger due to the interference between the Airy discs of the two points. The distance between two features can also change with defocus, however. [ 43 ]
Resolution is also nontrivial in a two-dimensional context. For example, a tighter line pitch results in wider gaps (in the perpendicular direction) between the ends of the lines. [ 44 ] [ 45 ] More fundamentally, straight edges become rounded for shortened rectangular features, where both x and y pitches are near the resolution limit. [ 46 ] [ 47 ] [ 48 ] [ 49 ]
For advanced nodes, blur, rather than wavelength, becomes the key resolution-limiting factor. The minimum pitch is given by the blur (sigma) divided by 0.14. [ 50 ] Blur is affected by dose [ 51 ] [ 52 ] [ 53 ] as well as by quantum yield, [ 54 ] leading to a tradeoff with stochastic defects in the case of EUV. [ 55 ] [ 56 ] [ 57 ]
As light consists of photons , at low doses the image quality ultimately depends on the photon number. This affects the use of extreme ultraviolet lithography (EUVL), which is limited to low doses on the order of 20 photons/nm². [ 58 ] This is because a shorter wavelength carries higher energy per photon, so the same energy dose is delivered by fewer photons. With fewer photons making up the image, there is noise in the edge placement. [ 59 ]
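The photon-counting argument can be quantified with Poisson statistics. The sketch below is illustrative: only the 20 photons/nm² dose comes from the text, while the 10 nm × 10 nm area element is a hypothetical choice.

```python
import math

def relative_shot_noise(dose_per_nm2: float, area_nm2: float) -> float:
    """Relative dose fluctuation sigma/N = 1/sqrt(N) for Poisson-distributed
    photon arrivals, given a mean dose and the area it is collected over."""
    n_photons = dose_per_nm2 * area_nm2
    return 1.0 / math.sqrt(n_photons)

# EUV dose of ~20 photons/nm^2 over a hypothetical 10 nm x 10 nm element
noise = relative_shot_noise(20.0, 10.0 * 10.0)  # 2000 photons on average
print(f"relative dose noise ~ {noise * 100:.1f}%")  # ~2.2%
```

A few-percent random dose fluctuation at every feature edge is the mechanism behind the edge-placement noise described above.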
The stochastic effects would become more complicated with larger pitch patterns with more diffraction orders and using more illumination source points. [ 60 ] [ 61 ]
Secondary electrons in EUV lithography aggravate the stochastic characteristics. [ 62 ]
Historically, photolithography has used ultraviolet light from gas-discharge lamps using mercury , sometimes in combination with noble gases such as xenon . These lamps produce light across a broad spectrum with several strong peaks in the ultraviolet range. This spectrum is filtered to select a single spectral line . From the early 1960s through the mid-1980s, Hg lamps had been used in lithography for their spectral lines at 436 nm ("g-line"), 405 nm ("h-line") and 365 nm ("i-line"). However, with the semiconductor industry's need for both higher resolution (to produce denser and faster chips) and higher throughput (for lower costs), lamp-based lithography tools were no longer able to meet the industry's high-end requirements.
This challenge was overcome in 1982 when excimer laser lithography was proposed and demonstrated at IBM by Kanti Jain. [ 63 ] [ 64 ] [ 65 ] [ 66 ] Excimer laser lithography machines (steppers and scanners) became the primary tools in microelectronics production, and have enabled minimum feature sizes in chip manufacturing to shrink from 800 nanometers in 1990 to 7 nanometers in 2018. [ 67 ] [ 68 ] From an even broader scientific and technological perspective, in the 50-year history of the laser since its first demonstration in 1960, the invention and development of excimer laser lithography has been recognized as a major milestone. [ 69 ] [ 70 ] [ 71 ]
The commonly used deep ultraviolet excimer lasers in lithography systems are the krypton fluoride (KrF) laser at 248 nm wavelength and the argon fluoride laser (ArF) at 193 nm wavelength. The primary manufacturers of excimer laser light sources in the 1980s were Lambda Physik (now part of Coherent, Inc.) and Lumonics. Since the mid-1990s Cymer Inc. has become the dominant supplier of excimer laser sources to the lithography equipment manufacturers, with Gigaphoton Inc. as their closest rival. Generally, an excimer laser is designed to operate with a specific gas mixture; therefore, changing wavelength is not a trivial matter, as the method of generating the new wavelength is completely different, and the absorption characteristics of materials change. For example, air begins to absorb significantly around the 193 nm wavelength; moving to sub-193 nm wavelengths would require installing vacuum pump and purge equipment on the lithography tools (a significant challenge). An inert gas atmosphere can sometimes be used as a substitute for a vacuum, to avoid the need for hard plumbing. Furthermore, insulating materials such as silicon dioxide , when exposed to photons with energy greater than the band gap, release free electrons and holes which subsequently cause adverse charging.
Optical lithography has been extended to feature sizes below 50 nm using the 193 nm ArF excimer laser and liquid immersion techniques. Also termed immersion lithography , this enables the use of optics with numerical apertures exceeding 1.0. The liquid used is typically ultra-pure, deionised water, which provides for a refractive index above that of the usual air gap between the lens and the wafer surface. The water is continually circulated to eliminate thermally-induced distortions. Water will only allow NA' s of up to ~1.4, but fluids with higher refractive indices would allow the effective NA to be increased further.
Experimental tools using the 157 nm wavelength from the F2 excimer laser in a manner similar to current exposure systems have been built; these systems used calcium fluoride (CaF2) lenses. [ 73 ] [ 74 ] They were once targeted to succeed 193 nm lithography at the 65 nm feature-size node but have now been all but eliminated by the introduction of immersion lithography, owing to persistent technical problems with the 157 nm technology and economic considerations that provided strong incentives for the continued use of 193 nm excimer laser lithography. Immersion lithography at 157 nm was also explored. [ 75 ] High-index immersion lithography is the newest extension of 193 nm lithography to be considered; in 2006, features smaller than 30 nm were demonstrated by IBM using this technique. [ 72 ]
UV excimer lasers have been demonstrated down to about 126 nm (for Ar 2 *). Mercury arc lamps are designed to maintain a steady DC discharge at 50 to 150 volts, but excimer lasers offer higher resolution. Excimer lasers are gas-based light sources, usually filled with noble gases and halogens (Kr, Ar, Xe, F and Cl) that are excited by an electric field. The higher the frequency of the light, the greater the resolution of the image. KrF lasers are able to operate at a pulse repetition rate of 4 kHz . In addition to running at a higher repetition rate, excimer lasers are compatible with more advanced machines than mercury arc lamps are. They are also able to operate from greater distances (up to 25 meters), maintaining their accuracy with a series of mirrors and antireflective-coated lenses. By setting up multiple lasers and mirrors, energy loss is minimized, and since the lenses are coated with antireflective material, the light intensity remains nearly the same from the moment it leaves the laser to when it hits the wafer. [ 76 ]
Lasers have been used to indirectly generate non-coherent extreme UV (EUV) light at 13.5 nm for extreme ultraviolet lithography . The EUV light is not emitted by the laser, but rather by a tin or xenon plasma which is excited by an excimer or CO 2 laser. [ 77 ] This technique does not require a synchrotron, and EUV sources, as noted, do not produce coherent light. However, vacuum systems and a number of novel technologies (including much higher EUV energies than are now produced) are needed to work with light at the edge of the X-ray spectrum (which begins at 10 nm). As of 2020, EUV is in mass-production use by leading-edge foundries such as TSMC and Samsung.
Theoretically, an alternative light source for photolithography, especially if and when wavelengths continue to decrease to extreme UV or X-ray, is the free-electron laser (or one might say xaser for an X-ray device). Free-electron lasers can produce high quality beams at arbitrary wavelengths.
Visible and infrared femtosecond lasers were also applied for lithography. In that case photochemical reactions are initiated by multiphoton absorption. Usage of these light sources have a lot of benefits, including possibility to manufacture true 3D objects and process non-photosensitized (pure) glass-like materials with superb optical resiliency. [ 78 ]
Photolithography has been defeating predictions of its demise for many years. For instance, by the early 1980s, many in the semiconductor industry had come to believe that features smaller than 1 micron could not be printed optically. Modern techniques using excimer laser lithography already print features with dimensions a fraction of the wavelength of light used – an amazing optical feat. New techniques such as immersion lithography , dual-tone resist and multiple patterning continue to improve the resolution of 193 nm lithography. Meanwhile, current research is exploring alternatives to conventional UV, such as electron beam lithography , X-ray lithography , extreme ultraviolet lithography and ion projection lithography . Extreme ultraviolet lithography has entered mass production use, as of 2018 by Samsung [ 79 ] and other manufacturers have followed suit.
Massively parallel electron beam lithography has been explored as an alternative to photolithography and was tested by TSMC, but it did not succeed; the technology of MAPPER, the main developer of the technique, was purchased by ASML. Electron beam lithography was at one point used in chip production by IBM, [ 80 ] [ 81 ] but today it is used only in niche applications such as photomask production. [ 82 ] [ 83 ] [ 84 ] [ 85 ] [ 86 ]
A 2001 NIST publication reported that the photolithography process constituted about 35% of total wafer processing costs. [ 87 ] : 11
In 2021, the photolithography industry was valued at over 8 billion USD. [ 88 ]
Photoluminescence (abbreviated as PL ) is light emission from any form of matter after the absorption of photons (electromagnetic radiation). [ 1 ] It is one of many forms of luminescence (light emission) and is initiated by photoexcitation (i.e. photons that excite electrons to a higher energy level in an atom), hence the prefix photo- . [ 2 ] Following excitation, various relaxation processes typically occur in which other photons are re-radiated. Time periods between absorption and emission may vary, ranging from the femtosecond regime for emission involving free-carrier plasma in inorganic semiconductors [ 3 ] up to milliseconds for phosphorescence processes in molecular systems; under special circumstances the delay may even span minutes or hours.
Observation of photoluminescence at a certain energy can be viewed as an indication that an electron populated an excited state associated with this transition energy.
While this is generally true in atoms and similar systems, correlations and other more complex phenomena also act as sources for photoluminescence in many-body systems such as semiconductors. A theoretical approach to handle this is given by the semiconductor luminescence equations .
Photoluminescence processes can be classified by various parameters such as the energy of the exciting photon with respect to the emission.
Resonant excitation describes a situation in which photons of a particular wavelength are absorbed and equivalent photons are very rapidly re-emitted. This is often referred to as resonance fluorescence . For materials in solution or in the gas phase , this process involves electrons but no significant internal energy transitions involving molecular features of the chemical substance between absorption and emission. In crystalline inorganic semiconductors where an electronic band structure is formed, secondary emission can be more complicated as events may contain both coherent contributions such as resonant Rayleigh scattering where a fixed phase relation with the driving light field is maintained (i.e. energetically elastic processes where no losses are involved), and incoherent contributions (or inelastic modes where some energy channels into an auxiliary loss mode). [ 4 ]
The latter originate, e.g., from the radiative recombination of excitons , Coulomb -bound electron-hole pair states in solids. Resonance fluorescence may also show significant quantum optical correlations. [ 4 ] [ 5 ] [ 6 ]
More processes may occur when a substance undergoes internal energy transitions before re-emitting the energy from the absorption event. Electrons change energy states by either resonantly gaining energy from absorption of a photon or losing energy by emitting photons. In chemistry -related disciplines, one often distinguishes between fluorescence and phosphorescence . The former is typically a fast process, yet some amount of the original energy is dissipated, so that the re-emitted photons have lower energy than the absorbed excitation photons. The re-emitted photon in this case is said to be red-shifted, referring to the reduced energy it carries following this loss (as the Jablonski diagram shows). For phosphorescence, electrons which have absorbed photons undergo intersystem crossing , entering a state with altered spin multiplicity (see term symbol ), usually a triplet state . Once the excited electron is transferred into this triplet state, the transition (relaxation) back to the lower singlet-state energies is quantum mechanically forbidden, meaning that it happens much more slowly than other transitions. The result is a slow process of radiative transition back to the singlet state, sometimes lasting minutes or hours. This is the basis for "glow in the dark" substances.
Photoluminescence is an important technique for measuring the purity and crystalline quality of semiconductors such as GaN and InP and for quantification of the amount of disorder present in a system. [ 7 ]
Time-resolved photoluminescence (TRPL) is a method where the sample is excited with a light pulse and then the decay in photoluminescence with respect to time is measured. This technique is useful for measuring the minority carrier lifetime of III-V semiconductors like gallium arsenide ( GaAs ).
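As a sketch of how a TRPL trace is reduced to a lifetime, the snippet below fits a single-exponential decay by linear regression on the logarithm of the intensity. The 5 ns lifetime and the synthetic noiseless trace are assumptions chosen purely for illustration.

```python
import numpy as np

def fit_lifetime_ns(t_ns: np.ndarray, intensity: np.ndarray) -> float:
    """Extract a single-exponential decay lifetime (in ns) from TRPL data
    via a linear fit of log(intensity) versus time."""
    slope, _ = np.polyfit(t_ns, np.log(intensity), 1)
    return -1.0 / slope

# Synthetic decay with an assumed 5 ns minority-carrier lifetime
t = np.linspace(0.0, 20.0, 200)
trace = np.exp(-t / 5.0)
print(f"fitted lifetime = {fit_lifetime_ns(t, trace):.2f} ns")  # 5.00 ns
```

Real traces carry detector noise and often multi-exponential components, so practical analysis typically uses weighted or nonlinear fitting rather than this idealized log-linear fit.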
In a typical PL experiment, a semiconductor is excited with a light-source that provides photons with an energy larger than the bandgap energy.
The incoming light excites a polarization that can be described with the semiconductor Bloch equations . [ 8 ] [ 9 ] Once the photons are absorbed, electrons and holes are formed with finite momenta k in the conduction and valence bands , respectively. The excitations then undergo energy and momentum relaxation towards the band-gap minimum. Typical mechanisms are Coulomb scattering and the interaction with phonons . Finally, the electrons recombine with holes under emission of photons.
Ideal, defect-free semiconductors are many-body systems where the interactions of charge-carriers and lattice vibrations have to be considered in addition to the light-matter coupling. In general, the PL properties are also extremely sensitive to internal electric fields and to the dielectric environment (such as in photonic crystals ) which impose further degrees of complexity. A precise microscopic description is provided by the semiconductor luminescence equations . [ 8 ]
An ideal, defect-free semiconductor quantum well structure is a useful model system to illustrate the fundamental processes in typical PL experiments. The discussion is based on results published in Klingshirn (2012) [ 10 ] and Balkan (1998). [ 11 ]
The fictive model structure for this discussion has two confined quantized electron and two hole subbands , e1, e2 and h1, h2, respectively.

The linear absorption spectrum of such a structure shows the exciton resonances of the first (e1h1) and the second (e2h2) quantum well subbands, as well as the absorption from the corresponding continuum states and from the barrier.
In general, three different excitation conditions are distinguished: resonant, quasi-resonant, and non-resonant. For resonant excitation, the central energy of the laser corresponds to the lowest exciton resonance of the quantum well . No excess energy, or only a negligible amount, is injected into the carrier system. For these conditions, coherent processes contribute significantly to the spontaneous emission. [ 4 ] [ 12 ] The decay of polarization creates excitons directly. The detection of PL is challenging for resonant excitation as it is difficult to discriminate contributions from the excitation, i.e., stray light and diffuse scattering from surface roughness. Thus, speckle and resonant Rayleigh scattering are always superimposed on the incoherent emission.
In case of the non-resonant excitation, the structure is excited with some excess energy. This is the typical situation used in most PL experiments as the excitation energy can be discriminated using a spectrometer or an optical filter . One has to distinguish between quasi-resonant excitation and barrier excitation.
For quasi-resonant conditions, the energy of the excitation is tuned above the ground state but still below the barrier absorption edge , for example, into the continuum of the first subband. The polarization decay for these conditions is much faster than for resonant excitation and coherent contributions to the quantum well emission are negligible. The initial temperature of the carrier system is significantly higher than the lattice temperature due to the surplus energy of the injected carriers. Finally, only the electron-hole plasma is initially created. It is then followed by the formation of excitons. [ 13 ] [ 14 ]
In case of barrier excitation, the initial carrier distribution in the quantum well strongly depends on the carrier scattering between barrier and the well.
Initially, the laser light induces coherent polarization in the sample, i.e., the transitions between electron and hole states oscillate with the laser frequency and a fixed phase. The polarization dephases typically on a sub-100 fs time-scale in case of nonresonant excitation due to ultra-fast Coulomb- and phonon-scattering. [ 15 ]
The dephasing of the polarization leads to the creation of populations of electrons and holes in the conduction and the valence bands, respectively. The lifetime of the carrier populations is rather long, limited by radiative and non-radiative recombination such as Auger recombination . During this lifetime a fraction of electrons and holes may form excitons; this topic is still controversially discussed in the literature. The formation rate depends on the experimental conditions such as lattice temperature and excitation density, as well as on the general material parameters, e.g., the strength of the Coulomb interaction or the exciton binding energy.
The characteristic time-scales are in the range of hundreds of picoseconds in GaAs; [ 13 ] they appear to be much shorter in wide-gap semiconductors . [ 16 ]
Directly after the excitation with short (femtosecond) pulses and the quasi-instantaneous decay of the polarization, the carrier distribution is mainly determined by the spectral width of the excitation, e.g., a laser pulse. The distribution is thus highly non-thermal and resembles a Gaussian distribution , centered at a finite momentum. In the first hundreds of femtoseconds , the carriers are scattered by phonons, or at elevated carrier densities via Coulomb-interaction. The carrier system successively relaxes to the Fermi–Dirac distribution typically within the first picosecond. Finally, the carrier system cools down under the emission of phonons. This can take up to several nanoseconds , depending on the material system, the lattice temperature, and the excitation conditions such as the surplus energy.
Initially, the carrier temperature decreases quickly via emission of optical phonons . This is quite efficient due to the comparatively large energy associated with optical phonons (36 meV or 420 K in GaAs) and their rather flat dispersion, allowing for a wide range of scattering processes under conservation of energy and momentum. Once the carrier temperature decreases below the value corresponding to the optical phonon energy, acoustic phonons dominate the relaxation. Here, cooling is less efficient due to their dispersion and small energies, and the temperature decreases much more slowly beyond the first tens of picoseconds. [ 17 ] [ 18 ] At elevated excitation densities, the carrier cooling is further inhibited by the so-called hot-phonon effect . [ 19 ] The relaxation of a large number of hot carriers leads to a high generation rate of optical phonons which exceeds the decay rate into acoustic phonons. This creates a non-equilibrium "over-population" of optical phonons and thus causes their increased reabsorption by the charge carriers, significantly suppressing any cooling. Thus, the higher the carrier density, the more slowly the system cools.
The emission directly after the excitation is spectrally very broad, yet still centered in the vicinity of the strongest exciton resonance. As the carrier distribution relaxes and cools, the width of the PL peak decreases and the emission energy shifts to match the ground state of the exciton (such as an electron) for ideal samples without disorder. The PL spectrum approaches its quasi-steady-state shape defined by the distribution of electrons and holes. Increasing the excitation density will change the emission spectra. They are dominated by the excitonic ground state for low densities. Additional peaks from higher subband transitions appear as the carrier density or lattice temperature are increased as these states get more and more populated. Also, the width of the main PL peak increases significantly with rising excitation due to excitation-induced dephasing [ 20 ] and the emission peak experiences a small shift in energy due to the Coulomb-renormalization and phase-filling. [ 9 ]
In general, both exciton populations and plasma, uncorrelated electrons and holes, can act as sources for photoluminescence as described in the semiconductor-luminescence equations . Both yield very similar spectral features which are difficult to distinguish; their emission dynamics, however, vary significantly. The decay of excitons yields a single-exponential decay function since the probability of their radiative recombination does not depend on the carrier density. The probability of spontaneous emission for uncorrelated electrons and holes, is approximately proportional to the product of electron and hole populations eventually leading to a non-single-exponential decay described by a hyperbolic function .
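The contrast between the two decay laws can be made concrete. An exciton population decays as n(t) = n0·exp(-t/τ), while bimolecular (plasma) recombination dn/dt = -b·n² integrates to the hyperbolic form n(t) = n0/(1 + b·n0·t). The parameter values below are arbitrary illustrations, not material constants.

```python
import numpy as np

def exciton_decay(t, n0, tau):
    """Excitonic population: single-exponential, density-independent rate."""
    return n0 * np.exp(-t / tau)

def plasma_decay(t, n0, b):
    """Uncorrelated electron-hole plasma: dn/dt = -b*n^2 integrates to
    the hyperbolic form n(t) = n0 / (1 + b*n0*t)."""
    return n0 / (1.0 + b * n0 * t)

t = np.linspace(0.0, 10.0, 6)
print(exciton_decay(t, 1.0, 2.0))  # drops by the same factor each interval
print(plasma_decay(t, 1.0, 0.5))   # fast initial drop, then a slow tail
```

On a semilog plot the exciton curve is a straight line while the plasma curve bends, which is one practical way time-resolved measurements distinguish the two emission sources.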
Real material systems always incorporate disorder. Examples are structural defects [ 21 ] in the lattice or disorder due to variations of the chemical composition. Their treatment is extremely challenging for microscopic theories due to the lack of detailed knowledge about perturbations of the ideal structure. Thus, the influence of the extrinsic effects on the PL is usually addressed phenomenologically. [ 22 ] In experiments, disorder can lead to localization of carriers and hence drastically increase the photoluminescence life times as localized carriers cannot as easily find nonradiative recombination centers as can free ones.
Researchers from the King Abdullah University of Science and Technology (KAUST) have studied the photoinduced entropy (i.e. thermodynamic disorder) of InGaN / GaN p-i-n double-heterostructure and AlGaN nanowires using temperature-dependent photoluminescence. [ 7 ] [ 23 ] They defined the photoinduced entropy as a thermodynamic quantity that represents the unavailability of a system's energy for conversion into useful work due to carrier recombination and photon emission. They have also related the change in entropy generation to the change in photocarrier dynamics in the nanowire active regions using results from time-resolved photoluminescence study. They hypothesized that the amount of generated disorder in the InGaN layers eventually increases as the temperature approaches room temperature because of the thermal activation of surface states , while an insignificant increase was observed in AlGaN nanowires, indicating lower degrees of disorder-induced uncertainty in the wider bandgap semiconductor. To study the photoinduced entropy , the scientists have developed a mathematical model that considers the net energy exchange resulting from photoexcitation and photoluminescence.
In phosphor thermometry , the temperature dependence of the photoluminescence process is exploited to measure temperature.
Photoluminescence spectroscopy is a widely used technique for characterisation of the optical and electronic properties of semiconductors and molecules. The technique itself is fast, contactless, and nondestructive. Therefore, it can be used to study the optoelectronic properties of materials of various sizes (from microns to centimeters) during the fabrication process without complex sample preparation. [ 24 ] For example, photoluminescence measurements of solar cell absorbers can predict the maximum voltage the material could produce. [ 25 ] In chemistry, the method is more often referred to as fluorescence spectroscopy , but the instrumentation is the same. The relaxation processes can be studied using time-resolved fluorescence spectroscopy to find the decay lifetime of the photoluminescence. These techniques can be combined with microscopy, to map the intensity ( confocal microscopy ) or the lifetime ( fluorescence-lifetime imaging microscopy ) of the photoluminescence across a sample (e.g. a semiconducting wafer, or a biological sample that has been marked with fluorescent molecules). Modulated photoluminescence is a specific method for measuring the complex frequency response of the photoluminescence signal to a sinusoidal excitation, allowing for the direct extraction of minority carrier lifetime without the need for intensity calibrations. It has been used to study the influence of interface defects on the recombination of excess carriers in crystalline silicon wafers with different passivation schemes. [ 26 ] | https://en.wikipedia.org/wiki/Photoluminescence |
Photoluminescence excitation (abbreviated PLE ) is a specific type of photoluminescence and concerns the interaction between electromagnetic radiation and matter . It is used in spectroscopic measurements where the frequency of the excitation light is varied, and the luminescence is monitored at the typical emission frequency of the material being studied. Peaks in the PLE spectra often represent absorption lines of the material. PLE spectroscopy is a useful method to investigate the electronic level structure of materials with low absorption [ 1 ] due to the superior signal-to-noise ratio of the method compared to absorption measurements.
In a quantum-mechanical description of matter, the electrons confined to a material (such as those in individual atoms, molecules or crystals) are limited to a discrete set of energy values. The ground state of such a material system is the configuration in which the most energetic electron has its minimum possible energy. In photoluminescence, energy from light incident on and absorbed by the material is transferred to electrons. The light is absorbed in minimal "quanta" or "packets" of electromagnetic energy called photons . The amount of energy carried by a photon is proportional to its frequency. The electron is then in an excited state of higher energy. Such states are not stable, and with time the material system will return to its ground state as the electron loses its energy. Luminescence is the process whereby light is emitted when the electron drops to a lower energy level.
Often, when a photon is absorbed, the system is excited into the corresponding excited state and then relaxes into an intermediate state of lower energy through "non-radiative relaxation" (a relaxation that does not involve the emission of a photon but may, for example, release vibrational energy). A photon with lower energy than the absorbed one is then emitted as the system relaxes from the intermediate, lower-energy state to the ground state. Usually the strongest luminescence of the material is from the lowest excited levels to the ground state. This process is called fluorescence . For instance, in semiconductors , most of the light emitted is at the frequency corresponding to the bandgap energy, i.e. from the bottom of the conduction band to the top of the valence band. In such systems, the more light the material absorbs, the more electrons decay non-radiatively to the lower states, and the more luminescence appears at the emission wavelength.
| https://en.wikipedia.org/wiki/Photoluminescence_excitation
Photomagnetism (the photomagnetic effect ) is an effect in which a material acquires (and in some cases loses) its ferromagnetic properties in response to light. The current model for this phenomenon is a light-induced electron transfer , accompanied by the reversal of the spin direction of an electron . This leads to an increase in spin concentration, causing the magnetic transition. [ 1 ] Currently the effect is observed to persist (for any significant time) only at very low temperatures, but at temperatures around 5 K it may persist for several days. [ 1 ]
The magnetisation and demagnetisation (where not demagnetised thermally) occur through intermediate states [ 2 ] as shown (right). The magnetising and demagnetising wavelengths provide the energy for the system to reach the intermediate states, which then relax non-radiatively to one of the two states (the intermediate states for magnetisation and demagnetisation are different, so the photon flux is not wasted by relaxation back to the state from which the system was just excited). A direct transition from the ground state to the magnetic state and, more importantly, vice versa is a forbidden transition ; this leads to the magnetised state being metastable and persisting for a long period at low temperatures.
One of the most promising groups of molecular photomagnetic materials are the Co-Fe Prussian blue analogues (i.e. compounds with the same structure and similar chemical makeup to Prussian blue ). A Prussian blue analogue has the chemical formula M 1-2x Co 1+x [Fe(CN) 6 ]•zH 2 O, where x and z are variables (z may be zero) and M is an alkali metal. Prussian blue analogues have a face-centred cubic structure.
It is essential that the structure be non-stoichiometric . [ 3 ] In this case some of the iron sites are randomly replaced by water (six water molecules per replaced iron). This non-stoichiometry is essential to the photomagnetism of Prussian blue analogues, as regions which contain an iron vacancy are more stable in the non-magnetic state and regions without a vacancy are more stable in the magnetic state. By illumination at the correct frequency, one or the other of these regions can be locally switched to its more stable state, triggering the phase change of the entire material. The reverse phase change can be accomplished by exciting the other type of region at the appropriate frequency.
A photomask (also simply called a mask ) is an opaque plate with transparent areas that allow light to shine through in a defined pattern. Photomasks are commonly used in photolithography for the production of integrated circuits (ICs or "chips") to produce a pattern on a thin wafer of material (usually silicon ). In semiconductor manufacturing, a mask is sometimes called a reticle . [ 1 ] [ 2 ]
In photolithography, several masks are used in turn, each one reproducing a layer of the completed design, and together known as a mask set . A curvilinear photomask has patterns with curves, a departure from conventional photomasks, whose patterns are composed entirely of vertical and horizontal segments, known as Manhattan geometry. Curvilinear photomasks require special equipment to manufacture. [ 3 ]
For IC production in the 1960s and early 1970s, an opaque rubylith film laminated onto a transparent mylar sheet was used. The design of one layer was cut into the rubylith, initially by hand on an illuminated drafting table (later by machine ( plotter )), and the unwanted rubylith was peeled off by hand, forming the master image of that layer of the chip, often called "artwork". Increasingly complex and thus larger chips required larger and larger rubyliths, eventually filling the wall of a room, and the artworks were photographically reduced to produce photomasks. (Eventually this whole process was replaced by the optical pattern generator, which produced the master image directly.) At this point the master image could be arrayed into a multi-chip image called a reticle . The reticle was originally a 10× larger image of a single chip.
The reticle was used, by step-and-repeat photolithography and etching, to produce a photomask with an image size the same as the final chip. The photomask might be used directly in the fab or serve as a master photomask from which the final working photomasks were produced.
As feature sizes shrank, the only way to focus the image properly was to place the photomask in direct contact with the wafer. These contact aligners often lifted some of the photoresist off the wafer and onto the photomask, which then had to be cleaned or discarded. This drove the adoption of reverse master photomasks (see above), which were used to produce (with contact photolithography and etching) the many working photomasks needed. Later, projection photolithography meant photomask lifetime was indefinite. Still later, direct-step-on-wafer stepper photolithography used reticles directly and ended the use of photomasks.
Photomask materials changed over time. Initially, soda glass [ 4 ] was used with silver halide for opacity. Later, borosilicate [ 5 ] and then fused silica were introduced to control thermal expansion, along with chromium , which has better opacity to ultraviolet light. The original pattern generators have since been replaced by electron beam lithography and laser -driven mask writer or maskless lithography systems, which generate reticles directly from the original computerized design.
Lithographic photomasks are typically transparent fused silica plates covered with a pattern defined with a chromium (Cr) or Fe 2 O 3 metal absorbing film. [ 6 ] Photomasks are used at wavelengths of 365 nm , 248 nm, and 193 nm. Photomasks have also been developed for other forms of radiation such as 157 nm, 13.5 nm ( EUV ), X-ray , electrons , and ions ; but these require entirely new materials for the substrate and the pattern film. [ 6 ]
A set of photomasks , each defining a pattern layer in integrated circuit fabrication , is fed into a photolithography stepper or scanner , and individually selected for exposure. In multi-patterning techniques, a photomask would correspond to a subset of the layer pattern.
Historically in photolithography for the mass production of integrated circuit devices, there was a distinction between the term photoreticle or simply reticle , and the term photomask . In the case of a photomask, there is a one-to-one correspondence between the mask pattern and the wafer pattern. The mask covered the entire surface of the wafer which was exposed in its entirety in one shot. This was the standard for the 1:1 mask aligners that were succeeded by steppers and scanners with reduction optics. [ 7 ] As used in steppers and scanners which use image projection, [ 8 ] the reticle commonly contains only one copy, also called one layer of the designed VLSI circuit. (However, some photolithography fabrications utilize reticles with more than one layer placed side by side onto the same mask, used as copies to create several identical integrated circuits from one photomask). In modern usage, the terms reticle and photomask are synonymous. [ 9 ]
In a modern stepper or scanner, the pattern in the photomask is projected and shrunk by four or five times onto the wafer surface. [ 10 ] To achieve complete wafer coverage, the wafer is repeatedly " stepped " from position to position under the optical column or the stepper lens until full exposure of the wafer is achieved. A photomask with several copies of the integrated circuit design is used to reduce the number of steppings required to expose the entire wafer, thus increasing productivity.
Features 150 nm or below in size generally require phase-shifting to enhance the image quality to acceptable values. This can be achieved in many ways. The two most common methods are to use an attenuated phase-shifting background film on the mask to increase the contrast of small intensity peaks, or to etch the exposed quartz so that the edge between the etched and unetched areas can be used to image nearly zero intensity. In the second case, unwanted edges would need to be trimmed out with another exposure. The former method is attenuated phase-shifting , and is often considered a weak enhancement, requiring special illumination for the most enhancement, while the latter method is known as alternating-aperture phase-shifting , and is the most popular strong enhancement technique.
As leading-edge semiconductor features shrink , photomask features that are 4× larger must inevitably shrink as well. This could pose challenges since the absorber film will need to become thinner, and hence less opaque. [ 11 ] A 2005 study by IMEC found that thinner absorbers degrade image contrast and therefore contribute to line-edge roughness, using state-of-the-art photolithography tools. [ 12 ] One possibility is to eliminate absorbers altogether and use "chromeless" masks, relying solely on phase-shifting for imaging. [ 13 ] [ 14 ]
The emergence of immersion lithography has a strong impact on photomask requirements. The commonly used attenuated phase-shifting mask is more sensitive to the higher incidence angles applied in "hyper-NA" lithography, due to the longer optical path through the patterned film. [ 15 ] During manufacturing, inspection using a special form of microscopy called CD-SEM (Critical-Dimension Scanning Electron Microscopy) is used to measure critical dimensions on photomasks which are the dimensions of the patterns on a photomask. [ 16 ]
EUV photomasks work by reflecting light, [ 17 ] which is achieved by using multiple alternating layers of molybdenum and silicon .
Leading-edge photomask (pre-corrected) images of the final chip patterns are magnified by four times. This magnification factor has been a key benefit in reducing pattern sensitivity to imaging errors. However, as features continue to shrink, two trends come into play: the first is that the mask error factor begins to exceed one, i.e., the dimension error on the wafer may be more than 1/4 the dimension error on the mask, [ 18 ] and the second is that the mask feature is becoming smaller while the dimension tolerance is approaching a few nanometers. For example, a 25 nm wafer pattern should correspond to a 100 nm mask pattern, but the wafer tolerance could be 1.25 nm (a 5% spec), which translates into 5 nm on the photomask. The variation of electron-beam scattering in directly writing the photomask pattern can easily exceed this. [ 19 ] [ 20 ]
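The tolerance budget in the example above can be checked with simple arithmetic; the mask error enhancement factor (MEEF) value used at the end is an assumed figure for illustration only:

```python
MAG = 4          # photomask patterns are drawn at 4x the wafer scale
wafer_cd = 25.0  # nm, target wafer feature (critical dimension)

mask_cd = wafer_cd * MAG            # 100 nm feature on the mask
wafer_tol = 0.05 * wafer_cd         # 1.25 nm wafer tolerance (5% spec)

# With an ideal mask error factor of 1, a wafer tolerance maps to
# MAG times that value on the mask:
mask_tol = wafer_tol * MAG          # 5 nm mask tolerance

# When the mask error factor exceeds 1, the usable mask tolerance
# shrinks proportionally (MEEF = 2 is an assumed illustrative value):
meef = 2.0
mask_tol_meef = wafer_tol * MAG / meef   # 2.5 nm
```

The shrinking usable tolerance is why electron-beam scattering variation during mask writing becomes a limiting concern.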
The term "pellicle" is used to mean "film", "thin film", or "membrane." Beginning in the 1960s, thin film stretched on a metal frame, also known as a "pellicle", was used as a beam splitter for optical instruments. It has been used in a number of instruments to split a beam of light without causing an optical path shift due to its small film thickness. In 1978, Shea et al. at IBM patented a process to use the "pellicle" as a dust cover to protect a photomask or reticle. In the context of this entry, "pellicle" means "thin film dust cover to protect a photomask".
Particle contamination can be a significant problem in semiconductor manufacturing. A photomask is protected from particles by a pellicle – a thin transparent film stretched over a frame that is glued over one side of the photomask. The pellicle is far enough away from the mask patterns that moderate-to-small particles landing on it will be too far out of focus to print. Although they are designed to keep particles away, pellicles become a part of the imaging system, and their optical properties need to be taken into account. Pellicle materials include nitrocellulose, made for various transmission wavelengths. Current pellicles are made from polysilicon, and companies are exploring other materials for high-NA EUV and future chip-making processes. [ 21 ] [ 22 ]
The SPIE Annual Conference, Photomask Technology reports the SEMATECH Mask Industry Assessment which includes current industry analysis and the results of their annual photomask manufacturers survey.
The following companies are listed in order of their global market share (2009 info): [ 23 ]
Major chipmakers such as Intel , Globalfoundries , IBM , NEC , TSMC , UMC , Samsung , and Micron Technology , have their own large maskmaking facilities or joint ventures with the abovementioned companies.
The worldwide photomask market was estimated as $3.2 billion in 2012 [ 24 ] and $3.1 billion in 2013. Almost half of the market was from captive mask shops (in-house mask shops of major chipmakers). [ 25 ]
The cost of creating a new mask shop for 180 nm processes was estimated in 2005 at $40 million, and for 130 nm at more than $100 million. [ 26 ]
The purchase price of a photomask, in 2006, could range from $250 to $100,000 [ 27 ] for a single high-end phase-shift mask . As many as 30 masks (of varying price) may be required to form a complete mask set. As modern chips are built in several layers stacked on top of each other, at least one mask is required for each of these layers. | https://en.wikipedia.org/wiki/Photomask |
Photomath is an educational technology mobile app, owned by Google . It features a computer algebra system with an augmented optical character recognition system, designed for use with a smartphone's camera to scan and recognize mathematical equations; the app then displays step-by-step explanations onscreen. [ 4 ]
The app is based on a text recognition engine developed by Microblink, a company based in London and Croatia and led by founder Damir Sabol, which also includes the developers of both Photomath and Photopay. [ 5 ] [ 6 ] Photomath LLC was legally registered in San Mateo, California . In 2021, Photomath announced $23 million in Series B funding led by Menlo Ventures , [ 7 ] [ 8 ] with contributions from GSV Ventures, Learn Capital, Cherubic Ventures, and Goodwater Capital. [ 9 ]
In May 2022, Google announced it would acquire the company for an undisclosed amount. After review by the European Commission , the deal received approval in March 2023 [ 10 ] and concluded in June. This takeover represented the largest startup acquisition in Croatian history, with Photomath being the nation's leading app at that time. This acquisition was cited as a strategic move by Google in response to ChatGPT . [ 11 ] Upon Photomath's dissolution, Sabol transitioned to the role of Director of Software Engineering at Google. [ 12 ] As of February 29, 2024, Google has integrated the app into its Play Store publisher portfolio. [ 13 ]
Photomath utilizes the camera of a user's smartphone or tablet to scan and identify mathematical problems. [ 4 ] Upon recognition, the app displays the steps to solve the problem. The app presents these steps through various methods and approaches, elucidating the problem-solving process in a step-by-step manner to educate users.
Starting in 2016, the app expanded its capabilities to include handwriting recognition, alongside printed text, allowing students to scan both textbooks and handwritten mathematical notes. [ 14 ] [ 15 ]
In 2017, Photomath was recognized by The Tech Edvocate as one of the top 20 teaching and learning applications. [ 16 ] [ 17 ]
While Photomath is predominantly free, it also provides a subscription-based service, ‘Photomath Plus’, which enhances functionality with features like solving mathematical word problems and providing solutions to textbook exercises. [ 18 ] [ 19 ] [ 20 ]
As of 2021, Photomath boasts over 220 million downloads globally, with its official website reporting the resolution of 2.2 billion problems monthly and adoption by over 1 million educators. [ 21 ] [ 22 ] | https://en.wikipedia.org/wiki/Photomath |
Photomechanical effect is the change in the shape of a material when it is exposed to light . This effect was first documented by Alexander Graham Bell in 1880. [ 1 ] Kenji Uchino demonstrated that a photostrictive material could be used for legs in the construction of a miniature optically-powered "walker". [ 2 ]
The most common mechanism of the photomechanical effect is light-induced heating.
Photomechanical materials may be considered smart materials , since their change of shape is driven by an external stimulus.
| https://en.wikipedia.org/wiki/Photomechanical_effect
Photometric parallax is a means to infer the distances of stars using their colours and apparent brightnesses. It was used by the Sloan Digital Sky Survey to discover the Virgo super star cluster .
Assuming that a star is on the main sequence, the star's absolute magnitude can be determined based on its color. Once the absolute and apparent magnitudes are known, the distance to the star can be determined by using the distance modulus . Despite its name, the method does not actually employ any measurement of parallax, so "photometric parallax" can be considered a misnomer.
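The distance modulus described above, m − M = 5 log₁₀(d) − 5, can be inverted directly; the magnitudes in this sketch are made-up values for illustration:

```python
def photometric_distance_pc(apparent_mag, absolute_mag):
    """Distance in parsecs from the distance modulus:
    m - M = 5*log10(d) - 5  =>  d = 10**((m - M + 5) / 5)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A main-sequence star whose colour implies an absolute magnitude
# M = +5 (roughly solar), observed at apparent magnitude m = 10,
# lies at 10**((10 - 5 + 5)/5) = 100 parsecs:
d = photometric_distance_pc(10.0, 5.0)
```

The uncertainty of the result is dominated by how well the colour pins down the absolute magnitude, which is why individual photometric-parallax distances are much less accurate than trigonometric ones.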
Unlike the stellar parallax method, the photometric parallax method can be used to estimate the distances of stars over 10 kpc away, at the expense of much more limited accuracy for individual measurements.
This astronomy -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Photometric_parallax |
In astronomy , photometry , from Greek photo- ("light") and -metry ("measure"), is a technique concerned with measuring the flux or intensity of light radiated by astronomical objects . [ 1 ] This light is measured through a telescope using a photometer , often made using electronic devices such as a CCD photometer or a photoelectric photometer that converts light into an electric current by the photoelectric effect . When calibrated against standard stars (or other light sources) of known intensity and colour, photometers can measure the brightness or apparent magnitude of celestial objects.
The methods used to perform photometry depend on the wavelength region under study. At its most basic, photometry is conducted by gathering light and passing it through specialized photometric optical bandpass filters , and then capturing and recording the light energy with a photosensitive instrument. Standard sets of passbands (called a photometric system ) are defined to allow accurate comparison of observations. [ 2 ] A more advanced technique is spectrophotometry that is measured with a spectrophotometer and observes both the amount of radiation and its detailed spectral distribution . [ 3 ]
Photometry is also used in the observation of variable stars , [ 4 ] by various techniques such as differential photometry, which simultaneously measures the brightness of a target object and nearby stars in the starfield, [ 5 ] or relative photometry, which compares the brightness of the target object to stars with known fixed magnitudes. [ 6 ] Using multiple bandpass filters with relative photometry is termed absolute photometry . A plot of magnitude against time produces a light curve , yielding considerable information about the physical process causing the brightness changes. [ 7 ] Precision photoelectric photometers can measure starlight to around 0.001 magnitude. [ 8 ]
The technique of surface photometry, which measures apparent brightness in terms of magnitudes per square arcsecond, can also be used with extended objects like planets , comets , nebulae or galaxies . [ 9 ] Knowing the area of the object and the average intensity of light across it determines the surface brightness in magnitudes per square arcsecond, while integrating the total light of the extended object then gives its brightness in terms of total magnitude, energy output or luminosity per unit surface area.
Astronomy was among the earliest applications of photometry. Modern photometers use specialised standard passband filters across the ultraviolet , visible , and infrared wavelengths of the electromagnetic spectrum . [ 4 ] Any adopted set of filters with known light transmission properties is called a photometric system , and allows the establishment of particular properties about stars and other types of astronomical objects. [ 10 ] Several important systems are regularly used, such as the UBV system [ 11 ] (or the extended UBVRI system [ 12 ] ), near infrared JHK [ 13 ] or the Strömgren uvbyβ system . [ 10 ]
Historically, photometry in the near- infrared through short-wavelength ultra-violet was done with a photoelectric photometer, an instrument that measured the light intensity of a single object by directing its light onto a photosensitive cell like a photomultiplier tube . [ 4 ] These have largely been replaced with CCD cameras that can simultaneously image multiple objects, although photoelectric photometers are still used in special situations, [ 14 ] such as where fine time resolution is required. [ 15 ]
Modern photometric methods define magnitudes and colours of astronomical objects using electronic photometers viewed through standard coloured bandpass filters. This differs from other expressions of apparent visual magnitude [ 7 ] observed by the human eye or obtained by photography, [ 4 ] which usually appear in older astronomical texts and catalogues.
Magnitudes measured by photometers in some commonplace photometric systems (UBV, UBVRI or JHK) are expressed with a capital letter, such as "V" (m V ) or "B" (m B ). Other magnitudes estimated by the human eye are expressed using lower case letters, such as "v", "b" or "p", etc. [ 16 ] For example, visual magnitudes are expressed as m v , [ 17 ] while photographic magnitudes are m ph / m p and photovisual magnitudes m p or m pv . [ 17 ] [ 4 ] Hence, a 6th magnitude star might be stated as 6.0V, 6.0B, 6.0v or 6.0p. Because starlight is measured over different ranges of wavelengths across the electromagnetic spectrum and is affected by different instrumental photometric sensitivities to light, these magnitudes are not necessarily equivalent in numerical value. [ 16 ] For example, the apparent magnitude in the UBV system for the solar-like star 51 Pegasi [ 18 ] is 5.46V, 6.16B or 6.39U, [ 19 ] corresponding to magnitudes observed through each of the visual 'V', blue 'B' or ultraviolet 'U' filters.
Magnitude differences between filters indicate colour differences and are related to temperature. [ 20 ] Using the B and V filters in the UBV system produces the B–V colour index. [ 20 ] For 51 Pegasi , B–V = 6.16 – 5.46 = +0.70, suggesting a yellow-coloured star in agreement with its G2IV spectral type. [ 21 ] [ 19 ] The B–V value determines the star's surface temperature, [ 22 ] giving an effective surface temperature of 5768±8 K. [ 23 ]
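The 51 Pegasi colour index above can be reproduced with simple arithmetic. The temperature step below uses Ballesteros' empirical B–V formula as one common approximation; it is an assumption of this sketch, not necessarily the method behind the cited 5768 K value:

```python
def color_index(b_mag, v_mag):
    """B-V colour index: smaller (or negative) values indicate bluer,
    hotter stars; larger values indicate redder, cooler stars."""
    return b_mag - v_mag

def ballesteros_temperature(bv):
    """Approximate effective temperature (K) from B-V, using the
    Ballesteros (2012) empirical fit (assumed here for illustration)."""
    return 4600.0 * (1.0 / (0.92 * bv + 1.7) + 1.0 / (0.92 * bv + 0.62))

bv = color_index(6.16, 5.46)          # +0.70 for 51 Pegasi
t_eff = ballesteros_temperature(bv)   # roughly 5600 K, near the quoted 5768 K
```

The few-percent gap between this estimate and the quoted value reflects the limits of a single empirical colour-temperature relation.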
Another important application of colour indices is plotting stars' apparent magnitudes against the B–V colour index. This forms the important relationships found between sets of stars in colour–magnitude diagrams , which for stars is the observed version of the Hertzsprung-Russell diagram . Typically, photometric measurements of multiple objects obtained through two filters will show, for example in an open cluster , [ 24 ] the comparative stellar evolution of the component stars, or can be used to determine the cluster's relative age. [ 25 ]
Due to the large number of different photometric systems adopted by astronomers, there are many expressions of magnitudes and their indices. [ 10 ] Each of these newer photometric systems, excluding the UBV, UBVRI or JHK systems, assigns an upper or lower case letter to the filter used. For example, magnitudes used by Gaia are 'G' [ 26 ] (with the blue and red photometric filters, G BP and G RP [ 27 ] ), while the Strömgren photometric system has lower case letters of 'u', 'v', 'b', 'y', and two narrow and wide 'β' ( Hydrogen-beta ) filters. [ 10 ] Some photometric systems also have certain advantages. For example, Strömgren photometry can be used to measure the effects of reddening and interstellar extinction . [ 28 ] Strömgren allows calculation of parameters from the b and y filters (colour index of b − y ) without the effects of reddening, as the indices m 1 and c 1 . [ 28 ]
There are many astronomical applications used with photometric systems. Photometric measurements can be combined with the inverse-square law to determine the luminosity of an object if its distance can be determined, or its distance if its luminosity is known. Other physical properties of an object, such as its temperature or chemical composition, may also be determined via broad or narrow-band spectrophotometry.
Photometry is also used to study the light variations of objects such as variable stars , minor planets , active galactic nuclei and supernovae , [ 7 ] or to detect transiting extrasolar planets . Measurements of these variations can be used, for example, to determine the orbital period and the radii of the members of an eclipsing binary star system, the rotation period of a minor planet or a star, or the total energy output of supernovae. [ 7 ]
A CCD ( charge-coupled device ) camera is essentially a grid of photometers, simultaneously measuring and recording the photons coming from all the sources in the field of view. Because each CCD image records the photometry of multiple objects at once, various forms of photometric extraction can be performed on the recorded data; typically relative, absolute, and differential. All three will require the extraction of the raw image magnitude of the target object, and a known comparison object.
The observed signal from an object will typically cover many pixels according to the point spread function (PSF) of the system. This broadening is due to both the optics in the telescope and the astronomical seeing . When obtaining photometry from a point source , the flux is measured by summing all the light recorded from the object and subtracting the light due to the sky. [ 29 ] The simplest technique, known as aperture photometry, consists of summing the pixel counts within an aperture centered on the object and subtracting the product of the nearby average sky count per pixel and the number of pixels within the aperture. [ 29 ] [ 30 ] This will result in the raw flux value of the target object. When doing photometry in a very crowded field, such as a globular cluster , where the profiles of stars overlap significantly, one must use de-blending techniques, such as PSF fitting to determine the individual flux values of the overlapping sources. [ 31 ]
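Aperture photometry as described reduces to a sum inside an aperture minus a sky estimate from a surrounding annulus; a toy sketch on a small synthetic pixel grid (all values invented):

```python
def aperture_flux(image, cx, cy, r_ap, r_sky_in, r_sky_out):
    """Simple aperture photometry: sum pixel counts within a circular
    aperture of radius r_ap centred on (cx, cy), then subtract the
    mean sky level estimated in the annulus [r_sky_in, r_sky_out]."""
    ap_sum, ap_n = 0.0, 0
    sky_sum, sky_n = 0.0, 0
    for y, row in enumerate(image):
        for x, counts in enumerate(row):
            r2 = (x - cx) ** 2 + (y - cy) ** 2
            if r2 <= r_ap ** 2:
                ap_sum += counts
                ap_n += 1
            elif r_sky_in ** 2 <= r2 <= r_sky_out ** 2:
                sky_sum += counts
                sky_n += 1
    sky_per_pixel = sky_sum / sky_n
    return ap_sum - sky_per_pixel * ap_n   # raw flux of the source

# 7x7 frame: a uniform sky of 10 counts with a "star" at the centre.
frame = [[10.0] * 7 for _ in range(7)]
frame[3][3] += 500.0
flux = aperture_flux(frame, 3, 3, 1.5, 2.5, 3.5)
```

Real pipelines refine this with sub-pixel aperture edges, robust sky estimators, and PSF fitting for crowded fields, but the sum-minus-sky structure is the same.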
After determining the flux of an object in counts, the flux is normally converted into instrumental magnitude . Then, the measurement is calibrated in some way. Which calibrations are used will depend in part on what type of photometry is being done. Typically, observations are processed for relative or differential photometry. [ 32 ] Relative photometry is the measurement of the apparent brightness of multiple objects relative to each other. Absolute photometry is the measurement of the apparent brightness of an object on a standard photometric system ; these measurements can be compared with other absolute photometric measurements obtained with different telescopes or instruments. Differential photometry is the measurement of the difference in brightness of two objects. In most cases, differential photometry can be done with the highest precision , while absolute photometry is the most difficult to do with high precision. Also, accurate photometry is usually more difficult when the apparent brightness of the object is fainter.
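The counts-to-magnitude step is the usual logarithmic conversion; in this sketch the zero point is an arbitrary placeholder that calibration would later fix:

```python
import math

def instrumental_magnitude(counts, exposure_s=1.0, zero_point=0.0):
    """Instrumental magnitude from detector counts:
    m_inst = zp - 2.5*log10(counts / exposure).
    The zero point is fixed afterwards by calibration against
    standard stars (absolute) or a comparison object (relative)."""
    return zero_point - 2.5 * math.log10(counts / exposure_s)

# A source delivering 100x fewer counts than another is
# 5 magnitudes fainter, whatever the (shared) zero point:
dm = instrumental_magnitude(100.0) - instrumental_magnitude(10000.0)
```

Because the zero point cancels in differences, relative and differential photometry sidestep much of the calibration burden that absolute photometry carries.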
To perform absolute photometry one must correct for differences between the effective passband through which an object is observed and the passband used to define the standard photometric system. This is often in addition to all of the other corrections discussed above. Typically this correction is done by observing the object(s) of interest through multiple filters and also observing a number of photometric standard stars . If the standard stars cannot be observed simultaneously with the target(s), this correction must be done under photometric conditions, when the sky is cloudless and the extinction is a simple function of the airmass .
To perform relative photometry, one compares the instrument magnitude of the object to that of a known comparison object, and then corrects the measurements for spatial variations in the sensitivity of the instrument and the atmospheric extinction. This is often in addition to correcting for their temporal variations, particularly when the objects being compared are too far apart on the sky to be observed simultaneously. [ 6 ] When the calibration is done from an image that contains both the target and comparison objects in close proximity, using a photometric filter that matches the catalog magnitude of the comparison object, most of the measurement variations decrease to null.
Differential photometry is the simplest of the calibrations and most useful for time series observations. [ 5 ] When using CCD photometry, both the target and comparison objects are observed at the same time, with the same filters, using the same instrument, and viewed through the same optical path. Most of the observational variables drop out and the differential magnitude is simply the difference between the instrument magnitude of the target object and the comparison object (∆Mag = C Mag – T Mag). This is very useful when plotting the change in magnitude over time of a target object, and is usually compiled into a light curve . [ 5 ]
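The differential relation ∆Mag = C Mag – T Mag can be sketched for a short time series (all magnitude values below are made up for illustration); because both stars are measured in the same frames, shared systematics cancel in the difference:

```python
import numpy as np

# Instrument magnitudes of a target and a comparison star measured in the
# same frames (illustrative values, one entry per observation epoch).
comp_mag   = np.array([12.31, 12.35, 12.29, 12.33])  # comparison star, C Mag
target_mag = np.array([14.02, 14.10, 13.95, 14.05])  # target object, T Mag

# Differential magnitude per frame: ∆Mag = C Mag − T Mag.
delta_mag = comp_mag - target_mag
```

Plotting `delta_mag` against observation time would give the light curve described above.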
For spatially extended objects such as galaxies , it is often of interest to measure the spatial distribution of brightness within the galaxy rather than simply measuring the galaxy's total brightness. An object's surface brightness is its brightness per unit solid angle as seen in projection on the sky, and measurement of surface brightness is known as surface photometry. [ 9 ] A common application would be measurement of a galaxy's surface brightness profile, meaning its surface brightness as a function of distance from the galaxy's center. For small solid angles, a useful unit of solid angle is the square arcsecond , and surface brightness is often expressed in magnitudes per square arcsecond. The diameters of galaxies are often defined by the size of the 25th-magnitude isophote in the blue B-band. [ 33 ]
In forced photometry , measurements are conducted at a specified location rather than for a specified object . It is "forced" in the sense that a measurement can be taken even if there is no object visible (in the spectral band of interest) in the location being observed. Forced photometry allows extracting a magnitude, or an upper limit for the magnitude, at a chosen sky location. [ 34 ] [ 35 ] [ 36 ]
A number of free computer programs are available for synthetic aperture photometry and PSF-fitting photometry.
SExtractor [ 37 ] and Aperture Photometry Tool [ 38 ] are popular examples for aperture photometry. The former is geared towards reduction of large scale galaxy-survey data, and the latter has a graphical user interface (GUI) suitable for studying individual images.
DAOPHOT is recognized as the best software for PSF-fitting photometry. [ 31 ]
There are a number of organizations, from professional to amateur, that gather and share photometric data and make it available online. Some sites gather the data primarily as a resource for other researchers (e.g., AAVSO) and some solicit contributions of data for their own research (e.g., CBA): | https://en.wikipedia.org/wiki/Photometry_(astronomy) |
In physics , photon-induced electric field poling is a phenomenon whereby a pattern of local electric field orientations can be encoded in a suitable ferroelectric material , such as perovskite . The resulting encoded material is conceptually similar to the pattern of magnetic field orientations within the magnetic domains of a ferromagnet , and thus may be considered a possible technology for computer storage media. The encoded regions are optically active (have a varying index of refraction ) and thus may be "read out" optically.
The encoding process proceeds by application of ultraviolet light tuned to the absorption band associated with the transition of electrons from the valence band to the conduction band . During UV application, an external electric field is used to modify the electric dipole moment of regions of the ferroelectric material that are exposed to UV light. By this process, a pattern of local electric field orientations can be encoded.
Technically, the encoding effect proceeds by the creation of a population inversion between the valence and conduction bands, with the resulting creation of plasmons . During this time, ferroelectric perovskite materials can be forced to change geometry by the application of an electric field. The encoded regions become optically active due to the Pockels effect .
The pattern of ferroelectric domain orientations can be read out optically. The refractive index of the ferroelectric material at wavelengths from near- infrared through to near-ultraviolet is affected by the electric field within the material. A changing pattern of electric field domains within a ferroelectric substrate results in different regions of the substrate having different refractive indices. Under these conditions, the substrate behaves as a diffraction grating , allowing the pattern of domains to be inferred from the interference pattern present in the transmitted readout beam. | https://en.wikipedia.org/wiki/Photon-induced_electric_field_poling |
2. Measurement of the number and spatial distribution of photons produced in particle collisions
The Photon Multiplicity Detector (PMD) is a detector used in the measurement of the multiplicity and spatial distribution of photons produced in nucleus - nucleus collisions. [ 1 ] [ 2 ] It was incorporated in the WA93 experiment . [ 2 ] The funding for research and development of the PMD design was provided by the Department of Atomic Energy (DAE) and the Department of Science and Technology (DST) of the Government of India . The detector was constructed in collaboration with the Variable Energy Cyclotron Centre in Kolkata , the Institute of Physics in Bhubaneswar and groups from universities at Chandigarh , Jaipur and Jammu . [ 3 ]
A PMD typically consists of two main layers: a veto detector and a preshower detector. The veto detector layer is designed to reject charged particles. Photons pass through a converter in the preshower detector layer, initiating an electromagnetic shower. The detector then measures the number of cells activated by the shower, providing information about the photon's energy and position. [ 1 ]
At the ALICE experiment , the PMD was used to measure the multiplicity and pseudorapidity density distributions of inclusive photons at forward rapidity , spanning the range η = 2.3 to 3.9. The measurement was conducted with the PMD using LHC Run 1 and 2 data in pp ( proton - proton ), pPb and Pbp collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair. [ 4 ]
| https://en.wikipedia.org/wiki/Photon_Multiplicity_Detector |
A photon bubble is a type of radiation-driven instability that can occur in the magnetized, radiation-supported gas surrounding neutron stars , black hole accretion disks or at the edge of ultra-compact HII regions around young, massive stars . [ 1 ] [ 2 ] [ 3 ] The instability occurs as follows. A compressive magnetohydrodynamical wave propagating at right angles to the direction of propagation of the radiation creates variations in the density of the gas. More radiation is able to pass through the low density regions than through the high density regions, and the imbalance in radiation pressure acts to drive gas out of the low density regions, along the magnetic field lines . This further decreases the density of the low density regions, which in turn allows more radiation to propagate through them, leading to runaway growth of the instability. [ 3 ]
| https://en.wikipedia.org/wiki/Photon_bubble |
The photon diffusion equation is a second-order partial differential equation describing the time behavior of the photon fluence rate distribution in a low-absorption, high-scattering medium.
Its mathematical form is as follows. ∇ ⋅ ( D ( r → ) ∇ Φ ( r → , t ) ) − v μ a ( r → ) Φ ( r → , t ) + v S ( r → , t ) = ∂ Φ ( r → , t ) ∂ t {\displaystyle \nabla \cdot (D({\vec {r}})\nabla \Phi ({\vec {r}},t))-v\mu _{a}({\vec {r}})\Phi ({\vec {r}},t)+vS({\vec {r}},t)={\frac {\partial \Phi ({\vec {r}},t)}{\partial t}}} where Φ {\displaystyle \Phi } is the photon fluence rate (W/cm 2 ), ∇ {\displaystyle \nabla } is the del operator, μ a {\displaystyle \mu _{a}} is the absorption coefficient (cm −1 ), D {\displaystyle D} is the diffusion constant , v {\displaystyle v} is the speed of light in the medium (cm/s), and S {\displaystyle S} is an isotropic source term (W/cm 3 ).
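As an illustrative sketch (the grid, optical constants, and step sizes below are invented for the example, and periodic boundaries are assumed for simplicity), one explicit finite-difference time step of a 1-D version of this equation with constant D might look like:

```python
import numpy as np

# One explicit finite-difference step of the 1-D photon diffusion equation
# with constant diffusion coefficient D:
#   dPhi/dt = D * d^2Phi/dx^2 - v * mu_a * Phi + v * S
def step(phi, D, v, mu_a, S, dx, dt):
    # Periodic-boundary discrete Laplacian.
    lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx ** 2
    return phi + dt * (D * lap - v * mu_a * phi + v * S)

# With no source and a spatially uniform fluence rate, the Laplacian
# vanishes and only the absorption term acts.
phi0 = np.ones(64)
phi1 = step(phi0, D=0.03, v=2.2e10, mu_a=0.1, S=np.zeros(64), dx=0.1, dt=1e-12)
```

In this degenerate case the update reduces to exponential-decay behavior, Φ → Φ(1 − v μ_a Δt), which makes the role of the absorption term easy to see.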
Its main difference from the ordinary diffusion equation in physics is the presence of an absorption term.
The properties of photon diffusion described by the equation are used in diffuse optical tomography . | https://en.wikipedia.org/wiki/Photon_diffusion_equation |
In physics, a photon gas is a gas -like collection of photons , which has many of the same properties as a conventional gas like hydrogen or neon – including pressure, temperature, and entropy. The most common example of a photon gas in equilibrium is black-body radiation .
Photons are part of a family of particles known as bosons , particles that follow Bose–Einstein statistics and have integer spin . A gas of bosons with only one type of particle is uniquely described by three state functions such as the temperature , volume , and the number of particles . However, for a black body, the energy distribution is established by the interaction of the photons with matter, usually the walls of the container, and the number of photons is not conserved. As a result, the chemical potential of the black-body photon gas is zero at thermodynamic equilibrium. The number of state variables needed to describe a black-body state is thus reduced from three to two (e.g. temperature and volume).
In a classical ideal gas with massive particles, the energy of the particles is distributed according to a Maxwell–Boltzmann distribution . This distribution is established as the particles collide with each other, exchanging energy (and momentum) in the process. In a photon gas, there will also be an equilibrium distribution, but photons do not collide with each other (except under very extreme conditions, see two-photon physics ), so the equilibrium distribution must be established by other means. The most common way that an equilibrium distribution is established is by the interaction of the photons with matter. [ 1 ] If the photons are absorbed and emitted by the walls of the system containing the photon gas, and the walls are at a particular temperature, then the equilibrium distribution for the photons will be a black-body distribution at that temperature. [ 2 ]
A very important difference between a generic Bose gas (gas of massive bosons) and a photon gas with a black-body distribution is that the number of photons in the photon gas is not conserved. A photon can be created upon thermal excitation of an atom in the wall into an upper electronic state, followed by the emission of a photon when the atom falls back to a lower energetic state. This type of photon generation is called thermal emission. The reverse process can also take place, resulting in a photon being destroyed and removed from the gas. It can be shown that, as a result of such processes there is no constraint on the number of photons in the system, and the chemical potential of the photons must be zero for black-body radiation.
The thermodynamics of a black-body photon gas may be derived using quantum statistical mechanical arguments , with the radiation field being in equilibrium with the atoms in the wall. The derivation yields the spectral energy density u , which is the energy of the radiation field per unit volume per unit frequency interval, given by: [ 3 ] u ( ν , T ) = 8 π h ν 3 c 3 1 e h ν k T − 1 , {\displaystyle u(\nu ,T)={\frac {8\pi h\nu ^{3}}{c^{3}}}~{\frac {1}{e^{\frac {h\nu }{kT}}-1}},} where h is the Planck constant , c is the speed of light, ν is the frequency, k is the Boltzmann constant, and T is temperature.
Integrating over frequency and multiplying by the volume, V , gives the internal energy of a black-body photon gas: [ 4 ] U = ( 8 π 5 k 4 15 ( h c ) 3 ) V T 4 . {\displaystyle U=\left({\frac {8\pi ^{5}k^{4}}{15(hc)^{3}}}\right)VT^{4}.}
The derivation also yields the (expected) number of photons N : N = ( 16 π k 3 ζ ( 3 ) ( h c ) 3 ) V T 3 , {\displaystyle N=\left({\frac {16\pi k^{3}\zeta (3)}{(hc)^{3}}}\right)VT^{3},} where ζ ( n ) {\displaystyle \zeta (n)} is the Riemann zeta function . Note that for a particular temperature, the particle number N varies with the volume in a fixed manner, adjusting itself to have a constant density of photons.
If we note that the equation of state for an ultra-relativistic quantum gas (which inherently describes photons) is given by U = 3 P V , {\displaystyle U=3PV,} then we can combine the above formulas to produce an equation of state that looks much like that of an ideal gas: P V = ζ ( 4 ) ζ ( 3 ) N k T ≈ 0.9 N k T . {\displaystyle PV={\frac {\zeta (4)}{\zeta (3)}}NkT\approx 0.9\,NkT.}
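The formulas above can be checked numerically. The sketch below (SI units; the temperature and volume are arbitrary illustrative values) evaluates U and N, applies the ultra-relativistic equation of state U = 3PV, and recovers the ratio PV/NkT = ζ(4)/ζ(3) ≈ 0.9:

```python
import math

# CODATA constants (SI units).
h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23
zeta3 = 1.2020569031595943        # Apéry's constant, zeta(3)
zeta4 = math.pi ** 4 / 90         # zeta(4)

T, V = 300.0, 1.0                 # kelvin, cubic metres (illustrative)

# Internal energy and photon number of the black-body photon gas.
U = (8 * math.pi ** 5 * k ** 4 / (15 * (h * c) ** 3)) * V * T ** 4
N = (16 * math.pi * k ** 3 * zeta3 / (h * c) ** 3) * V * T ** 3

# Ultra-relativistic equation of state: U = 3PV.
P = U / (3 * V)

# Should equal zeta(4)/zeta(3), about 0.9004.
ratio = P * V / (N * k * T)
```

The ratio is independent of T and V, which is exactly the content of the ideal-gas-like relation PV ≈ 0.9 NkT.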
The following table summarizes the thermodynamic state functions for a black-body photon gas. Notice that the pressure can be written in the form P = b T 4 {\displaystyle P=bT^{4}} , which is independent of volume ( b is a constant).
Within the table, ℏ {\displaystyle \hbar } refers to the reduced Planck constant , i.e., ℏ = h / 2 π {\displaystyle \hbar =h/2\pi } .
As an example of a thermodynamic process involving a photon gas, consider a cylinder with a movable piston. The interior walls of the cylinder are "black" in order that the temperature of the photons can be maintained at a particular temperature. This means that the space inside the cylinder will contain a blackbody-distributed photon gas. Unlike a massive gas, this gas will exist without the photons being introduced from the outside – the walls will provide the photons for the gas. Suppose the piston is pushed all the way into the cylinder so that there is an extremely small volume. The photon gas inside the volume will press against the piston, moving it outward, and in order for the transformation to be isothermal, a counter force of almost the same value will have to be applied to the piston so that the motion of the piston is very slow. This force will be equal to the pressure times the cross-sectional area ( A ) of the piston. This process can be continued at a constant temperature until the photon gas is at a volume V 0 . Integrating the force over the distance ( x ) traveled yields the total work done to create this photon gas at this volume W = − ∫ 0 x 0 P ( A d x ) , {\displaystyle W=-\int _{0}^{x_{0}}P(A\mathrm {d} x),} where the relationship V = Ax has been used. Defining [ 4 ] b = 8 π 5 k 4 15 c 3 h 3 . {\displaystyle b={\frac {8\pi ^{5}k^{4}}{15c^{3}h^{3}}}.}
The pressure is P ( x ) = b T 4 3 . {\displaystyle P(x)={\frac {bT^{4}}{3}}\,.}
Integrating, the work done is just W = − b T 4 A x 0 3 = − b T 4 V 0 3 . {\displaystyle W=-{\frac {bT^{4}Ax_{0}}{3}}=-{\frac {bT^{4}V_{0}}{3}}.}
The amount of heat that must be added in order to create the gas is Q = U − W = H 0 . {\displaystyle Q=U-W=H_{0}\,.} where H 0 is the enthalpy at the end of the transformation. It is seen that the enthalpy is the amount of energy needed to create the photon gas.
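A short numeric check of this piston example (SI units; T and V₀ are arbitrary illustrative values) confirms that the work equals −U/3 and that the heat Q = U − W equals the enthalpy H₀ = U + PV₀:

```python
import math

# CODATA constants (SI units).
h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23

# Radiation constant b as defined in the text.
b = 8 * math.pi ** 5 * k ** 4 / (15 * c ** 3 * h ** 3)

T, V0 = 500.0, 2.0                # kelvin, cubic metres (illustrative)

U = b * V0 * T ** 4               # internal energy
P = b * T ** 4 / 3                # radiation pressure
W = -b * T ** 4 * V0 / 3          # work done creating the gas (= -U/3)

Q = U - W                         # heat added
H = U + P * V0                    # enthalpy H0
```

Both Q and H come out to 4U/3, illustrating the statement that the enthalpy is the energy needed to create the photon gas.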
In low-dimensional systems, for example in dye-solution-filled optical microcavities whose mirror spacing is in the wavelength range, so that the situation becomes two-dimensional, photon gases with a tunable chemical potential can also be realized. Such a photon gas in many respects behaves like a gas of material particles. One consequence of the tunable chemical potential is that Bose–Einstein condensation of photons is then observed at high phase-space densities. [ 6 ] | https://en.wikipedia.org/wiki/Photon_gas |
Photon polarization is the quantum mechanical description of the classical polarized sinusoidal plane electromagnetic wave . An individual photon can be described as having right or left circular polarization , or a superposition of the two. Equivalently, a photon can be described as having horizontal or vertical linear polarization , or a superposition of the two.
The description of photon polarization contains many of the physical concepts and much of the mathematical machinery of more involved quantum descriptions, such as the quantum mechanics of an electron in a potential well. Polarization is an example of a qubit degree of freedom, which forms a fundamental basis for an understanding of more complicated quantum phenomena. Much of the mathematical machinery of quantum mechanics, such as state vectors , probability amplitudes , unitary operators , and Hermitian operators , emerge naturally from the classical Maxwell's equations in the description. The quantum polarization state vector for the photon, for instance, is identical with the Jones vector , usually used to describe the polarization of a classical wave . Unitary operators emerge from the classical requirement of the conservation of energy of a classical wave propagating through lossless media that alter the polarization state of the wave. Hermitian operators then follow for infinitesimal transformations of a classical polarization state.
Many of the implications of the mathematical machinery are easily verified experimentally. In fact, many of the experiments can be performed with polaroid sunglass lenses.
The connection with quantum mechanics is made through the identification of a minimum packet size, called a photon , for energy in the electromagnetic field. The identification is based on the theories of Planck and the interpretation of those theories by Einstein . The correspondence principle then allows the identification of momentum and angular momentum (called spin ), as well as energy, with the photon.
The wave is linearly polarized (or plane polarized) when the phase angles α x , α y {\displaystyle \alpha _{x}\,,\;\alpha _{y}} are equal , α x = α y = d e f α . {\displaystyle \alpha _{x}=\alpha _{y}\ {\stackrel {\mathrm {def} }{=}}\ \alpha .}
This represents a wave with phase α {\displaystyle \alpha } polarized at an angle θ {\displaystyle \theta } with respect to the x axis.
In this case the Jones vector | ψ ⟩ = ( cos θ exp ( i α x ) sin θ exp ( i α y ) ) {\displaystyle |\psi \rangle ={\begin{pmatrix}\cos \theta \exp \left(i\alpha _{x}\right)\\\sin \theta \exp \left(i\alpha _{y}\right)\end{pmatrix}}} can be written with a single phase: | ψ ⟩ = ( cos θ sin θ ) exp ( i α ) . {\displaystyle |\psi \rangle ={\begin{pmatrix}\cos \theta \\\sin \theta \end{pmatrix}}\exp \left(i\alpha \right).}
The state vectors for linear polarization in x or y are special cases of this state vector.
If unit vectors are defined such that | x ⟩ = d e f ( 1 0 ) {\displaystyle |x\rangle \ {\stackrel {\mathrm {def} }{=}}\ {\begin{pmatrix}1\\0\end{pmatrix}}} and | y ⟩ = d e f ( 0 1 ) {\displaystyle |y\rangle \ {\stackrel {\mathrm {def} }{=}}\ {\begin{pmatrix}0\\1\end{pmatrix}}} then the linearly polarized polarization state can be written in the "x–y basis" as | ψ ⟩ = cos θ exp ( i α ) | x ⟩ + sin θ exp ( i α ) | y ⟩ = ψ x | x ⟩ + ψ y | y ⟩ . {\displaystyle |\psi \rangle =\cos \theta \exp \left(i\alpha \right)|x\rangle +\sin \theta \exp \left(i\alpha \right)|y\rangle =\psi _{x}|x\rangle +\psi _{y}|y\rangle .}
If the phase angles α x {\displaystyle \alpha _{x}} and α y {\displaystyle \alpha _{y}} differ by exactly π / 2 {\displaystyle \pi /2} and the x amplitude equals the y amplitude the wave is circularly polarized . The Jones vector then becomes | ψ ⟩ = 1 2 ( 1 ± i ) exp ( i α x ) {\displaystyle |\psi \rangle ={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1\\\pm i\end{pmatrix}}\exp \left(i\alpha _{x}\right)} where the plus sign indicates left circular polarization and the minus sign indicates right circular polarization. In the case of circular polarization, the electric field vector of constant magnitude rotates in the x–y plane.
If unit vectors are defined such that | R ⟩ = d e f 1 2 ( 1 i ) {\displaystyle |\mathrm {R} \rangle \ {\stackrel {\mathrm {def} }{=}}\ {1 \over {\sqrt {2}}}{\begin{pmatrix}1\\i\end{pmatrix}}} and | L ⟩ = d e f 1 2 ( 1 − i ) {\displaystyle |\mathrm {L} \rangle \ {\stackrel {\mathrm {def} }{=}}\ {1 \over {\sqrt {2}}}{\begin{pmatrix}1\\-i\end{pmatrix}}} then an arbitrary polarization state can be written in the "R–L basis" as | ψ ⟩ = ψ R | R ⟩ + ψ L | L ⟩ {\displaystyle |\psi \rangle =\psi _{\rm {R}}|\mathrm {R} \rangle +\psi _{\rm {L}}|\mathrm {L} \rangle } where ψ R = ⟨ R | ψ ⟩ = 1 2 ( cos θ exp ( i α x ) − i sin θ exp ( i α y ) ) {\displaystyle \psi _{\rm {R}}=\langle \mathrm {R} |\psi \rangle ={\frac {1}{\sqrt {2}}}\left(\cos \theta \exp(i\alpha _{x})-i\sin \theta \exp(i\alpha _{y})\right)} and ψ L = ⟨ L | ψ ⟩ = 1 2 ( cos θ exp ( i α x ) + i sin θ exp ( i α y ) ) . {\displaystyle \psi _{\rm {L}}=\langle \mathrm {L} |\psi \rangle ={\frac {1}{\sqrt {2}}}\left(\cos \theta \exp(i\alpha _{x})+i\sin \theta \exp(i\alpha _{y})\right).}
We can see that 1 = | ψ R | 2 + | ψ L | 2 . {\displaystyle 1=|\psi _{\rm {R}}|^{2}+|\psi _{\rm {L}}|^{2}.}
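The projections onto the R–L basis and the normalisation condition just stated can be verified numerically. In the sketch below (the polarization angle is an arbitrary illustrative value), a linearly polarized Jones vector is decomposed into the circular basis defined above:

```python
import numpy as np

# Linearly polarized Jones vector: equal phases alpha_x = alpha_y = 0.
theta = 0.3                              # illustrative polarization angle
psi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)

# Circular basis vectors |R> and |L> as defined in the text.
R = np.array([1, 1j]) / np.sqrt(2)
L = np.array([1, -1j]) / np.sqrt(2)

# np.vdot conjugates its first argument, so these are <R|psi> and <L|psi>.
psi_R = np.vdot(R, psi)
psi_L = np.vdot(L, psi)

# Normalisation: |psi_R|^2 + |psi_L|^2 must equal 1.
total = abs(psi_R) ** 2 + abs(psi_L) ** 2
```

For a linear polarization the two circular components carry equal weight, |ψ_R|² = |ψ_L|² = 1/2, whatever the angle θ.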
The general case in which the electric field rotates in the x–y plane and has variable magnitude is called elliptical polarization . The state vector is given by | ψ ⟩ = d e f ( ψ x ψ y ) = ( cos θ exp ( i α x ) sin θ exp ( i α y ) ) . {\displaystyle |\psi \rangle \ {\stackrel {\mathrm {def} }{=}}\ {\begin{pmatrix}\psi _{x}\\\psi _{y}\end{pmatrix}}={\begin{pmatrix}\cos \theta \exp \left(i\alpha _{x}\right)\\\sin \theta \exp \left(i\alpha _{y}\right)\end{pmatrix}}.}
To get an understanding of what a polarization state looks like, one can observe the orbit that is made if the polarization state is multiplied by a phase factor of e i ω t {\displaystyle e^{i\omega t}} and then having the real parts of its components interpreted as x and y coordinates respectively. That is: ( x ( t ) y ( t ) ) = ( ℜ ( e i ω t ψ x ) ℜ ( e i ω t ψ y ) ) = ℜ [ e i ω t ( ψ x ψ y ) ] = ℜ ( e i ω t | ψ ⟩ ) . {\displaystyle {\begin{pmatrix}x(t)\\y(t)\end{pmatrix}}={\begin{pmatrix}\Re (e^{i\omega t}\psi _{x})\\\Re (e^{i\omega t}\psi _{y})\end{pmatrix}}=\Re \left[e^{i\omega t}{\begin{pmatrix}\psi _{x}\\\psi _{y}\end{pmatrix}}\right]=\Re \left(e^{i\omega t}|\psi \rangle \right).}
If only the traced out shape and the direction of the rotation of ( x ( t ), y ( t )) is considered when interpreting the polarization state, i.e. only M ( | ψ ⟩ ) = { ( x ( t ) , y ( t ) ) | ∀ t } {\displaystyle M(|\psi \rangle )=\left.\left\{{\Big (}x(t),\,y(t){\Big )}\,\right|\,\forall \,t\right\}} (where x ( t ) and y ( t ) are defined as above) and whether it is overall more right circularly or left circularly polarized (i.e. whether | ψ R | > | ψ L | or vice versa), it can be seen that the physical interpretation will be the same even if the state is multiplied by an arbitrary phase factor, since M ( e i α | ψ ⟩ ) = M ( | ψ ⟩ ) , α ∈ R {\displaystyle M(e^{i\alpha }|\psi \rangle )=M(|\psi \rangle ),\ \alpha \in \mathbb {R} } and the direction of rotation will remain the same. In other words, there is no physical difference between two polarization states | ψ ⟩ {\displaystyle |\psi \rangle } and e i α | ψ ⟩ {\displaystyle e^{i\alpha }|\psi \rangle } , between which only a phase factor differs.
It can be seen that for a linearly polarized state, M will be a line in the xy plane, with length 2 and its middle in the origin, and whose slope equals tan( θ ) . For a circularly polarized state, M will be a circle with radius 1/ √ 2 and with the middle in the origin.
The energy per unit volume in classical electromagnetic fields is (in cgs units): E c = 1 8 π [ E 2 ( r , t ) + B 2 ( r , t ) ] . {\displaystyle {\mathcal {E}}_{c}={\frac {1}{8\pi }}\left[\mathbf {E} ^{2}(\mathbf {r} ,t)+\mathbf {B} ^{2}(\mathbf {r} ,t)\right].}
For a plane wave, this becomes: E c = ∣ E ∣ 2 8 π {\displaystyle {\mathcal {E}}_{c}={\frac {\mid \mathbf {E} \mid ^{2}}{8\pi }}} where the energy has been averaged over a wavelength of the wave.
The fraction of energy in the x component of the plane wave is f x = | E | 2 cos 2 θ | E | 2 = ψ x ∗ ψ x = cos 2 θ {\displaystyle f_{x}={\frac {|\mathbf {E} |^{2}\cos ^{2}\theta }{\vert \mathbf {E} \vert ^{2}}}=\psi _{x}^{*}\psi _{x}=\cos ^{2}\theta } with a similar expression for the y component resulting in f y = sin 2 θ {\displaystyle f_{y}=\sin ^{2}\theta } .
The fraction in both components is ψ x ∗ ψ x + ψ y ∗ ψ y = ⟨ ψ | ψ ⟩ = 1. {\displaystyle \psi _{x}^{*}\psi _{x}+\psi _{y}^{*}\psi _{y}=\langle \psi |\psi \rangle =1.}
The momentum density is given by the Poynting vector P = 1 4 π c E ( r , t ) × B ( r , t ) . {\displaystyle {\boldsymbol {\mathcal {P}}}={1 \over 4\pi c}\mathbf {E} (\mathbf {r} ,t)\times \mathbf {B} (\mathbf {r} ,t).}
For a sinusoidal plane wave traveling in the z direction, the momentum is in the z direction and is related to the energy density: P z c = E c . {\displaystyle {\mathcal {P}}_{z}c={\mathcal {E}}_{c}.}
The momentum density has been averaged over a wavelength.
Electromagnetic waves can have both orbital and spin angular momentum. [ 1 ] The total angular momentum density is L = r × P = 1 4 π c r × [ E ( r , t ) × B ( r , t ) ] . {\displaystyle {\boldsymbol {\mathcal {L}}}=\mathbf {r} \times {\boldsymbol {\mathcal {P}}}={1 \over 4\pi c}\mathbf {r} \times \left[\mathbf {E} (\mathbf {r} ,t)\times \mathbf {B} (\mathbf {r} ,t)\right].}
For a sinusoidal plane wave propagating along z {\displaystyle z} axis the orbital angular momentum density vanishes. The spin angular momentum density is in the z {\displaystyle z} direction and is given by L = | E | 2 8 π ω ( | ⟨ R | ψ ⟩ | 2 − | ⟨ L | ψ ⟩ | 2 ) = 1 ω E c ( | ψ R | 2 − | ψ L | 2 ) {\displaystyle {\mathcal {L}}={{\vert \mathbf {E} \vert ^{2}} \over {8\pi \omega }}\left(\left\vert \langle \mathrm {R} |\psi \rangle \right\vert ^{2}-\left\vert \langle \mathrm {L} |\psi \rangle \right\vert ^{2}\right)={\frac {1}{\omega }}{\mathcal {E}}_{c}\left(\vert \psi _{\rm {R}}\vert ^{2}-\vert \psi _{\rm {L}}\vert ^{2}\right)} where again the density is averaged over a wavelength.
A linear filter transmits one component of a plane wave and absorbs the perpendicular component. In that case, if the filter is polarized in the x direction, the fraction of energy passing through the filter is f x = ψ x ∗ ψ x = cos 2 θ . {\displaystyle f_{x}=\psi _{x}^{*}\psi _{x}=\cos ^{2}\theta .\,}
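This is Malus's law expressed through the state-vector components; it can be evaluated directly (the angle below is an arbitrary illustrative value):

```python
import numpy as np

# Fraction of energy an x-oriented linear filter transmits from a wave
# linearly polarized at angle theta: f_x = psi_x* psi_x = cos^2(theta).
theta = np.pi / 3
psi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)

f_x = (np.conj(psi[0]) * psi[0]).real   # transmitted energy fraction
```

For θ = 60° the filter passes cos²(60°) = 1/4 of the incident energy.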
An ideal birefringent crystal transforms the polarization state of an electromagnetic wave without loss of wave energy. Birefringent crystals therefore provide an ideal test bed for examining the conservative transformation of polarization states. Even though this treatment is still purely classical, standard quantum tools such as unitary and Hermitian operators that evolve the state in time naturally emerge.
A birefringent crystal is a material that has an optic axis with the property that the light has a different index of refraction for light polarized parallel to the axis than it has for light polarized perpendicular to the axis. Light polarized parallel to the axis is called " extraordinary rays " or " extraordinary photons ", while light polarized perpendicular to the axis is called " ordinary rays " or " ordinary photons ". If a linearly polarized wave impinges on the crystal, the extraordinary component of the wave will emerge from the crystal with a different phase than the ordinary component. In mathematical language, if the incident wave is linearly polarized at an angle θ {\displaystyle \theta } with respect to the optic axis, the incident state vector can be written | ψ ⟩ = ( cos θ sin θ ) {\displaystyle |\psi \rangle ={\begin{pmatrix}\cos \theta \\\sin \theta \end{pmatrix}}} and the state vector for the emerging wave can be written | ψ ′ ⟩ = ( cos θ exp ( i α x ) sin θ exp ( i α y ) ) = ( exp ( i α x ) 0 0 exp ( i α y ) ) ( cos θ sin θ ) = d e f U ^ | ψ ⟩ . {\displaystyle |\psi '\rangle ={\begin{pmatrix}\cos \theta \exp \left(i\alpha _{x}\right)\\\sin \theta \exp \left(i\alpha _{y}\right)\end{pmatrix}}={\begin{pmatrix}\exp \left(i\alpha _{x}\right)&0\\0&\exp \left(i\alpha _{y}\right)\end{pmatrix}}{\begin{pmatrix}\cos \theta \\\sin \theta \end{pmatrix}}\ {\stackrel {\mathrm {def} }{=}}\ {\hat {U}}|\psi \rangle .}
While the initial state was linearly polarized, the final state is elliptically polarized. The birefringent crystal alters the character of the polarization.
The initial polarization state is transformed into the final state with the operator U. The dual of the final state is given by ⟨ ψ ′ | = ⟨ ψ | U ^ † {\displaystyle \langle \psi '|=\langle \psi |{\hat {U}}^{\dagger }} where U † {\displaystyle U^{\dagger }} is the adjoint of U, the complex conjugate transpose of the matrix.
The fraction of energy that emerges from the crystal is ⟨ ψ ′ | ψ ′ ⟩ = ⟨ ψ | U ^ † U ^ | ψ ⟩ = ⟨ ψ | ψ ⟩ = 1. {\displaystyle \langle \psi '|\psi '\rangle =\langle \psi |{\hat {U}}^{\dagger }{\hat {U}}|\psi \rangle =\langle \psi |\psi \rangle =1.}
In this ideal case, all the energy impinging on the crystal emerges from the crystal. An operator U with the property that U ^ † U ^ = I , {\displaystyle {\hat {U}}^{\dagger }{\hat {U}}=I,} where I is the identity operator, is called a unitary operator . The unitary property is necessary to ensure energy conservation in state transformations.
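The unitarity of the diagonal birefringence operator, and the resulting energy conservation, can be verified numerically (the retardance phases and the polarization angle are arbitrary illustrative values):

```python
import numpy as np

# Birefringence operator U = diag(exp(i*alpha_x), exp(i*alpha_y)).
ax, ay = 0.7, 1.9                       # illustrative phase retardances
U = np.diag([np.exp(1j * ax), np.exp(1j * ay)])

# Unitarity check: U† U should be the identity.
is_unitary = np.allclose(U.conj().T @ U, np.eye(2))

# Energy conservation: the norm of any polarization state is preserved.
theta = 0.5
psi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
norm_after = np.vdot(U @ psi, U @ psi).real
```

Any diagonal matrix of pure phases is unitary, which is why an ideal (lossless) birefringent crystal can reshape the polarization state without absorbing energy.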
If the crystal is very thin, the final state will be only slightly different from the initial state. The unitary operator will be close to the identity operator. We can define the operator H by U ^ ≈ I + i H ^ {\displaystyle {\hat {U}}\approx I+i{\hat {H}}} and the adjoint by U ^ † ≈ I − i H ^ † . {\displaystyle {\hat {U}}^{\dagger }\approx I-i{\hat {H}}^{\dagger }.}
Energy conservation then requires
I = U ^ † U ^ ≈ ( I − i H ^ † ) ( I + i H ^ ) ≈ I − i H ^ † + i H ^ . {\displaystyle I={\hat {U}}^{\dagger }{\hat {U}}\approx \left(I-i{\hat {H}}^{\dagger }\right)\left(I+i{\hat {H}}\right)\approx I-i{\hat {H}}^{\dagger }+i{\hat {H}}.}
This requires that H ^ = H ^ † . {\displaystyle {\hat {H}}={\hat {H}}^{\dagger }.}
Operators like this that are equal to their adjoints are called Hermitian or self-adjoint.
The infinitesimal transition of the polarization state is | ψ ′ ⟩ − | ψ ⟩ = i H ^ | ψ ⟩ . {\displaystyle |\psi '\rangle -|\psi \rangle =i{\hat {H}}|\psi \rangle .}
Thus, energy conservation requires that infinitesimal transformations of a polarization state occur through the action of a Hermitian operator.
The treatment to this point has been classical . It is a testament, however, to the generality of Maxwell's equations for electrodynamics that the treatment can be made quantum mechanical with only a reinterpretation of classical quantities. The reinterpretation is based on the theories of Max Planck and the interpretation by Albert Einstein of those theories and of other experiments.
Einstein's conclusion from early experiments on the photoelectric effect is that electromagnetic radiation is composed of irreducible packets of energy, known as photons . The energy of each packet is related to the angular frequency of the wave by the relation ϵ = ℏ ω {\displaystyle \epsilon =\hbar \omega } where ℏ {\displaystyle \hbar } is an experimentally determined quantity known as the reduced Planck constant . If there are N {\displaystyle N} photons in a box of volume V {\displaystyle V} , the energy in the electromagnetic field is N ℏ ω {\displaystyle N\hbar \omega } and the energy density is N ℏ ω V {\displaystyle {N\hbar \omega \over V}}
The photon energy can be related to classical fields through the correspondence principle that states that for a large number of photons, the quantum and classical treatments must agree. Thus, for very large N {\displaystyle N} , the quantum energy density must be the same as the classical energy density N ℏ ω V = E c = | E | 2 8 π . {\displaystyle {N\hbar \omega \over V}={\mathcal {E}}_{c}={\frac {\vert \mathbf {E} \vert ^{2}}{8\pi }}.}
The number of photons in the box is then N = V 8 π ℏ ω | E | 2 . {\displaystyle N={\frac {V}{8\pi \hbar \omega }}\vert \mathbf {E} \vert ^{2}.}
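As a rough numerical illustration of this formula (Gaussian units; the field amplitude and box volume are assumed values chosen only for the example), even a modest field corresponds to a very large photon number, which is why the classical limit in the correspondence argument is reached so easily:

```python
import math

hbar = 1.054571817e-27    # reduced Planck constant, erg*s (Gaussian/CGS units)
c = 2.99792458e10         # speed of light, cm/s

lam = 500e-7              # 500 nm wavelength, expressed in cm
omega = 2 * math.pi * c / lam

E_field = 0.01            # assumed field amplitude, statvolt/cm (illustrative)
V = 1.0                   # assumed box volume, cm^3 (illustrative)

energy_density = E_field**2 / (8 * math.pi)     # classical energy density, erg/cm^3
N = energy_density * V / (hbar * omega)         # photon number from N = V|E|^2 / (8*pi*hbar*omega)
print(f"N ~ {N:.2e}")                           # on the order of 10^6 photons
```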
The correspondence principle also determines the momentum and angular momentum of the photon. For momentum P z = N ℏ ω c V = N ℏ k z V {\displaystyle {\mathcal {P}}_{z}={N\hbar \omega \over cV}={N\hbar k_{z} \over V}} where k z {\displaystyle k_{z}} is the wave number. This implies that the momentum of a photon is p z = ℏ k z . {\displaystyle p_{z}=\hbar k_{z}.\,}
Similarly for the spin angular momentum L = 1 ω E c ( | ψ R | 2 − | ψ L | 2 ) = N ℏ V ( | ψ R | 2 − | ψ L | 2 ) {\displaystyle {\mathcal {L}}={\frac {1}{\omega }}{\mathcal {E}}_{c}\left(\vert \psi _{\rm {R}}\vert ^{2}-\vert \psi _{\rm {L}}\vert ^{2}\right)={\frac {N\hbar }{V}}\left(\vert \psi _{\rm {R}}\vert ^{2}-\vert \psi _{\rm {L}}\vert ^{2}\right)} where E c {\displaystyle {\mathcal {E}}_{c}} is the classical energy density. This implies that the spin angular momentum of the photon is l z = ℏ ( | ψ R | 2 − | ψ L | 2 ) . {\displaystyle l_{z}=\hbar \left(\vert \psi _{\rm {R}}\vert ^{2}-\vert \psi _{\rm {L}}\vert ^{2}\right).} The quantum interpretation of this expression is that the photon has a probability ∣ ψ R ∣ 2 {\displaystyle \mid \psi _{\rm {R}}\mid ^{2}} of having a spin angular momentum of ℏ {\displaystyle \hbar } and a probability ∣ ψ L ∣ 2 {\displaystyle \mid \psi _{\rm {L}}\mid ^{2}} of having a spin angular momentum of − ℏ {\displaystyle -\hbar } . We can therefore think of the spin angular momentum of the photon as being quantized, just like the energy. The angular momentum of classical light has been verified experimentally. [ 2 ] A photon that is linearly polarized (plane polarized) is in a superposition of equal amounts of the left-handed and right-handed states. Upon absorption by an electronic state, the angular momentum is "measured" and this superposition collapses into either the right-handed or the left-handed state, corresponding to a raising or lowering of the angular momentum of the absorbing electronic state, respectively.
The spin of the photon is defined as the coefficient of ℏ {\displaystyle \hbar } in the spin angular momentum calculation. A photon has spin 1 if it is in the | R ⟩ {\displaystyle |R\rangle } state and −1 if it is in the | L ⟩ {\displaystyle |L\rangle } state. The spin operator is defined as the outer product S ^ = d e f | R ⟩ ⟨ R | − | L ⟩ ⟨ L | = ( 0 − i i 0 ) . {\displaystyle {\hat {S}}\ {\stackrel {\mathrm {def} }{=}}\ |\mathrm {R} \rangle \langle \mathrm {R} |-|\mathrm {L} \rangle \langle \mathrm {L} |={\begin{pmatrix}0&-i\\i&0\end{pmatrix}}.}
The eigenvectors of the spin operator are | R ⟩ {\displaystyle |\mathrm {R} \rangle } and | L ⟩ {\displaystyle |\mathrm {L} \rangle } with eigenvalues 1 and −1, respectively.
The expected value of a spin measurement on a photon is then ⟨ ψ | S ^ | ψ ⟩ = | ψ R | 2 − | ψ L | 2 . {\displaystyle \langle \psi |{\hat {S}}|\psi \rangle =\vert \psi _{\rm {R}}\vert ^{2}-\vert \psi _{\rm {L}}\vert ^{2}.}
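These statements can be checked directly with a short NumPy sketch. The 2×2 matrices act in the linear-polarization basis, as in the text; the amplitudes a_R and a_L below are arbitrary example values:

```python
import numpy as np

# Spin operator |R><R| - |L><L| expressed in the linear (x, y) basis.
S = np.array([[0, -1j],
              [1j, 0]])

# Its eigenvalues are -1 and +1, the allowed spin measurement outcomes.
vals, vecs = np.linalg.eigh(S)
assert np.allclose(vals, [-1.0, 1.0])

# Circular basis states written in the linear basis.
R = np.array([1, 1j]) / np.sqrt(2)    # right circular, eigenvalue +1
L = np.array([1, -1j]) / np.sqrt(2)   # left circular, eigenvalue -1
assert np.allclose(S @ R, R)
assert np.allclose(S @ L, -L)

# Expectation value for a general state psi = a_R |R> + a_L |L>.
a_R, a_L = 0.8, 0.6
psi = a_R * R + a_L * L
expect = np.vdot(psi, S @ psi).real
assert np.isclose(expect, a_R**2 - a_L**2)   # equals |psi_R|^2 - |psi_L|^2
```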
An operator S has been associated with an observable quantity, the spin angular momentum. The eigenvalues of the operator are the allowed observable values. This has been demonstrated for spin angular momentum, but it is in general true for any observable quantity.
We can write the circularly polarized states as | s ⟩ {\displaystyle |s\rangle } where s = 1 for | R ⟩ {\displaystyle |\mathrm {R} \rangle } and s = −1 for | L ⟩ {\displaystyle |\mathrm {L} \rangle } . An arbitrary state can be written | ψ ⟩ = ∑ s = − 1 , 1 a s exp ( i α x − i s θ ) | s ⟩ {\displaystyle |\psi \rangle =\sum _{s=-1,1}a_{s}\exp \left(i\alpha _{x}-is\theta \right)|s\rangle } where α 1 {\displaystyle \alpha _{1}} and α − 1 {\displaystyle \alpha _{-1}} are phase angles, θ is the angle by which the frame of reference is rotated, and ∑ s = − 1 , 1 | a s | 2 = 1. {\displaystyle \sum _{s=-1,1}\vert a_{s}\vert ^{2}=1.}
When the state is written in spin notation, the spin operator can be written S ^ d → i ∂ ∂ θ {\displaystyle {\hat {S}}_{d}\rightarrow i{\partial \over \partial \theta }} S ^ d † → − i ∂ ∂ θ . {\displaystyle {\hat {S}}_{d}^{\dagger }\rightarrow -i{\partial \over \partial \theta }.}
The eigenvectors of the differential spin operator are exp ( i α x − i s θ ) | s ⟩ . {\displaystyle \exp \left(i\alpha _{x}-is\theta \right)|s\rangle .}
To see this, note S ^ d exp ( i α x − i s θ ) | s ⟩ → i ∂ ∂ θ exp ( i α x − i s θ ) | s ⟩ = s [ exp ( i α x − i s θ ) | s ⟩ ] . {\displaystyle {\hat {S}}_{d}\exp \left(i\alpha _{x}-is\theta \right)|s\rangle \rightarrow i{\partial \over \partial \theta }\exp \left(i\alpha _{x}-is\theta \right)|s\rangle =s\left[\exp \left(i\alpha _{x}-is\theta \right)|s\rangle \right].}
The spin angular momentum operator is l ^ z = ℏ S ^ d . {\displaystyle {\hat {l}}_{z}=\hbar {\hat {S}}_{d}.}
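The eigenvalue relation for the differential form of the spin operator can be verified symbolically. The sketch below uses SymPy and treats the phase α and the spin label s as free symbols:

```python
import sympy as sp

theta, alpha = sp.symbols('theta alpha', real=True)
s = sp.symbols('s', integer=True)

# Candidate eigenfunction of the differential spin operator S_d = i d/dtheta.
f = sp.exp(sp.I * alpha - sp.I * s * theta)

# Applying S_d returns s times the same function, so f is an eigenfunction
# with eigenvalue s (s = +1 or -1 for the circular polarization states).
result = sp.I * sp.diff(f, theta)
assert sp.simplify(result - s * f) == 0
```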
There are two ways in which probability can be applied to the behavior of photons: probability can be used to calculate the probable number of photons in a particular state, or probability can be used to calculate the likelihood that a single photon is in a particular state. The former interpretation violates energy conservation. The latter interpretation is the viable, if nonintuitive, option. Dirac explains this in the context of the double-slit experiment :
Some time before the discovery of quantum mechanics people realized that the connection between light waves and photons must be of a statistical character. What they did not clearly realize, however, was that the wave function gives information about the probability of one photon being in a particular place and not the probable number of photons in that place. The importance of the distinction can be made clear in the following way. Suppose we have a beam of light consisting of a large number of photons split up into two components of equal intensity. On the assumption that the beam is connected with the probable number of photons in it, we should have half the total number going into each component. If the two components are now made to interfere, we should require a photon in one component to be able to interfere with one in the other. Sometimes these two photons would have to annihilate one another and other times they would have to produce four photons. This would contradict the conservation of energy. The new theory, which connects the wave function with probabilities for one photon gets over the difficulty by making each photon go partly into each of the two components. Each photon then interferes only with itself. Interference between two different photons never occurs. — Paul Dirac , The Principles of Quantum Mechanics, 1930, Chapter 1
The probability for a photon to be in a particular polarization state depends on the fields as calculated by the classical Maxwell's equations. The polarization state of the photon is proportional to the field. The probability itself is quadratic in the fields and consequently is also quadratic in the quantum state of polarization. In quantum mechanics, therefore, the state or probability amplitude contains the basic probability information. In general, the rules for combining probability amplitudes look very much like the classical rules for composition of probabilities (see Baym, Chapter 1).
For any two operators A ^ {\displaystyle {\hat {A}}} and B ^ {\displaystyle {\hat {B}}} and any state x, the following inequality, a consequence of the Cauchy–Schwarz inequality , holds: 1 4 | ⟨ ( A ^ B ^ − B ^ A ^ ) x | x ⟩ | 2 ≤ ‖ A ^ x ‖ 2 ‖ B ^ x ‖ 2 . {\displaystyle {\frac {1}{4}}\left|\langle ({\hat {A}}{\hat {B}}-{\hat {B}}{\hat {A}})x|x\rangle \right|^{2}\leq \left\|{\hat {A}}x\right\|^{2}\left\|{\hat {B}}x\right\|^{2}.}
If B A ψ and A B ψ are defined, then by subtracting the means and re-inserting in the above formula, we deduce Δ ψ A ^ Δ ψ B ^ ≥ 1 2 | ⟨ [ A ^ , B ^ ] ⟩ ψ | {\displaystyle \Delta _{\psi }{\hat {A}}\,\Delta _{\psi }{\hat {B}}\geq {\frac {1}{2}}\left|\left\langle \left[{\hat {A}},{\hat {B}}\right]\right\rangle _{\psi }\right|} where ⟨ X ^ ⟩ ψ = ⟨ ψ | X ^ | ψ ⟩ {\displaystyle \left\langle {\hat {X}}\right\rangle _{\psi }=\left\langle \psi \right|{\hat {X}}\left|\psi \right\rangle } is the operator mean of observable X in the system state ψ and Δ ψ X ^ = ⟨ X ^ 2 ⟩ ψ − ⟨ X ^ ⟩ ψ 2 . {\displaystyle \Delta _{\psi }{\hat {X}}={\sqrt {\langle {\hat {X}}^{2}\rangle _{\psi }-\langle {\hat {X}}\rangle _{\psi }^{2}}}.}
Here [ A ^ , B ^ ] = d e f A ^ B ^ − B ^ A ^ {\displaystyle \left[{\hat {A}},{\hat {B}}\right]\ {\stackrel {\mathrm {def} }{=}}\ {\hat {A}}{\hat {B}}-{\hat {B}}{\hat {A}}} is called the commutator of A and B .
This is a purely mathematical result. No reference has been made to any physical quantity or principle. It simply states that the uncertainty of one operator times the uncertainty of another operator has a lower bound.
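Because the statement is purely mathematical, it can be checked for arbitrary Hermitian operators and states. In the sketch below the operators and the state are randomly generated, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    """A random n x n Hermitian matrix (a stand-in for an observable)."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def uncertainty(op, psi):
    """Standard deviation of the observable `op` in state `psi`."""
    mean = np.vdot(psi, op @ psi).real
    mean_sq = np.vdot(psi, op @ op @ psi).real
    return np.sqrt(max(mean_sq - mean**2, 0.0))

n = 4
A, B = random_hermitian(n), random_hermitian(n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)                      # normalize the state

# Robertson uncertainty relation: dA * dB >= |<[A, B]>| / 2.
lhs = uncertainty(A, psi) * uncertainty(B, psi)
comm = A @ B - B @ A
rhs = 0.5 * abs(np.vdot(psi, comm @ psi))
assert lhs >= rhs - 1e-12
```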
The connection to physics can be made if we identify the operators with physical operators such as the angular momentum and the polarization angle. We have then Δ ψ L ^ z Δ ψ θ ≥ ℏ 2 , {\displaystyle \Delta _{\psi }{\hat {L}}_{z}\,\Delta _{\psi }{\theta }\geq {\frac {\hbar }{2}},} which means that angular momentum and the polarization angle cannot be measured simultaneously with infinite accuracy. (The polarization angle can be measured by checking whether the photon can pass through a polarizing filter oriented at a particular angle, or a polarizing beam splitter . This results in a yes/no answer that, if the photon was plane-polarized at some other angle, depends on the difference between the two angles.)
Much of the mathematical apparatus of quantum mechanics appears in the classical description of a polarized sinusoidal electromagnetic wave. The Jones vector for a classical wave, for instance, is identical with the quantum polarization state vector for a photon. The right and left circular components of the Jones vector can be interpreted as probability amplitudes of spin states of the photon. Energy conservation requires that the states be transformed with a unitary operation. This implies that infinitesimal transformations are transformed with a Hermitian operator. These conclusions are a natural consequence of the structure of Maxwell's equations for classical waves.
Quantum mechanics enters the picture when observed quantities are measured and found to be discrete rather than continuous. The allowed observable values are determined by the eigenvalues of the operators associated with the observable. In the case of angular momentum, for instance, the allowed observable values are the eigenvalues of the spin operator.
These concepts have emerged naturally from Maxwell's equations and Planck's and Einstein's theories. They have been found to be true for many other physical systems. In fact, the typical program is to assume the concepts of this section and then to infer the unknown dynamics of a physical system. This was done, for instance, with the dynamics of electrons. In that case, working back from the principles in this section, the quantum dynamics of particles were inferred, leading to Schrödinger's equation , a departure from Newtonian mechanics . The solution of this equation for atoms led to the explanation of the Balmer series for atomic spectra and consequently formed a basis for all of atomic physics and chemistry.
This is not the only occasion in which Maxwell's equations have forced a restructuring of Newtonian mechanics. Maxwell's equations are relativistically consistent. Special relativity resulted from attempts to make classical mechanics consistent with Maxwell's equations (see, for example, Moving magnet and conductor problem ).
The photon underproduction crisis is a cosmological discussion concerning the purported deficit between observed photons and predicted photons. [ 1 ] [ 2 ]
The deficit, or underproduction crisis, is a theoretical problem arising from comparing observations of ultraviolet light emitted from known populations of galaxies and quasars to theoretical predictions of the amount of ultraviolet light required to reproduce the observed distribution of hydrogen gas in the local universe in a cosmological simulation. The distribution of hydrogen gas was inferred using Lyman-alpha forest observations from the Hubble Space Telescope 's Cosmic Origins Spectrograph . [ 3 ] The amount of light from galaxies and quasars can be estimated from its effect on the distribution of hydrogen and helium in the regions between galaxies. Highly energetic ultraviolet photons can convert electrically neutral hydrogen gas into ionized gas.
A team led by Juna Kollmeier reported an unexpected deficit of roughly a factor of five between the ionizing light from known sources and the amount implied by actual observations of intergalactic hydrogen. Kollmeier and her team wrote in their scientific report, “We examine the statistics of the low-redshift Lyman-alpha forest from smoothed particle hydrodynamic simulations in light of recent improvements in the estimated evolution of the cosmic ultraviolet background (UVB) and recent observations from the Cosmic Origins Spectrograph (COS). We find that the value of the metagalactic photoionization rate required by our simulations to match the observed properties of the low-redshift Lyman-alpha forest is a factor of 5 larger than the value predicted by state of the art models for the evolution of this quantity.” [ 4 ] Cosmological simulations start at very high cosmological redshift z (such as z = 100 or larger) and are evolved to z = 0.
According to Benjamin D. Oppenheimer, who is one of the report's coauthors, “The simulations fit the data beautifully in the early universe, and they fit the local data beautifully if we're allowed to assume that this extra light is really there. It's possible the simulations do not reflect reality, which by itself would be a surprise, because intergalactic hydrogen is the component of the Universe that we think we understand the best.” [ 1 ] Kollmeier and her team state that "... either conventional sources of ionizing photons (galaxies and quasars) must contribute considerably more than current observational estimates or our theoretical understanding of the low-redshift universe is in need of substantial revision.” [ 4 ] A similar study, led by Michael Shull, found a deficit of only about a factor of two, rather than the factor of five previously claimed. [ 5 ]
A potential resolution to the photon underproduction crisis is presented by a series of recent papers. Khaire & Srianand [ 6 ] showed that a metagalactic photoionization rate that is two to five times larger can be easily obtained using updated quasar and galaxy observations. Recent observations of quasars indicate that the quasar contribution to ultraviolet photons is twice that of previous estimates. The revised galaxy contribution is also three times higher. Furthermore, the Kollmeier GADGET-2 simulations did not include heating from active galactic nuclei (AGN) feedback. Including AGN feedback was shown to be an important element for heating in the low-redshift intergalactic medium (IGM) (Gurvich, Burkhart, & Bird 2016 [ 7 ] ). This implies that the low-redshift COS data can be used to calibrate AGN feedback models in cosmological simulations.
A photonic crystal is an optical nanostructure in which the refractive index changes periodically. This affects the propagation of light in the same way that the structure of natural crystals gives rise to X-ray diffraction and that the atomic lattices (crystal structure) of semiconductors affect their conductivity of electrons . Photonic crystals occur in nature in the form of structural coloration and animal reflectors , and, as artificially produced, promise to be useful in a range of applications.
Photonic crystals can be fabricated for one, two, or three dimensions. One-dimensional photonic crystals can be made of thin film layers deposited on each other. Two-dimensional ones can be made by photolithography , or by drilling holes in a suitable substrate. Fabrication methods for three-dimensional ones include drilling under different angles, stacking multiple 2-D layers on top of each other, direct laser writing , or, for example, instigating self-assembly of spheres in a matrix and dissolving the spheres.
Photonic crystals can, in principle, find uses wherever light must be manipulated. For example, dielectric mirrors are one-dimensional photonic crystals which can produce ultra-high reflectivity mirrors at a specified wavelength. Two-dimensional photonic crystals called photonic-crystal fibers are used for fiber-optic communication , among other applications. Three-dimensional crystals may one day be used in optical computers , and could lead to more efficient photovoltaic cells . [ 3 ]
Although the energy of light (and all electromagnetic radiation ) is quantized in units called photons , the analysis of photonic crystals requires only classical physics . "Photonic" in the name is a reference to photonics , a modern designation for the study of light ( optics ) and optical engineering. Indeed, the first research into what we now call photonic crystals may have been as early as 1887 when the English physicist Lord Rayleigh experimented with periodic multi-layer dielectric stacks, showing they can effect a photonic band-gap in one dimension. Research interest grew with work in 1987 by Eli Yablonovitch and Sajeev John on periodic optical structures with more than one dimension—now called photonic crystals.
Photonic crystals are composed of periodic dielectric , metallo-dielectric, or even superconductor microstructures or nanostructures that affect electromagnetic wave propagation in the same way that the periodic potential in a semiconductor crystal affects the propagation of electrons , determining allowed and forbidden electronic energy bands . Photonic crystals contain regularly repeating regions of high and low refractive index . Light waves may propagate through this structure, or propagation may be disallowed, depending on their wavelength. Wavelengths that may propagate in a given direction are called modes , and the ranges of wavelengths which propagate are called bands . Disallowed bands of wavelengths are called photonic band gaps . This gives rise to distinct optical phenomena, such as inhibition of spontaneous emission , [ 4 ] high-reflecting omni-directional mirrors, and low-loss waveguiding . The bandgap of photonic crystals can be understood as the destructive interference of multiple reflections of light propagating in the crystal at each interface between layers of high- and low-refractive-index regions, akin to the bandgaps of electrons in solids.
There are two strategies for opening up a complete photonic band gap. The first is to increase the refractive-index contrast, so that the band gap in each direction becomes wider; the second is to make the Brillouin zone closer to a sphere. [ 5 ] However, the former is limited by the available technologies and materials, and the latter is restricted by the crystallographic restriction theorem . For this reason, the photonic crystals with a complete band gap demonstrated to date have a face-centered cubic lattice, which has the most spherical Brillouin zone, and are made of high-refractive-index semiconductor materials. Another approach is to exploit quasicrystalline structures, which are not subject to this crystallographic limit. A complete photonic bandgap was reported for low-index polymer quasicrystalline samples manufactured by 3D printing. [ 6 ]
The periodicity of the photonic crystal structure must be around or greater than half the wavelength (in the medium) of the light waves in order for interference effects to be exhibited. Visible light ranges in wavelength from about 400 nm (violet) to about 700 nm (red), and the corresponding wavelength inside a material is obtained by dividing by the average index of refraction . The repeating regions of high and low dielectric constant must, therefore, be fabricated at this scale. In one dimension, this is routinely accomplished using the techniques of thin-film deposition .
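A back-of-the-envelope sketch of this scale requirement (the average index n_avg = 2.0 is an assumed, illustrative value for a semiconductor/air composite):

```python
# Required lattice period ~ half the wavelength inside the medium:
# period ~ (lambda_vacuum / n_avg) / 2
n_avg = 2.0   # assumed average refractive index (illustrative)
periods = {}
for lam_vac, color in [(400e-9, "violet"), (700e-9, "red")]:
    lam_medium = lam_vac / n_avg          # wavelength inside the medium
    periods[color] = lam_medium / 2       # periodicity scale to fabricate
    print(f"{color}: period ~ {periods[color] * 1e9:.0f} nm")
```

For visible light this lands at roughly 100–175 nm, which is why nanofabrication techniques are needed in more than one dimension.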
Photonic crystals have been studied in one form or another since 1887, but no one used the term photonic crystal until over 100 years later—after Eli Yablonovitch and Sajeev John published two milestone papers on photonic crystals in 1987. [ 4 ] [ 7 ] The early history is well documented in an account written when the topic was identified as one of the landmark developments in physics by the American Physical Society . [ 8 ]
Before 1987, one-dimensional photonic crystals in the form of periodic multi-layer dielectric stacks (such as the Bragg mirror ) were studied extensively. Lord Rayleigh started their study in 1887, [ 9 ] by showing that such systems have a one-dimensional photonic band-gap, a spectral range of large reflectivity, known as a stop-band . Today, such structures are used in a diverse range of applications—from reflective coatings to enhancing LED efficiency to highly reflective mirrors in certain laser cavities (see, for example, VCSEL ). The pass-bands and stop-bands in photonic crystals were first reduced to practice by Melvin M. Weiner [ 10 ] who called those crystals "discrete phase-ordered media." Weiner achieved those results by extending Darwin's [ 11 ] dynamical theory for x-ray Bragg diffraction to arbitrary wavelengths, angles of incidence, and cases where the incident wavefront at a lattice plane is scattered appreciably in the forward-scattered direction. A detailed theoretical study of one-dimensional optical structures was performed by Vladimir P. Bykov , [ 12 ] who was the first to investigate the effect of a photonic band-gap on the spontaneous emission from atoms and molecules embedded within the photonic structure. Bykov also speculated as to what could happen if two- or three-dimensional periodic optical structures were used. [ 13 ] The concept of three-dimensional photonic crystals was then discussed by Ohtaka in 1979, [ 14 ] who also developed a formalism for the calculation of the photonic band structure. However, these ideas did not take off until after the publication of two milestone papers in 1987 by Yablonovitch and John. Both these papers concerned high-dimensional periodic optical structures, i.e., photonic crystals. Yablonovitch's main goal was to engineer photonic density of states to control the spontaneous emission of materials embedded in the photonic crystal. 
John's idea was to use photonic crystals to affect localisation and control of light.
After 1987, the number of research papers concerning photonic crystals began to grow exponentially. However, due to the difficulty of fabricating these structures at optical scales (see Fabrication challenges ), early studies were either theoretical or in the microwave regime, where photonic crystals can be built on the more accessible centimetre scale. (This fact is due to a property of the electromagnetic fields known as scale invariance. In essence, electromagnetic fields, as the solutions to Maxwell's equations , have no natural length scale—so solutions for centimetre scale structure at microwave frequencies are the same as for nanometre scale structures at optical frequencies.)
By 1991, Yablonovitch had demonstrated the first three-dimensional photonic band-gap in the microwave regime. [ 5 ] The structure that Yablonovitch was able to produce involved drilling an array of holes in a transparent material, where the holes of each layer form an inverse diamond structure – today it is known as Yablonovite .
In 1996, Thomas Krauss demonstrated a two-dimensional photonic crystal at optical wavelengths. [ 15 ] This opened the way to fabricate photonic crystals in semiconductor materials by borrowing methods from the semiconductor industry.
Pavel Cheben demonstrated a new type of photonic crystal waveguide, the subwavelength grating (SWG) waveguide. [ 16 ] [ 17 ] The SWG waveguide operates in the subwavelength region, away from the bandgap. It allows the waveguide properties to be controlled directly by nanoscale engineering of the resulting metamaterial while mitigating wave interference effects. This provided “a missing degree of freedom in photonics” [ 18 ] and resolved an important limitation of silicon photonics : its restricted set of available materials, which was insufficient to achieve complex optical on-chip functions. [ 19 ] [ 20 ]
Today, such techniques use photonic crystal slabs, which are two-dimensional photonic crystals "etched" into slabs of semiconductor. Total internal reflection confines light to the slab, and allows photonic crystal effects, such as engineering photonic dispersion in the slab. Researchers around the world are looking for ways to use photonic crystal slabs in integrated computer chips, to improve optical processing of communications—both on-chip and between chips.
The autocloning fabrication technique, proposed for infrared- and visible-range photonic crystals by Sato et al. in 2002, uses electron-beam lithography and dry etching : lithographically formed layers of periodic grooves are stacked by regulated sputter deposition and etching, resulting in "stationary corrugations" and periodicity. Titanium dioxide / silica and tantalum pentoxide / silica devices were produced, exploiting their dispersion characteristics and suitability for sputter deposition. [ 21 ]
Such techniques have yet to mature into commercial applications, but two-dimensional photonic crystals are commercially used in photonic crystal fibres [ 22 ] (otherwise known as holey fibres, because of the air holes that run through them). Photonic crystal fibres were first developed by Philip Russell in 1998, and can be designed to possess enhanced properties over (normal) optical fibres .
Study has proceeded more slowly in three-dimensional than in two-dimensional photonic crystals. This is because of more difficult fabrication. [ 22 ] Three-dimensional photonic crystal fabrication had no inheritable semiconductor industry techniques to draw on. Attempts have been made, however, to adapt some of the same techniques, and quite advanced examples have been demonstrated, [ 23 ] for example in the construction of "woodpile" structures constructed on a planar layer-by-layer basis. Another strand of research has tried to construct three-dimensional photonic structures from self-assembly —essentially letting a mixture of dielectric nanospheres settle from solution into three-dimensionally periodic structures that have photonic band-gaps. Vasily Astratov 's group from the Ioffe Institute realized in 1995 that natural and synthetic opals are photonic crystals with an incomplete bandgap. [ 24 ] The first demonstration of an "inverse opal" structure with a complete photonic bandgap came in 2000, from researchers at the University of Toronto , and Institute of Materials Science of Madrid (ICMM-CSIC), Spain. [ 25 ] The ever-expanding field of natural photonics, bioinspiration and biomimetics —the study of natural structures to better understand and use them in design—is also helping researchers in photonic crystals. [ 26 ] [ 27 ] [ 28 ] [ 29 ] For example, in 2006 a naturally occurring photonic crystal was discovered in the scales of a Brazilian beetle. [ 30 ] Analogously, in 2012 a diamond crystal structure was found in a weevil [ 31 ] [ 32 ] and a gyroid-type architecture in a butterfly. [ 33 ] More recently, gyroid photonic crystals have been found in the feather barbs of blue-winged leafbirds and are responsible for the bird's shimmery blue coloration. [ 34 ]
Some publications suggest the feasibility of a complete photonic band gap in the visible range in photonic crystals with optically saturated media, which could be implemented by using laser light as an external optical pump. [ 35 ]
The fabrication method depends on the number of dimensions that the photonic bandgap must exist in.
To produce a one-dimensional photonic crystal, thin film layers of different dielectric constant may be periodically deposited on a surface which leads to a band gap in a particular propagation direction (such as normal to the surface). A Bragg grating is an example of this type of photonic crystal. One-dimensional photonic crystals can include layers of non-linear optical materials in which the non-linear behaviour is accentuated due to field enhancement at wavelengths near a so-called degenerate band edge. This field enhancement (in terms of intensity) can reach N 2 {\displaystyle N^{2}} where N is the total number of layers. However, by using layers which include an optically anisotropic material, it has been shown that the field enhancement can reach N 4 {\displaystyle N^{4}} , which, in conjunction with non-linear optics, has potential applications such as in the development of an all- optical switch . [ 36 ]
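The high reflectivity of such a periodic stack can be sketched with the standard transfer-matrix method for normal incidence. The indices, layer count, and design wavelength below are illustrative assumptions (values typical of a TiO2/SiO2 quarter-wave mirror), not figures taken from the text:

```python
import numpy as np

def stack_reflectivity(indices, thicknesses, lam, n_in=1.0, n_sub=1.5):
    """Normal-incidence reflectivity of a multilayer via the transfer-matrix method."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(indices, thicknesses):
        delta = 2 * np.pi * n * d / lam          # phase thickness of the layer
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    # Amplitude reflection coefficient between ambient (n_in) and substrate (n_sub).
    num = n_in * M[0, 0] + n_in * n_sub * M[0, 1] - M[1, 0] - n_sub * M[1, 1]
    den = n_in * M[0, 0] + n_in * n_sub * M[0, 1] + M[1, 0] + n_sub * M[1, 1]
    return abs(num / den) ** 2

lam0 = 600e-9                       # assumed design wavelength
n_hi, n_lo = 2.3, 1.45              # assumed high/low indices (TiO2 / SiO2-like)
pairs = 8
indices = [n_hi, n_lo] * pairs
thicknesses = [lam0 / (4 * n) for n in indices]   # quarter-wave layers

R_center = stack_reflectivity(indices, thicknesses, lam0)       # inside the stop-band
R_off = stack_reflectivity(indices, thicknesses, 2.0 * lam0)    # far from the stop-band
assert R_center > 0.99 and R_off < R_center
```

At the design wavelength every layer is a quarter wave thick, the partial reflections interfere constructively, and eight pairs already push the reflectivity above 99%; far from the stop-band the same stack transmits most of the light.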
A one-dimensional photonic crystal can be implemented using repeated alternating layers of a metamaterial and vacuum. [ 37 ] If the metamaterial is such that the relative permittivity and permeability follow the same wavelength dependence, then the photonic crystal behaves identically for TE and TM modes , that is, for both s and p polarizations of light incident at an angle.
Recently, researchers fabricated a graphene-based Bragg grating (one-dimensional photonic crystal) and demonstrated that it supports excitation of surface electromagnetic waves in the periodic structure, using a 633 nm He-Ne laser as the light source. [ 38 ] In addition, a novel type of one-dimensional graphene-dielectric photonic crystal has been proposed. This structure can act as a far-IR filter and can support low-loss surface plasmons for waveguide and sensing applications. [ 39 ] 1D photonic crystals doped with bio-active metals (e.g., silver ) have also been proposed as sensing devices for bacterial contaminants. [ 40 ] Similar planar 1D photonic crystals made of polymers have been used to detect volatile organic compound vapors in the atmosphere. [ 41 ] [ 42 ] In addition to solid-phase photonic crystals, some liquid crystals with defined ordering can demonstrate photonic color. [ 43 ] For example, studies have shown that several liquid crystals with short- or long-range one-dimensional positional ordering can form photonic structures. [ 43 ]
In two dimensions, holes may be drilled in a substrate that is transparent to the wavelength of radiation that the bandgap is designed to block. Triangular and square lattices of holes have been successfully employed.
A holey fiber or photonic crystal fiber can be made by taking cylindrical rods of glass arranged in a hexagonal lattice and then heating and stretching them; the triangle-like air gaps between the glass rods become the holes that confine the modes.
Several structure types have been constructed.
Beyond the band gap itself, photonic crystals can exhibit another effect if their symmetry is partially broken by the creation of a nanosize cavity . This defect allows light to be guided or trapped, with the same function as a nanophotonic resonator, and it is characterized by the strong dielectric modulation in the photonic crystal. [ 50 ] For a waveguide, the propagation of light depends on the in-plane control provided by the photonic band gap and on the long confinement of light induced by the dielectric mismatch. For a light trap, the light is strongly confined in the cavity, resulting in further interactions with the material. If a pulse of light is sent into the cavity, it is delayed by nano- or picoseconds, in proportion to the quality factor of the cavity. If an emitter is placed inside the cavity, its emission can be enhanced significantly, and resonant coupling can even produce Rabi oscillation. This is the domain of cavity quantum electrodynamics , and the interactions are described by the weak and strong coupling of the emitter and the cavity. The first studies of cavities in one-dimensional photonic slabs were usually in grating [ 51 ] or distributed feedback structures. [ 52 ] Two-dimensional photonic crystal cavities [ 53 ] [ 54 ] [ 55 ] are useful for making efficient photonic devices in telecommunication applications, as they can provide very high quality factors, up to millions, with smaller-than-wavelength mode volume . For three-dimensional photonic crystal cavities, several methods have been developed, including a lithographic layer-by-layer approach, [ 56 ] surface ion beam lithography , [ 57 ] and micromanipulation techniques. [ 58 ] All these photonic crystal cavities that tightly confine light offer very useful functionality for integrated photonic circuits, but it is challenging to produce them in a manner that allows them to be easily relocated. [ 59 ] There is no full control over the cavity creation, the cavity location, and the emitter position relative to the maximum field of the cavity, and studies to solve those problems are ongoing. A movable cavity of a nanowire in a photonic crystal is one solution for tailoring this light-matter interaction. [ 60 ]
Higher-dimensional photonic crystal fabrication faces two major challenges:
One promising fabrication method for two-dimensionally periodic photonic crystals is a photonic-crystal fiber, such as a holey fiber . Using fiber draw techniques developed for communications fiber, it meets these two requirements, and photonic crystal fibres are commercially available. Another promising method for developing two-dimensional photonic crystals is the so-called photonic crystal slab. These structures consist of a slab of material, such as silicon , that can be patterned using techniques from the semiconductor industry. Such chips offer the potential to combine photonic processing with electronic processing on a single chip.
For three dimensional photonic crystals, various techniques have been used—including photolithography and etching techniques similar to those used for integrated circuits . [ 23 ] Some of these techniques are already commercially available. To avoid the complex machinery of nanotechnological methods , some alternate approaches involve growing photonic crystals from colloidal crystals as self-assembled structures.
Mass-scale 3D photonic crystal films and fibres can now be produced using a shear-assembly technique that stacks 200–300 nm colloidal polymer spheres into well-ordered films with a face-centered cubic (fcc) lattice. Because the particles have a softer transparent rubber coating, the films can be stretched and molded, tuning the photonic bandgaps and producing striking structural color effects.
The photonic band gap (PBG) is essentially the gap between the air-line and the dielectric-line in the dispersion relation of the PBG system. To design photonic crystal systems, it is essential to engineer the location and size of the bandgap by computational modeling using any of the following methods:
Essentially, these methods solve for the frequencies (normal modes) of the photonic crystal for each value of the propagation direction given by the wave vector, or vice versa. The various lines in the band structure correspond to the different cases of n , the band index. For an introduction to photonic band structure, see the books by K. Sakoda [ 65 ] and Joannopoulos. [ 50 ]
The plane wave expansion method can be used to calculate the band structure using an eigen formulation of Maxwell's equations, thus solving for the eigenfrequencies for each propagation direction of the wave vectors. It directly solves the dispersion diagram. Electric field strength values can also be calculated over the spatial domain of the problem using the eigenvectors of the same problem. The picture shown to the right corresponds to the band structure of a 1D distributed Bragg reflector ( DBR ) with air-core interleaved with a dielectric material of relative permittivity 12.25 and a lattice period to air-core thickness ratio (d/a) of 0.8, solved using 101 planewaves over the first irreducible Brillouin zone . The inverse dispersion method also exploits plane wave expansion, but formulates Maxwell's equations as an eigenproblem for the wave vector k, with the frequency ω treated as a parameter. [ 62 ] It thus solves the dispersion relation k(ω) instead of ω(k), as the plane wave method does. The inverse dispersion method makes it possible to find complex values of the wave vector, e.g. in the band gap, which allows one to distinguish photonic crystals from metamaterials. In addition, the method readily accounts for frequency dispersion of the permittivity.
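As a concrete illustration, the 1D plane wave expansion reduces to a small matrix eigenproblem per wave vector: (k+G)² e_G = (ω/c)² Σ_G' ε_{G−G'} e_{G'}. The sketch below is a minimal implementation, not code from any cited reference; the function name is an assumption, and the parameters mirror the DBR example above (ε = 12.25, air fraction 0.8, 101 plane waves) under the assumption that d/a = 0.8 denotes the air fill fraction.

```python
import numpy as np

def pwe_bands_1d(eps_hi=12.25, eps_lo=1.0, fill_hi=0.2, n_pw=101, n_k=40):
    """1D plane-wave-expansion band structure at normal incidence.

    Bragg stack of period a = 1: a layer of permittivity eps_hi with fill
    fraction fill_hi, the rest eps_lo.  Solves the eigenproblem
        (k+G)^2 e_G = (w/c)^2 * sum_G' eps_{G-G'} e_{G'}
    and returns normalized frequencies w*a/(2*pi*c), shape (n_k, n_pw).
    """
    m = n_pw // 2
    G = 2.0 * np.pi * np.arange(-m, m + 1)        # reciprocal lattice vectors

    def eps_fourier(g):
        # Fourier coefficient of the step-function eps(x) at reciprocal vector g
        if abs(g) < 1e-12:
            return eps_lo + (eps_hi - eps_lo) * fill_hi
        return ((eps_hi - eps_lo) * fill_hi
                * np.sinc(g * fill_hi / (2.0 * np.pi))   # sin(gf/2)/(gf/2)
                * np.exp(-1j * g * fill_hi / 2.0))

    idx = np.arange(n_pw)
    E = np.array([[eps_fourier(G[i] - G[j]) for j in idx] for i in idx])
    E_inv = np.linalg.inv(E)

    ks = np.linspace(1e-3, np.pi, n_k)            # irreducible Brillouin zone
    bands = []
    for k in ks:
        A = np.diag((k + G) ** 2)
        w2 = np.linalg.eigvals(E_inv @ A).real    # eigenvalues are (w/c)^2
        bands.append(np.sort(np.sqrt(np.abs(w2))) / (2.0 * np.pi))
    return ks, np.array(bands)
```

For this dielectric contrast a full one-dimensional band gap opens at the zone edge, visible as a frequency interval between the first two sorted bands.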
To speed calculation of the frequency band structure, the Reduced Bloch Mode Expansion (RBME) method can be used. [ 66 ] The RBME method applies "on top" of any of the primary expansion methods mentioned above. For large unit cell models, the RBME method can reduce time for computing the band structure by up to two orders of magnitude.
Photonic crystals are attractive optical materials for controlling and manipulating light flow. One dimensional photonic crystals are already in widespread use, in the form of thin-film optics , with applications from low and high reflection coatings on lenses and mirrors to colour changing paints and inks . [ 67 ] [ 68 ] [ 47 ] Higher-dimensional photonic crystals are of great interest for both fundamental and applied research, and the two dimensional ones are beginning to find commercial applications.
The first commercial products involving two-dimensionally periodic photonic crystals are already available in the form of photonic-crystal fibers, which use a microscale structure to confine light with radically different characteristics compared to conventional optical fiber for applications in nonlinear devices and guiding exotic wavelengths. The three-dimensional counterparts are still far from commercialization but may offer additional features such as optical nonlinearity required for the operation of optical transistors used in optical computers , when some technological aspects such as manufacturability and principal difficulties such as disorder are under control. [ 69 ]
SWG photonic crystal waveguides have facilitated new integrated photonic devices for controlling transmission of light signals in photonic integrated circuits, including fibre-chip couplers, waveguide crossovers, wavelength and mode multiplexers, ultra-fast optical switches, athermal waveguides, biochemical sensors, polarization management circuits, broadband interference couplers, planar waveguide lenses, anisotropic waveguides, nanoantennas and optical phased arrays. [ 19 ] [ 70 ] [ 71 ] SWG nanophotonic couplers permit highly-efficient and polarization-independent coupling between photonic chips and external devices. [ 17 ] They have been adopted for fibre-chip coupling in volume optoelectronic chip manufacturing. [ 72 ] [ 73 ] [ 74 ] These coupling interfaces are particularly important because every photonic chip needs to be optically connected with the external world and the chips themselves appear in many established and emerging applications, such as 5G networks, data center interconnects, chip-to-chip interconnects, metro- and long-haul telecommunication systems, and automotive navigation.
In addition to the foregoing, photonic crystals have been proposed as platforms for the development of solar cells [ 75 ] and optical sensors, [ 76 ] including chemical sensors and biosensors. [ 77 ] [ 78 ] | https://en.wikipedia.org/wiki/Photonic_crystal |
A photonic metamaterial ( PM ), also known as an optical metamaterial , is a type of electromagnetic metamaterial , that interacts with light, covering terahertz ( THz ), infrared (IR) or visible wavelengths . [ 1 ] The materials employ a periodic , cellular structure.
The subwavelength periodicity [ 2 ] distinguishes photonic metamaterials from photonic band gap or photonic crystal structures. The cells are on a scale orders of magnitude larger than the atom, yet much smaller than the radiated wavelength; [ 3 ] [ 4 ] they are on the order of nanometers . [ 3 ] [ 4 ] [ 5 ]
In a conventional material, the response to electric and magnetic fields, and hence to light , is determined by atoms . [ 6 ] [ 7 ] In metamaterials, cells take the role of atoms in a material that is homogeneous at scales larger than the cells, yielding an effective medium model . [ 3 ] [ 4 ] [ 8 ] [ 6 ] [ 9 ]
Some photonic metamaterials exhibit magnetism at high frequencies, resulting in strong magnetic coupling. This can produce a negative index of refraction in the optical range.
Potential applications include cloaking and transformation optics . [ 10 ]
Photonic crystals differ from PM in that the size and periodicity of their scattering elements are larger, on the order of the wavelength. Also, a photonic crystal is not homogeneous , so it is not possible to define values of ε ( permittivity ) or μ ( permeability ). [ 11 ]
While researching whether or not matter interacts with the magnetic component of light, Victor Veselago (1967) envisioned the possibility of refraction with a negative sign, according to Maxwell's equations . A refractive index with a negative sign is the result of permittivity, ε < 0 (less than zero) and magnetic permeability, μ < 0 (less than zero). [ 5 ] [ 12 ] Veselago's analysis has been cited in over 1500 peer-reviewed articles and many books. [ 13 ] [ 14 ] [ 15 ] [ 16 ]
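Veselago's sign argument can be made concrete with a few lines of arithmetic. The sketch below illustrates the standard branch-choice reasoning, not code from any cited work; the function name and the sample ε, μ values are assumptions. It picks the square-root branch of n = ±√(εμ) so that a passive medium attenuates rather than amplifies the wave, which forces Re(n) < 0 when both ε < 0 and μ < 0:

```python
import numpy as np

def refractive_index(eps, mu):
    """Refractive index n = +/- sqrt(eps*mu), with the branch chosen so that
    Im(n) >= 0: a passive medium must attenuate the wave (using the e^{-iwt}
    time convention, where passive media have Im(eps), Im(mu) >= 0)."""
    n = np.sqrt(eps * mu + 0j)
    return n if n.imag >= 0 else -n

# Ordinary dielectric: both responses positive -> positive real index
n_glass = refractive_index(2.25, 1.0)
# Veselago medium: eps < 0 and mu < 0 simultaneously, with a small loss
# added to fix the branch -> negative real index
n_nim = refractive_index(-4.0 + 0.01j, -1.0 + 0.01j)
```

With ε = 2.25 and μ = 1 the rule returns the familiar n = 1.5; with both parameters negative the real part comes out near −2.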
In the mid-1990s, metamaterials were first seen as potential technologies for applications such as nanometer-scale imaging and cloaking objects . For example, in 1995, Guerra [ 17 ] fabricated a transparent grating with 50 nm lines and spaces, and then coupled this (what would be later called) photonic metamaterial with an immersion objective to resolve a silicon grating having 50 nm lines and spaces, far beyond the diffraction limit for the 650 nm wavelength illumination in air. And in 2002, Guerra et al. [ 18 ] published their demonstrated use of subwavelength nano-optics (photonic metamaterials) for optical data storage at densities well above the diffraction limit. As of 2015, metamaterial antennas were commercially available. [ 19 ] [ 20 ]
Negative permeability was achieved with a split-ring resonator (SRR) as part of the subwavelength cell. The SRR achieved negative permeability within a narrow frequency range. This was combined with a symmetrically positioned electric conducting post, which created the first negative index metamaterial, operating in the microwave band. Experiments and simulations demonstrated the presence of a left-handed propagation band, a left-handed material. The first experimental confirmation of negative index of refraction occurred soon after, also at microwave frequencies. [ 5 ] [ 21 ] [ 22 ]
Natural materials , such as precious metals , can achieve ε < 0 up to the visible frequencies . However, at terahertz , infrared and visible frequencies, natural materials have a very weak magnetic coupling component, or permeability. In other words, susceptibility to the magnetic component of radiated light can be considered negligible. [ 12 ]
Negative index metamaterials behave contrary to the conventional "right-handed" interaction of light found in conventional optical materials. Hence, these are dubbed left-handed materials or negative index materials (NIMs), among other nomenclatures. [ 5 ] [ 21 ] [ 22 ]
Only fabricated NIMs exhibit this capability. Photonic crystals, like many other known systems, can exhibit unusual propagation behavior such as reversal of phase and group velocities . However, negative refraction does not occur in these systems. [ 21 ] [ 23 ] [ 24 ]
Naturally occurring ferromagnetic and antiferromagnetic materials can achieve magnetic resonance, but with significant losses. In natural materials such as natural magnets and ferrites , resonance for the electric (coupling) response and magnetic (coupling) response do not occur at the same frequency.
Photonic metamaterial SRRs have reached scales below 100 nanometers, using electron beam and nanolithography . One nanoscale SRR cell has three small metallic rods that are physically connected. This is configured as a U shape and functions as a nano-inductor . The gap between the tips of the U-shape functions as a nano-capacitor . Hence, it is an optical nano-LC resonator . These "inclusions" create local electric and magnetic fields when externally excited. These inclusions are usually ten times smaller than the vacuum wavelength of the light at the resonant frequency. The inclusions can then be evaluated by using an effective medium approximation. [ 5 ] [ 13 ]
PMs display a magnetic response with useful magnitude at optical frequencies. This includes negative permeability, despite the absence of magnetic materials. Analogous to ordinary optical material, PMs can be treated as an effective medium that is characterized by effective medium parameters ε(ω) and μ(ω), or similarly, ε eff and μ eff . [ 13 ] [ 25 ]
The negative refractive index of PMs in the optical frequency range was experimentally demonstrated in 2005 by Shalaev et al. (at the telecom wavelength λ = 1.5 μm) [ 26 ] and by Brueck et al. (at λ = 2 μm) at nearly the same time. [ 27 ]
An effective (transmission) medium approximation describes material slabs that, when reacting to an external excitation , are "effectively" homogeneous, with corresponding "effective" parameters that include "effective" ε and μ and apply to the slab as a whole. Individual inclusions or cells may have values different from the slab. [ 28 ] [ 29 ] However, there are cases where the effective medium approximation does not hold [ 30 ] [ 31 ] and one needs to be aware of its applicability.
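A classic closed-form example of such an effective-medium description is the Maxwell-Garnett mixing rule for dilute spherical inclusions in a host. The sketch below is a textbook formula offered for illustration; the function name is an assumption, and none of the cited studies necessarily used this particular rule:

```python
def maxwell_garnett(eps_inclusion, eps_host, fill):
    """Maxwell-Garnett effective permittivity for dilute spherical
    inclusions of volume fraction `fill` embedded in a host medium.
    Closed-form solution of (e_eff-e_h)/(e_eff+2e_h) = f*(e_i-e_h)/(e_i+2e_h)."""
    num = eps_inclusion + 2 * eps_host + 2 * fill * (eps_inclusion - eps_host)
    den = eps_inclusion + 2 * eps_host - fill * (eps_inclusion - eps_host)
    return eps_host * num / den
```

It reduces to the host permittivity at zero fill fraction and to the inclusion permittivity at unit fill, interpolating monotonically in between; like any effective-medium approximation, it breaks down once the inclusions are no longer small compared with the wavelength.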
Negative magnetic permeability was originally achieved in a left-handed medium at microwave frequencies by using arrays of split-ring resonators. [ 32 ] In most natural materials, the magnetically coupled response starts to taper off at frequencies in the gigahertz range, which implies that significant magnetism does not occur at optical frequencies. The effective permeability of such materials is unity, μ eff = 1. Hence, the magnetic component of a radiated electromagnetic field has virtually no effect on natural occurring materials at optical frequencies. [ 33 ]
In metamaterials the cell acts as a meta-atom, a larger scale magnetic dipole , analogous to the picometer -sized atom. For meta-atoms constructed from gold , μ < 0 can be achieved at telecommunication frequencies but not at visible frequencies. The visible frequency has been elusive because the plasma frequency of metals is the ultimate limiting condition. [ 7 ]
Optical wavelengths are much shorter than microwaves, making subwavelength optical metamaterials more difficult to realize. Microwave metamaterials can be fabricated from circuit board materials, while lithography techniques must be employed to produce PMs.
Successful experiments used a periodic arrangement of short wires or metallic pieces with varied shapes. In a different study the whole slab was electrically connected.
Fabrication techniques include electron beam lithography , nanostructuring with a focused ion beam and interference lithography . [ 13 ] [ 34 ] [ 35 ] [ 36 ]
In 2014 a polarization -insensitive metamaterial prototype was demonstrated to absorb energy over a broad band (a super-octave ) of infrared wavelengths. The material displayed greater than 98% measured average absorptivity that it maintained over a wide ±45° field-of-view for mid-infrared wavelengths between 1.77 and 4.81 μm. One use is to conceal objects from infrared sensors. Palladium provided greater bandwidth than silver or gold. A genetic algorithm randomly modified an initial candidate pattern, testing and eliminating all but the best. The process was repeated over multiple generations until the design became effective. [ 37 ] [ 38 ]
The metamaterial is made of four layers on a silicon substrate. The first layer is palladium, covered by polyimide (plastic) and a palladium screen on top. The screen has sub-wavelength cutouts that block the various wavelengths. A polyimide layer caps the whole absorber. It can absorb 90 percent of infrared radiation at up to a 55 degree angle to the screen. The layers do not need accurate alignment. The polyimide cap protects the screen and helps reduce any impedance mismatch that might occur when the wave crosses from the air into the device. [ 38 ]
In 2015 visible light joined microwave and infrared NIMs in propagating light in only one direction. (" mirrors " instead reduce light transmission in the reverse direction, requiring low light levels behind the mirror to work.) [ 39 ]
The material combined two optical nanostructures: a multi-layered block of alternating silver and glass sheets and metal grates. The silver-glass structure is a "hyperbolic" metamaterial, which treats light differently depending on which direction the waves are traveling. Each layer is tens of nanometers thick—much thinner than visible light's 400 to 700 nm wavelengths, making the block opaque to visible light, although light entering at certain angles can propagate inside the material. [ 39 ]
Adding chromium grates with sub-wavelength spacings bent incoming red or green light waves enough that they could enter and propagate inside the block. On the opposite side of the block, another set of grates allowed light to exit, angled away from its original direction. The spacing of the exit grates was different from that of the entrance grates, bending incident light so that external light could not enter the block from that side. Around 30 times more light passed through in the forward direction than in reverse. The intervening blocks reduced the need for precise alignment of the two grates with respect to each other. [ 39 ]
Such structures hold potential for applications in optical communication—for instance, they could be integrated into photonic computer chips that split or combine signals carried by light waves. Other potential applications include biosensing using nanoscale particles to deflect light to angles steep enough to travel through the hyperbolic material and out the other side. [ 39 ]
By employing a combination of plasmonic and non-plasmonic nanoparticles , lumped circuit element nanocircuits at infrared and optical frequencies appear to be possible. Conventional lumped circuit elements are not available in a conventional way. [ 40 ]
Subwavelength lumped circuit elements proved workable in the microwave and radio frequency (RF) domain. The lumped element concept allowed for element simplification and circuit modularization. Nanoscale fabrication techniques exist to accomplish subwavelength geometries. [ 40 ]
Metals such as gold , silver , aluminum and copper conduct currents at RF and microwave frequencies. At optical frequencies characteristics of some noble metals are altered. Rather than normal current flow, plasmonic resonances occur as the real part of the complex permittivity becomes negative. Therefore, the main current flow is actually the electric displacement current density ∂D / ∂t, and can be termed as the “flowing optical current". [ 40 ]
At subwavelength scales the cell's impedance becomes dependent on shape, size , material and the optical frequency illumination. The particle's orientation with the optical electric field may also help determine the impedance. Conventional silicon dielectrics have the real permittivity component ε real > 0 at optical frequencies, causing the nanoparticle to act as a capacitive impedance, a nanocapacitor. Conversely, if the material is a noble metal such as gold or silver, with ε real < 0, then it takes on inductive characteristics, becoming a nanoinductor. Material loss is represented as a nano-resistor. [ 40 ] [ 41 ]
The most commonly applied scheme to achieve a tunable index of refraction is electro-optical tuning. Here the change in refractive index is proportional either to the applied electric field or to the square modulus of the electric field. These are the Pockels effect and the Kerr effect , respectively.
An alternative is to employ a nonlinear optical material and depend on the optical field intensity to modify the refractive index or magnetic parameters. [ 42 ]
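The two tuning mechanisms can be written down directly. The snippet below is a generic sketch of the standard scaling laws; the function names and coefficient values are illustrative assumptions (roughly lithium-niobate-like for the Pockels case) and are not taken from the article:

```python
def delta_n_pockels(n0, r_eff, e_field):
    """Pockels (linear electro-optic) effect: dn = -0.5 * n0**3 * r_eff * E,
    linear in the applied field E; r_eff is the effective coefficient in m/V."""
    return -0.5 * n0**3 * r_eff * e_field

def delta_n_kerr(n2, intensity):
    """Optical Kerr effect: dn = n2 * I, proportional to optical intensity
    (n2 in m^2/W, I in W/m^2)."""
    return n2 * intensity

# Illustrative (assumed) numbers: n0 = 2.2, r_eff = 30 pm/V, E = 1 MV/m
dn_p = delta_n_pockels(2.2, 30e-12, 1e6)   # about -1.6e-4
```

Even with a megavolt-per-meter field the index change is of order 10⁻⁴, which is why resonant structures are typically used to turn such small index shifts into large transmission changes.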
Stacking layers produces NIMs at optical frequencies. However, the surface configuration (non-planar, bulk) of the SRR normally prevents stacking. Although a single-layer SRR structure can be constructed on a dielectric surface, it is relatively difficult to stack these bulk structures due to alignment tolerance requirements. [ 5 ] A stacking technique for SRRs was published in 2007 that uses dielectric spacers to apply a planarization procedure to flatten the SRR layer. [ 43 ] It appears that arbitrarily many layers can be made this way, including any chosen number of unit cells and variant spatial arrangements of individual layers. [ 5 ] [ 43 ] [ 44 ]
In 2014 researchers announced a 400 nanometer thick frequency-doubling non-linear mirror that can be tuned to work at near-infrared to mid-infrared to terahertz frequencies. The material operates with much lower intensity light than traditional approaches. For a given input light intensity and structure thickness, the metamaterial produced approximately one million times higher intensity output. The mirrors do not require matching the phase velocities of the input and output waves. [ 45 ]
It can produce a giant nonlinear response for multiple nonlinear optical processes, such as second-harmonic, sum- and difference-frequency generation, as well as a variety of four-wave mixing processes. The demonstration device converted light with a wavelength of 8000 to 4000 nanometers. [ 45 ]
The device is made of a stack of thin layers of indium , gallium and arsenic or aluminum , indium and arsenic. 100 of these layers, each between one and twelve nanometers thick, were faced on top by a pattern of asymmetrical, crossed gold nanostructures that form coupled quantum wells and a layer of gold on the bottom. [ 45 ]
Potential applications include remote sensing and medical applications that call for compact laser systems. [ 45 ]
Dyakonov surface waves [ 46 ] [ 47 ] [ 48 ] [ 49 ] [ 50 ] [ 51 ] [ 52 ] (DSW) arise from birefringence, in this context from the anisotropy of photonic crystals and metamaterials. [ 53 ] Recently, photonic metamaterials have been operated at 780 nm (near-infrared), [ 54 ] [ 55 ] [ 12 ] as well as at 813 nm and 772 nm. [ 56 ] [ 57 ] | https://en.wikipedia.org/wiki/Photonic_metamaterial |
Photonics Society of Poland ( Polish : Polskie Stowarzyszenie Fotoniczne, PSP ) is the largest optics / optoelectronics / photonics organization in Poland . It was transformed from the SPIE Poland Chapter on October 18, 2007 during the Extraordinary General Meeting of the SPIE Poland Chapter members.
PSP is a publisher of Photonics Letters of Poland .
Photonics Letters of Poland is a peer-reviewed scientific journal published by the Photonics Society of Poland in cooperation with SPIE four times a year. Founded in 2009. The journal has the following divisions of editorial scope: optical technology ; information processing ; lasers , photonics ; environmental optics ; and biomedical optics . | https://en.wikipedia.org/wiki/Photonics_Society_of_Poland |
Photonics and Nanostructures: Fundamentals and Applications is a peer-reviewed scientific journal , published quarterly by Elsevier . The editors-in-chief are A. Di Falco University of St Andrews , M. Lapine University of Technology Sydney , P. Tassin Chalmers University of Technology , M. Vanwolleghem Centre National de la Recherche Scientifique (CNRS) , Villeneuve-d'Ascq, and L. O'Faolain (W. Whelan-Curtin) Cork Institute of Technology .
This journal covers research in experiment , theory , and applications of photonic crystals and photonic band gaps. Additionally, the journal focuses on topics concerning the development of faster telecommunications and the transition from computer-electronics to computer-photonics . Coverage also includes the general topic of fabrication of photonic crystal structures and devices. Devices at the micro and nano levels are also included. At this size, these are optical waveguides , switches, lasers , components of photonic (optical) integrated circuits , photonic crystal integrated circuits, micro-optical-electro-mechanical-systems (MOEMS), photonic (optical) micro-cavities, and photonic "dots".
This journal is abstracted and indexed in: [ 1 ] [ 2 ]
According to the Journal Citation Reports , the journal has a 2023 impact factor of 2.5. [ 3 ] | https://en.wikipedia.org/wiki/Photonics_and_Nanostructures:_Fundamentals_and_Applications |
A photooxygenation is a light-induced oxidation reaction in which molecular oxygen is incorporated into the product(s). [ 1 ] [ 2 ] Initial research interest in photooxygenation reactions arose from Oscar Raab's observations in 1900 that the combination of light, oxygen and photosensitizers is highly toxic to cells. [ 3 ] Early studies of photooxygenation focused on oxidative damage to DNA and amino acids, [ 2 ] but recent research has led to the application of photooxygenation in organic synthesis and photodynamic therapy . [ 4 ]
Photooxygenation reactions are initiated by a photosensitizer , which is a molecule that enters an excited state when exposed to light of a specific wavelength (e.g. dyes and pigments). The excited sensitizer then reacts with either a substrate or ground state molecular oxygen, starting a cascade of energy transfers that ultimately result in an oxygenated molecule. Consequently, photooxygenation reactions are categorized by the type and order of these intermediates (as type I, type II, or type III [ 5 ] reactions). [ 2 ] [ 3 ]
Photooxygenation reactions are easily confused with a number of processes bearing similar names (i.e. photosensitized oxidation). Clear distinctions can be made based on three attributes: oxidation , the involvement of light, and the incorporation of molecular oxygen into the products:
Sensitizers (denoted "Sens") are compounds, such as fluorescein dyes , methylene blue , and polycyclic aromatic hydrocarbons , which are able to absorb electromagnetic radiation (usually in the visible range of the spectrum) and eventually transfer that energy to molecular oxygen or the substrate of photooxygenation process. Many sensitizers, both naturally occurring and synthetic, rely on extensive aromatic systems to absorb light in the visible spectrum. [ 4 ] When sensitizers are excited by light, they reach a singlet state , 1 Sens*. This singlet is then converted into a triplet state (which is more stable), 3 Sens*, via intersystem crossing . The 3 Sens* is what reacts with either the substrate or 3 O 2 in the three types of photooxygenation reactions. [ 6 ]
In classical Lewis structures , molecular oxygen, O 2 , is depicted as having a double bond between the two oxygen atoms. However, the molecular orbitals of O 2 are actually more complex than Lewis structures seem to suggest. The highest occupied molecular orbital (HOMO) of O 2 is a pair of degenerate antibonding π orbitals, π 2px * and π 2py *, which are both singly occupied by spin unpaired electrons. [ 4 ] These electrons are the cause of O 2 being a triplet diradical in the ground state (indicated as 3 O 2 ).
While many stable molecules’ HOMOs consist of bonding molecular orbitals and therefore require a moderate energy jump from bonding to antibonding to reach their first excited state, the antibonding nature of molecular oxygen’s HOMO allows for a lower energy gap between its ground state and first excited state. This makes excitation of O 2 a less energetically restrictive process. In the first excited state of O 2 , a 22 kcal/mol energy increase from the ground state, both electrons in the antibonding orbitals occupy a degenerate π* orbital, and oxygen is now in a singlet state (indicated as 1 O 2 ). [ 3 ] 1 O 2 is very reactive, with a lifetime between 10 and 100 μs. [ 4 ]
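The 22 kcal/mol gap can be converted into an equivalent photon wavelength with Planck's relation E = hc/λ. The short calculation below is a back-of-the-envelope check, not taken from the cited sources; it lands in the near-infrared, consistent with the well-known ~1270 nm phosphorescence used to detect singlet oxygen:

```python
# Convert the 22 kcal/mol singlet-triplet gap of O2 into a photon wavelength.
AVOGADRO = 6.02214076e23      # molecules per mole
KCAL_TO_J = 4184.0            # joules per kilocalorie
H = 6.62607015e-34            # Planck constant, J*s
C = 2.99792458e8              # speed of light, m/s

gap_per_molecule = 22 * KCAL_TO_J / AVOGADRO       # energy per molecule, J
wavelength_nm = H * C / gap_per_molecule * 1e9     # ~1300 nm (near-infrared)
```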
The three types of photooxygenation reactions are distinguished by the mechanisms that they proceed through, as they are capable of yielding different or similar products depending on environmental conditions. Type I and II reactions proceed through neutral intermediates, while type III reactions proceed through charged species. The absence or presence of 1 O 2 is what distinguishes type I and type II reactions, respectively. [ 1 ]
In type I reactions, the photoactivated 3 Sens* interacts with the substrate to yield a radical substrate , usually through homolytic cleavage of a bond to hydrogen on the substrate (hydrogen atom abstraction). This substrate radical then interacts with 3 O 2 (ground state) to yield a substrate-O 2 radical. Such a radical is generally quenched by abstracting a hydrogen from another substrate molecule or from the solvent. This process allows for chain propagation of the reaction.
Type I photooxygenation reactions are frequently used in the process of forming and trapping diradical species. Mirbach et al. reported on one such reaction in which an azo compound is lysed via photolysis to form the diradical hydrocarbon and then trapped in a stepwise fashion by molecular oxygen: [ 7 ]
In type II reactions, the 3 Sens* transfers its energy directly with 3 O 2 via a radiation-less transition to create 1 O 2 . 1 O 2 then adds to the substrate in a variety of ways including: cycloadditions (most commonly [4+2]), addition to double bonds to yield 1,2-dioxetanes , and ene reactions with olefins (the Schenck ene reaction ). [ 2 ]
The [4+2] cycloaddition of singlet oxygen to cyclopentadiene to create cis -2-cyclopentene-1,4-diol is a common step involved in the synthesis of prostaglandins . [ 8 ] The initial addition of singlet oxygen, through the concerted [4+2] cycloaddition, forms an unstable endoperoxide . Subsequent reduction of the peroxide bond produces the two alcohol groups.
In type III reactions, there is an electron transfer that occurs between the 3 Sens* and the substrate resulting in an anionic Sens and a cationic substrate. Another electron transfer then occurs where the anionic Sens transfers an electron to 3 O 2 to form the superoxide anion, O 2 − . This transfer returns the Sens to its ground state. The superoxide anion and cationic substrate then interact to form the oxygenated product.
Photooxygenation of indolizines (heterocyclic aromatic derivates of indole) has been investigated in both mechanistic and synthetic contexts. Rather than proceeding through a Type I or Type II photooxygenation mechanism, some investigators have chosen to use 9,10-dicyanoanthracene (DCA) as a photosensitizer, leading to the reaction of an indolizine derivative with the superoxide anion radical. Note that the reaction proceeds through an indolizine radical cation intermediate that has not been isolated (and thus is not depicted): [ 9 ]
All three types of photooxygenation have been applied in the context of organic synthesis. In particular, type II photooxygenations have proven to be the most widely used (due to the low amount of energy required to generate singlet oxygen) and have been described as "one of the most powerful methods for the photochemical oxyfunctionalization of organic compounds." [ 10 ] These reactions can proceed in all common solvents and with a broad range of sensitizers.
Many of the applications of type II photooxygenations in organic synthesis come from Waldemar Adam's investigations into the ene reaction of singlet oxygen with acyclic alkenes. [ 10 ] Through the cis effect and the presence of appropriate steering groups, the reaction can provide high regioselectivity and diastereoselectivity, two valuable stereochemical controls. [ 11 ]
Photodynamic therapy (PDT) uses photooxygenation to destroy cancerous tissue . [ 12 ] A photosensitizer is injected into the tumor and then specific wavelengths of light are exposed to the tissue to excite the Sens. The excited Sens generally follows a type I or II photooxygenation mechanism to result in oxidative damage to cells. Extensive oxidative damage to tumor cells will kill tumor cells. Also, oxidative damage to nearby blood vessels will cause local agglomeration and cut off nutrient supply to the tumor, thus starving the tumor. [ 13 ]
An important consideration when selecting the Sens to be used in PDT is the specific wavelength of light the Sens will absorb to reach an excited state. Since the maximum penetration of tissue is achieved around wavelengths of 800 nm, selecting a Sens that absorbs around this range is advantageous, as it allows PDT to be effective on tumors beneath the outermost layer of the dermis. The window around 800 nm is most effective at penetrating tissue because at shorter wavelengths the light begins to be scattered by the macromolecules of cells, while at longer wavelengths water molecules begin to absorb the light and convert it into heat. [ 4 ] | https://en.wikipedia.org/wiki/Photooxygenation |
Photoperiod is the length of the daily light period, which changes with the seasons. The rotation of the earth about its axis produces a 24-hour cycle of light (day) and dark (night). The relative length of the light and dark phases varies across the seasons because of the tilt of the earth's axis . For example, on a summer day the light period might last 16 hours and the dark period 8 hours, whereas on a winter day the light period might last 8 hours and the dark period 16 hours. Importantly, the seasons are reversed between the northern hemisphere and the southern hemisphere .
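This seasonal variation in day length can be approximated with the standard sunrise equation. The sketch below is an illustrative approximation added here, not drawn from the article's sources; the function name and the cosine formula for solar declination are conventional choices, and atmospheric refraction is ignored:

```python
import math

def day_length_hours(latitude_deg, day_of_year):
    """Approximate hours of daylight via the standard sunrise equation."""
    # Solar declination (degrees), peaking near +23.44 at the June solstice
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle of the sun at sunrise/sunset
    x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
    if x <= -1.0:
        return 24.0  # polar day: sun never sets
    if x >= 1.0:
        return 0.0   # polar night: sun never rises
    omega0 = math.degrees(math.acos(x))
    return 2.0 * omega0 / 15.0  # 15 degrees of hour angle per hour

# Around the June solstice (day ~172):
print(round(day_length_hours(50, 172), 1))   # about 16 h at 50 N
print(round(day_length_hours(-50, 172), 1))  # about 8 h at 50 S
print(round(day_length_hours(0, 172), 1))    # -> 12.0 at the equator
```

The hemispheric reversal mentioned above falls out of the sign of the latitude term.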
Photoperiodism is the physiological reaction of organisms to the length of the light or dark period. It occurs in plants and animals . Plant photoperiodism can also be defined as the developmental responses of plants to the relative lengths of the light and dark periods. Plants are classified into three groups according to their photoperiodic response: short-day plants, long-day plants, and day-neutral plants.
In animals photoperiodism (sometimes called seasonality) is the suite of physiological changes that occur in response to changes in day length. This allows animals to respond to a temporally changing environment associated with changing seasons as the earth orbits the sun.
In 1920, W. W. Garner and H. A. Allard published their discoveries on photoperiodism and felt it was the length of daylight that was critical, [ 1 ] [ 2 ] but it was later discovered that the length of the night was the controlling factor. [ 3 ] [ 4 ] Photoperiodic flowering plants are classified as long-day plants or short-day plants even though night is the critical factor because of the initial misunderstanding about daylight being the controlling factor. Along with long-day plants and short-day plants, there are plants that fall into a "dual-day length category". These plants are either long-short-day plants (LSDP) or short-long-day plants (SLDP). LSDPs flower after a series of long days followed by short days whereas SLDPs flower after a series of short days followed by long days. [ 5 ] Each plant has a different critical photoperiod, or critical night length. [ 1 ]
Many flowering plants (angiosperms) use a circadian rhythm together with photoreceptor proteins , such as phytochrome or cryptochrome , [ 1 ] to sense seasonal changes in night length, or photoperiod, which they take as signals to flower. In a further subdivision, obligate photoperiodic plants absolutely require a long or short enough night before flowering, whereas facultative photoperiodic plants merely flower more readily under the appropriate condition.
Phytochrome comes in two forms: P r and P fr . Red light (which is present during the day) converts phytochrome to its active form (P fr ), which then stimulates various processes such as germination, flowering or branching. In comparison, plants receive more far-red light in the shade, and this converts phytochrome from P fr to its inactive form, P r , inhibiting germination. This system of P fr to P r conversion allows the plant to sense when it is night and when it is day. [ 6 ] P fr can also be converted back to P r by a process known as dark reversion, where long periods of darkness trigger the conversion of P fr . [ 7 ] This is important with regard to plant flowering. Experiments by Halliday et al. showed that manipulations of the red to far-red ratio in Arabidopsis can alter flowering. They discovered that plants tend to flower later when exposed to more red light, showing that red light is inhibitory to flowering. [ 8 ] Other experiments have confirmed this by exposing plants to extra red light in the middle of the night. A short-day plant will not flower if light is turned on for a few minutes in the middle of the night, and a long-day plant can flower if exposed to more red light in the middle of the night. [ 9 ]
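As a toy illustration of how dark reversion could let a plant gauge night length, the sketch below treats the decay of active Pfr in darkness as a single exponential. The rate constant and the dusk-time Pfr level are invented illustrative numbers, not measured values:

```python
import math

def pfr_fraction_after_night(night_hours, k_dark=0.15, pfr_at_dusk=0.85):
    """Fraction of phytochrome in the active Pfr form at dawn.

    Red-rich daylight drives Pr -> Pfr, so the plant enters the night
    with mostly Pfr; dark reversion then decays Pfr -> Pr exponentially
    at an assumed rate k_dark (per hour).
    """
    return pfr_at_dusk * math.exp(-k_dark * night_hours)

short_night = pfr_fraction_after_night(8)    # summer-like night
long_night = pfr_fraction_after_night(16)    # winter-like night
print(round(short_night, 3), round(long_night, 3))  # -> 0.256 0.077
```

A long night leaves far less active Pfr at dawn than a short night, giving the plant a crude seasonal signal; that is the qualitative point of the dark-reversion mechanism described above, though real night-length sensing also involves the circadian clock.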
Cryptochromes are another type of photoreceptor that is important in photoperiodism. Cryptochromes absorb blue light and UV-A. Cryptochromes entrain the circadian clock to light. [ 10 ] It has been found that both cryptochrome and phytochrome abundance relies on light, and the amount of cryptochrome can change depending on day length. This shows how important both photoreceptors are in determining day length. [ 11 ]
Modern biologists believe [ 12 ] that it is the coincidence of the active forms of phytochrome or cryptochrome, created by light during the daytime, with the rhythms of the circadian clock that allows plants to measure the length of the night. Other than flowering, photoperiodism in plants includes the growth of stems or roots during certain seasons and the loss of leaves. Artificial lighting can be used to induce extra-long days. [ 1 ]
Long-day plants flower when the night length falls below their critical photoperiod. [ 13 ] These plants typically flower during late spring or early summer as days are getting longer. In the northern hemisphere, the longest day of the year (summer solstice) is on or about 21 June. [ 14 ] After that date, days grow shorter (i.e. nights grow longer) until 21 December (the winter solstice ). This situation is reversed in the southern hemisphere (i.e., longest day is 21 December and shortest day is 21 June). [ 1 ] [ 2 ]
Some long-day obligate plants are:
Some long-day facultative plants are:
Short-day (also called long-night) plants flower when the night lengths exceed their critical photoperiod. [ 15 ] They cannot flower under short nights or if a pulse of artificial light is shone on the plant for several minutes during the night; they require a continuous period of darkness before floral development can begin. Natural nighttime light, such as moonlight or lightning, is not of sufficient brightness or duration to interrupt flowering. [ 1 ] [ 2 ]
Short-day plants flower as days grow shorter (and nights grow longer) after the summer solstice (about 21 June) in the northern hemisphere, that is, during summer or fall. The length of the dark period required to induce flowering differs among species and varieties of a species.
Photoperiodism affects flowering by inducing the shoot to produce floral buds instead of leaves and lateral buds.
Some short-day facultative plants are: [ 16 ]
Day-neutral plants, such as cucumbers , roses , tomatoes , and Ruderalis ( autoflowering cannabis ) do not initiate flowering based on photoperiodism. [ 18 ] Instead, they may initiate flowering after attaining a certain overall developmental stage or age, or in response to alternative environmental stimuli, such as vernalisation (a period of low temperature). [ 1 ] [ 2 ]
Daylength, and thus knowledge of the season of the year, is vital to many animals. A number of biological and behavioural changes are dependent on this knowledge. Together with temperature changes, photoperiod provokes changes in the color of fur and feathers, migration , entry into hibernation , sexual behaviour , and even the resizing of organs.
In insects , sensitivity to photoperiod has been proven to be initiated by photoreceptors located in the brain. [ 19 ] [ 20 ] Photoperiod can affect insects at different life stages, serving as an environmental cue for physiological processes such as diapause induction and termination, and seasonal morphs. [ 21 ] In the water strider Aquarius paludum , for instance, photoperiod conditions during nymphal development have been shown to trigger seasonal changes in wing frequency and also induce diapause, although the threshold critical day lengths for the determination of the two traits diverged by about an hour. [ 22 ] In Gerris buenoi , another water strider species, photoperiod has also been shown to be the cause of wing polyphenism , [ 23 ] although the specific critical daylengths differed between the species, suggesting that phenotypic plasticity in response to photoperiod has evolved even between relatively closely related species.
The singing frequency of birds such as the canary depends on the photoperiod. In the spring, when the photoperiod increases (more daylight), the male canary's testes grow. As the testes grow, more androgens are secreted and song frequency increases. During autumn, when the photoperiod decreases (less daylight), the male canary's testes regress and androgen levels drop dramatically, resulting in decreased singing frequency. Not only is singing frequency dependent on the photoperiod but the song repertoire is also. The long photoperiod of spring results in a greater song repertoire. Autumn's shorter photoperiod results in a reduction in song repertoire. These behavioral photoperiod changes in male canaries are caused by changes in the song center of the brain. As the photoperiod increases, the high vocal center (HVC) and the robust nucleus of the archistriatum (RA) increase in size. When the photoperiod decreases, these areas of the brain regress. [ 24 ]
In mammals, daylength is registered in the suprachiasmatic nucleus (SCN), which is informed by retinal light-sensitive ganglion cells , which are not involved in vision. The information travels through the retinohypothalamic tract (RHT). In most species the hormone melatonin is produced by the pineal gland only during the hours of darkness, influenced by the light input through the RHT and by innate circadian rhythms . This hormonal signal, combined with outputs from the SCN, informs the rest of the body about the time of day, and the length of time that melatonin is secreted is how the time of year is perceived.
Many mammals, particularly those inhabiting temperate and polar regions, exhibit a remarkable degree of seasonality in response to changes in daylight hours (photoperiod). This seasonality manifests in a broad spectrum of behaviors and physiology, including hibernation, seasonal migrations, and coat color changes. A prime example of adaptation to photoperiod is found in seasonal coat color (SCC) species. [ 25 ] These animals undergo molting, transforming from dark summer fur to a white winter coat that provides crucial camouflage in snowy environments.
Human seasonality has been described as largely evolutionary baggage . [ 26 ] Human birth rate varies throughout the year, and the peak month of births appears to vary by latitude. [ 27 ] Seasonality in human birth rate appears to have largely decreased since the industrial revolution. [ 27 ] [ 28 ]
Photoperiodism has also been demonstrated in other organisms besides plants and animals. The fungus Neurospora crassa as well as the dinoflagellate Lingulodinium polyedra and the unicellular green alga Chlamydomonas reinhardtii have been shown to display photoperiodic responses. [ 29 ] [ 30 ] [ 31 ] | https://en.wikipedia.org/wiki/Photoperiodism |
Photopharmacology is an emerging multidisciplinary field that combines photochemistry and pharmacology . [ 1 ] Built upon the ability of light to change the pharmacokinetics and pharmacodynamics of bioactive molecules, it aims at regulating the activity of drugs in vivo by using light. [ 2 ] The light-based modulation is achieved by incorporating molecular photoswitches such as azobenzene and diarylethenes or photocages such as o-nitrobenzyl, coumarin, and BODIPY compounds into the pharmacophore. [ 3 ] This selective activation of the biomolecules helps prevent or minimize off-target activity and systemic side effects. Moreover, light being the regulatory element offers additional advantages such as the ability to be delivered with high spatiotemporal precision, low to negligible toxicity, and the ability to be controlled both qualitatively and quantitatively by tuning its wavelength and intensity. [ 4 ]
Though photopharmacology is a relatively new field, the concept of using light in therapeutic applications came into practice a few decades ago. Photodynamic therapy (PDT) is a well-established, clinically practiced protocol in which photosensitizers are used to produce singlet oxygen for destroying diseased or damaged cells or tissues. [ 2 ] Optogenetics is another method that relies on light for dynamically controlling biological functions, especially of the brain and nervous system. [ 4 ] Though this approach has proven useful as a research tool, its clinical implementation is limited by the requirement for genetic manipulation. Together, these two techniques laid the foundation for photopharmacology. Today, it is a rapidly evolving field with diverse applications in both basic research and clinical medicine which has the potential to overcome some of the challenges limiting the range of applications of the other light-guided therapies.
Figure 1. Schematic representation of the mechanism of (a) photopharmacology (b) photodynamic therapy, and (c) optogenetics.
The discovery of natural photoreceptors such as rhodopsins in the eye inspired the biomedical and pharmacology research community to engineer light-sensitive proteins for therapeutic applications. [ 2 ] The development of synthetic photoswitchable molecules is the most significant milestone in the history of light-delivery systems. Scientists are continuing with their efforts to explore new photoswitches and delivery strategies with enhanced performance to target different biological molecules such as ion channels, nucleic acid, and enzyme receptors. Photopharmacology research progressed from in vitro to in vivo studies in a significantly short period of time yielding promising results in both forms. Clinical trials are underway to assess the safety and efficacy of these photopharmacological therapies further and validate their potential as an innovative drug delivery approach.
Molecular photoswitches are utilized in the field of photopharmacology, where the energetics of a molecule can be reversibly controlled with light to achieve spatial and temporal resolution of a particular effect. Photoswitches may function by undergoing photoisomerization, through which light is used to conformationally adapt a molecule to a biological site, or through an environmental effect, where an external factor such as a solvent effect or hydrogen bonding can selectively allow or quench an emissive state within a molecule. To visualize photophysical processes, a useful depiction is the Jablonski diagram . This is a diagram which depicts electronic and vibrational energy levels within a molecule as vertical levels and shows the possible relaxation pathways from excited states. Typically, the ground state is referred to as S 0 , and is drawn at the bottom of the figure with nearby vibrational excitations just above it. An absorption will promote an electron into the S 1 state at any vibrational energy level, or into a higher-order excited state if the absorbed energy has enough magnitude. The excited state can then undergo internal conversion, a nonradiative relaxation to a lower electronic state of the same spin multiplicity, or vibrational relaxation within a state. This may be followed by an intersystem crossing, wherein the electron undergoes a spin flip, or by a radiative or nonradiative decay back to the ground state. [ 5 ]
One example of an organic compound that undergoes photoisomerization is azobenzene . The structure is two phenyl rings joined with a N=N double bond and is the simplest aryl azo compound. Azobenzene and its derivatives have two accessible absorbance bands: the S 1 state from a n-π* transition which can be excited into using blue light, and the S 2 state from a π-π* transition that can be excited into using ultraviolet light. [ 6 ] Azobenzene and its derivatives have two isomers, trans and cis. The trans isomer, having the phenyl rings on opposite sides of the azo double bond, is the thermally preferred isomer as there is less stereoelectronic distortion and more delocalization present. However, excitation of the trans isomer to the S 2 state facilitates a shift to the cis isomer. The S 1 absorption is associated with a conversion back to the trans isomer. In this way, azobenzene and its derivatives can act as reversible stores of energy by maintaining a strained configuration in the cis isomer. Modifications of the substituents on azobenzene allow the energetics of these absorptions to be tuned, and if they are engineered such that the two absorption bands overlap a single wavelength of light can be used to flip between them. There are a number of similar photoswitches which isomerize between E and Z configurations across an azo group (for instance, azobenzene and azopyrazole) or an ethylene bridge (for instance, stilbene and hemithioindigo). [ 7 ]
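Because both isomers absorb at any given irradiation wavelength, photoswitching drives a sample toward a photostationary state rather than a pure isomer. A minimal sketch of that balance follows; the extinction coefficients and quantum yields are assumed, azobenzene-like illustrative numbers, not literature-exact values:

```python
def photostationary_cis_fraction(eps_trans, eps_cis, phi_tc, phi_ct):
    """Cis fraction at the photostationary state under one wavelength.

    At steady state the forward (trans->cis) and reverse (cis->trans)
    photochemical rates balance:
        eps_trans * phi_tc * [trans] = eps_cis * phi_ct * [cis]
    """
    forward = eps_trans * phi_tc
    reverse = eps_cis * phi_ct
    return forward / (forward + reverse)

# Illustrative values: under UV (pi-pi* band) the trans isomer absorbs
# far more strongly, so irradiation pumps the mixture toward cis.
f_cis = photostationary_cis_fraction(eps_trans=22000, eps_cis=2500,
                                     phi_tc=0.12, phi_ct=0.45)
print(round(f_cis, 2))  # well above 0.5: a cis-enriched mixture
```

Tuning the substituents, as the text notes, shifts these absorption bands and hence the attainable photostationary composition at each wavelength.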
Alternatively, photoswitches may themselves be emissive and exhibit environmental control over their properties. One such example is a class of ruthenium polypyridyl coordination complexes. Typically they contain two bidentate bipyridine or phenanthroline ligands and an extended phenanthroline-phenazine bidentate ligand such as dipyrido[3,2-a:2′,3′-c]phenazine (dppz). [ 8 ] These complexes have an accessible metal-to-ligand charge transfer excited state ( 1 MLCT) which undergoes rapid intersystem crossing to a 3 MLCT state due to the strong spin-orbit coupling of the ruthenium center. These excited states are localized on the phenazine nitrogens of the extended ligand, and emission occurs from the 3 MLCT state. Hydrogen bonding interactions, such as the presence of water around these nitrogen atoms, stabilize the 3 MLCT state, quenching the emission process. Thus, by controlling whether an aqueous or otherwise protic polar solvent is present, emissive behavior can be “turned on/off”, and alternation between “bright states” and “dark states” is facilitated. This light switch behavior makes these and similar complexes of recent interest in photopharmacological applications such as photodynamic therapy.
As previously mentioned, photopharmacology relies on the use of molecular photoswitches being incorporated into the structure of biologically active molecules which allows their potency to be controlled optically. [ 7 ] They are introduced into the structure of bioactive compounds via insertion, extension, or bioisosteric replacement. [ 7 ] These incorporations can be supported by structural considerations of the molecule or SAR (structure-activity relationship) analysis to determine the optimal position. [ 7 ] Some examples of photoswitchable molecules commonly used in photopharmacology are azobenzenes, diarylethenes, and photocages. [ 9 ]
Azobenzenes are a class of photoswitchable molecules and are used in photopharmacological applications for their reversible photoisomerization, as described in the previous section. An example of a photoswitchable molecule that uses azobenzene is phototrexate. Phototrexate is an inhibitor of human dihydrofolate reductase and is an analogue of methotrexate , a chemotherapy agent. [ 10 ] When in its photoactive cis form, phototrexate has been shown to be a potent antifolate and is relatively inactive when in the trans form. [ 10 ] The azologization, or incorporation of azobenzene, of methotrexate allows for control of cytotoxic activity and is considered a step forward in developing targeted anticancer drugs with localized efficacy. [ 10 ]
Diarylethene photoswitches have reversible cyclization and cycloreversion reactions that are photoinduced. [ 11 ] They are a class of compounds that have aromatic functional groups bonded to each end of a carbon-carbon double bond. An example of this class of molecule that is used in photopharmacology is stilbene . Under the influence of light, stilbene switches between its two isomers (E and Z).
Figure 4. Figure showing stilbene isomerizations under light from E to Z.
Diarylethenes have been shown to have some advantages over the more researched azobenzenes switches, such as thermal irreversibility, high photoswitching efficiency, favorable cellular stability, and low toxicity. [ 11 ] Diarylethenes have been shown to have promise in fields other than photopharmacology as well. These fields include optical data storage, optoelectronic devices, supramolecular self-assembly and anti-counterfeiting. [ 11 ]
A class of substances known as photocages contain “photosensitive groups, also known as ‘photoremovable protecting groups’ (PPGs), from which target substances are released upon exposure to specific wavelengths of light”. [ 12 ] The photosensitive groups physically and chemically protect the target from being released until the molecule undergoes photoreaction. [ 12 ] Due to these interactions with light, they are commonly used molecules in photopharmacology. More recently, they have played an important role in photoactivated chemotherapy (PACT). In PACT, photocages utilize a photoremovable protecting group that protects cytotoxic drugs until the bond is cleaved via light interaction and the cytotoxic drug is released. [ 13 ] Some well-known photocages include “o-nitrobenzyl derivatives, coumarin derivatives, BODIPY, xanthene derivatives, quinone and diarylene derivatives”. [ 12 ] However, there are limitations with using photocages in clinical applications, as there are not many PPGs that can be used in vivo. This is due to PPG-payload conjugates needing to have acceptable solubility and biological inertness for biocompatibility, and the need for efficient uncaging above 600 nm. [ 13 ]
Figure 5. Example of a photocage release system activated by NIR. [ 14 ]
Photopharmacology, the use of light to control the activity of drugs, has emerged as a promising approach to drug delivery and therapy. By harnessing the power of light, researchers can achieve precise control over drug release and activation, offering new possibilities for targeted and personalized treatments. This subsection explores the application of photopharmacology in drug delivery, focusing on recent advancements and potential clinical applications.
In one study, [ 15 ] researchers designed HDAC inhibitors which can be activated or deactivated with light, providing precise therapeutic control. This approach could reduce the side effects of traditional chemotherapy by targeting inhibitors to specific body areas, potentially leading to more effective and personalized cancer treatments.
In another study, [ 16 ] the researchers developed a strategy to attach a photoswitchable group to a common antibiotic, ciprofloxacin. By attaching the photoswitchable group, researchers can control the activity of ciprofloxacin with light. This approach could potentially lead to new ways of treating bacterial infections, with the ability to switch the antibiotic's activity on and off as needed.
In another paper, [ 17 ] an in vitro protocol to test different light wavelengths on human cancer cell lines was developed, finding that blue light most effectively inhibited cell growth. This suggests that photopharmacology could offer new cancer treatment options by targeting specific light wavelengths to modulate drug activity in tumor cells.
Another application of photopharmacology [ 18 ] is developing a luminescent photoCORM grafted on carboxymethyl chitosan, which, when exposed to light, releases carbon monoxide (CO) to induce apoptotic death in colorectal cancer cells, demonstrating precise control over CO release for targeted cancer therapy.
Researchers developed a toolbox of photoswitchable antagonists that can interact with GPCRs, a class of proteins involved in various cellular processes. [ 19 ] By using light to switch the activity of these antagonists, researchers can control the interaction between the antagonists and GPCRs in real time. This approach allows for precise modulation of GPCR activity, which could lead to new insights into cellular signaling pathways and potential therapeutic applications.
In another application [ 20 ] by using light to control the assembly of nanopores, researchers can potentially regulate the flow of ions or molecules through these nanopores. This approach could have applications in various fields, including sensing, drug delivery, and nanotechnology.
Another paper [ 21 ] reports on the use of photopharmacology to control drug activity; multifunctional fibers in the study deliver light and drugs to specific body areas. Implanted fibers activate light-responsive drugs, altering their structure, and offering precise drug delivery for conditions needing exact timing or dosage.
In another study, [ 22 ] ligands were designed to switch their binding mode to G-quadruplex DNA upon exposure to visible light. This method could potentially modulate the activity of G-quadruplex DNA, crucial in gene expression and telomere maintenance, offering new therapeutic avenues, particularly in cancer treatment. The study underscores photopharmacology's promise in targeting specific DNA structures, suggesting G-quadruplex DNA as a viable target for future photopharmacological interventions.
Another study [ 23 ] developed photoactivatable antibody-photoCORM conjugates targeting human ovarian cancer cells, releasing CO upon light exposure to diminish cell viability. This approach offers precise cancer cell targeting while minimizing harm to healthy tissue, showcasing the potential of photopharmacology in cancer therapy.
In another paper, [ 24 ] a photoactivatable compound that binds to and modulates the activity of the CRY1 protein, which regulates the mammalian circadian clock, was developed. By using light to control the compound's activity, researchers can potentially treat circadian rhythm disorders and related health conditions by modulating the function of CRY1.
In another application [ 25 ] researchers used photopharmacology to control drug release and focus on a drug interacting with tubulin, visualizing its release in real time with time-resolved serial crystallography. This technique offers insights into drug-tubulin interactions and demonstrates the potential for designing drugs with precise actions.
The future of photopharmacology holds immense promise. It has the potential to revolutionize conventional drug therapy offering new avenues for precision medicine, treating neurological disorders, and in the field of oncology and ophthalmology . [ 1 ] Additionally, it holds promise for the field of regenerative medicine where photoswitches can be used to modulate the activity of signaling pathways for targeted tissue repair and regeneration. [ 3 ]
Photopharmacology will continue to grow and expand with the new discoveries and advances happening in other related fields such as synthetic chemistry, biology, nanotechnology, pharmacology, and bioengineering. While the potential of photopharmacology is vast, some challenges must be addressed to make it a clinical reality. One challenge is the development of stable and biocompatible photoswitches that are selective for their target receptors without cross-activity. [ 2 ] It is particularly important that these photoswitches have their absorbance bands fall within the wavelength range of 650 nm to 900 nm. [ 2 ] Hence, careful molecular design of photoswitches is required to achieve the characteristics mentioned above and the desired level of performance. At present, photopharmacology uses a rational drug design approach based on studying the structure-activity relationship; however, a phenotypic screening for photoswitchable drugs could also be beneficial.
In order to achieve good spatial-temporal control over drug activity there should be a significant difference between the activity of isomers. However, understanding the structural changes during the biological effects induced by photoswitching is limited. This scarcity of knowledge is also a challenge for the growth of this field, as it hampers the optimization of the activity and potency of the isomers to obtain the expected outcomes during applications. [ 3 ]
The biggest challenge in photopharmacology is finding appropriate and effective ways to deliver light to deep tissues in the body while avoiding issues such as scattering and absorption. Various strategies have been attempted in this regard, one being the development of photoswitchable ligands that respond to deep-tissue-penetrating wavelengths such as red or infrared light. Moreover, some recent preclinical studies have spurred the development of wireless, compact or injectable, remotely controllable devices capable of delivering light to neural tissues with minimal damage. [ 26 ] There are novel optofluidic systems that can simultaneously regulate both drug delivery and light activity at specific sites. Although external delivery of light is the preferred method, another option is the use of internal light sources, such as luminescent compounds, which would deliver light directly at the site of action. This could avoid the issues related to light penetration and also enhance the degree of selectivity. In addition, this creates the opportunity to use photopharmacology as a theranostic approach that combines targeted drug delivery and molecular imaging. [ 2 ] | https://en.wikipedia.org/wiki/Photopharmacology
In biology , photophobia (adjective: photophobic ) is a negative response to light.
Photophobia is a behavior demonstrated by insects or other animals which seek to stay out of the light.
In botany , the term photophobia/photophobic describes shade-loving plants (sciophytes) that thrive in low light conditions. [ 1 ]
Photophobia (or photophobic response) may also refer to a negative phototaxis or phototropism response.
| https://en.wikipedia.org/wiki/Photophobia_(biology)
A photophore is a glandular organ that appears as luminous spots on marine animals, including fish and cephalopods . The organ can be simple, or as complex as the human eye, equipped with lenses, shutters, color filters, and reflectors; unlike an eye, however, it is optimized to produce light, not absorb it. [ 1 ]
The bioluminescence can be produced from compounds during the digestion of prey, from specialized mitochondrial cells in the organism called photocytes ("light-producing" cells), or from symbiotic bacteria cultured within the organism. [ citation needed ]
The character of photophores is important in the identification of deep sea fishes . Photophores on fish are used for attracting food or for camouflage from predators by counter-illumination . [ citation needed ]
Photophores are found on some cephalopods including the firefly squid , which can create impressive light displays, as well as numerous other deep sea organisms, such as the pocket shark Mollisquama mississippiensis and the strawberry squid . [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Photophore |
Photophoresis denotes the phenomenon that small particles suspended in gas (aerosols) or liquids (hydrocolloids) start to migrate when illuminated by a sufficiently intense beam of light . The phenomenon arises from a non-uniform temperature distribution over an illuminated particle in a fluid medium. [ 1 ] Separately from photophoresis, in a fluid mixture of different kinds of particles, the migration of some kinds of particles may be due to differences in their absorption of thermal radiation and other thermal effects collectively known as thermophoresis . In laser photophoresis, particles migrate when they have a refractive index different from that of their surrounding medium. Migration is usually possible when the laser is only slightly focused or unfocused. A particle with a higher refractive index than the surrounding molecules moves away from the light source due to momentum transfer from absorbed and scattered photons; this is referred to as the radiation pressure force. This force depends on light intensity and particle size but not on the properties of the surrounding medium. Just as in the Crookes radiometer , light can heat up one side of a particle, and gas molecules bounce from that surface with greater velocity, pushing the particle to the other side. Under certain conditions, with particles of diameter comparable to the wavelength of light, the phenomenon of negative indirect photophoresis occurs: unequal heat generation between the irradiated front side and the back side of the particle produces a temperature gradient in the surrounding medium such that molecules at the far side of the particle from the light source heat up more, causing the particle to move towards the light source. [ 2 ]
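The radiation pressure component mentioned above can be estimated from photon momentum: a fully absorbed beam of power P exerts a force P/c. The sketch below uses geometric optics, and the beam intensity and particle size are assumed illustrative values, not figures from the article's sources:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def radiation_pressure_force(intensity_w_m2, radius_m, reflectivity=0.0):
    """Photon-momentum force on a spherical particle (geometric optics).

    Perfect absorption transfers momentum p = E/c; a reflected fraction
    transfers up to twice that, hence the (1 + reflectivity) factor.
    Diffraction and absorption efficiency are ignored, so this is an
    order-of-magnitude sketch only.
    """
    cross_section = math.pi * radius_m**2
    return (1.0 + reflectivity) * intensity_w_m2 * cross_section / C

# A 1-micron absorbing particle in an assumed 1e9 W/m^2 beam
# (roughly a 1 W laser focused to a few tens of microns):
f = radiation_pressure_force(1e9, 1e-6)
print(f"{f:.1e} N")  # -> 1.0e-11 N, i.e. roughly 10 pN
```

Piconewton-scale forces of this kind are enough to move micron-sized aerosol particles, which is why photophoretic trapping and levitation experiments are feasible with table-top lasers.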
If the suspended particle is rotating, it will also experience the Yarkovsky effect .
Discovery of photophoresis is usually attributed to Felix Ehrenhaft in the 1920s, though earlier observations were made by others including Augustin-Jean Fresnel .
The applications of photophoresis span several branches of science, including physics, chemistry, and biology. Photophoresis is applied in particle trapping and levitation, [ 3 ] in the field flow fractionation of particles, [ 4 ] in the determination of thermal conductivity and temperature of microscopic grains [ 5 ] and in the transport of soot particles in the atmosphere. [ 6 ] The use of light to separate aerosol particles based on their optical properties makes possible the separation of organic and inorganic particles of the same aerodynamic size . [ 7 ]
Recently, photophoresis has been suggested as a chiral sorting mechanism for single-walled carbon nanotubes. [ 8 ] The proposed method would utilise differences in the absorption spectra of semiconducting carbon nanotubes arising from optically excited transitions in their electronic structure. If developed, the technique would be orders of magnitude faster than currently established ultracentrifugation techniques.
In 2021 Azadi, Popov et al. reported "light-driven levitation of macroscopic polymer films with nanostructured surface as candidates for long-duration near-space flight". Using a light intensity comparable to sunlight, they levitated centimeter-scale disks made of commercial 0.5-micron-thick mylar film coated with carbon nanotubes on one side. [ 9 ] Experiments by Schafer, Kim, Vlassak and Keith suggest that photophoretic forces could levitate thin 10-centimetre-scale structures in Earth's stratosphere indefinitely for the purpose of atmospheric science, especially monitoring high-altitude weather. In 2022 they described a preliminary design, fabricated with available methods, for a 10 cm diameter device combining a levitating structure of two membranes [ 10 ] 2 μm apart in a stiff support structure, tested to have sufficient strength to withstand transport, deployment, and flight at 25 km altitude. The payload capacity is 300 mg, enough to support bidirectional radio communication at over 10 Mb/s and some navigational control. By upscaling the structure it might carry payloads of a few grams. They suggest uses for telecommunications and deployment on Mars. [ 11 ]
Direct photophoresis is caused by the transfer of photon momentum to a particle by refraction and reflection. [ 12 ] Movement of particles in the forward direction occurs when the particle is transparent and has an index of refraction larger than that of its surrounding medium. [ 7 ] Indirect photophoresis occurs as a result of an increase in the kinetic energy of molecules when particles absorb incident light only on the irradiated side, thus creating a temperature gradient within the particle. In this situation the surrounding gas layer reaches temperature equilibrium with the surface of the particle. Molecules with higher kinetic energy in the region of higher gas temperature impinge on the particle with greater momenta than molecules in the cold region; this causes a migration of particles in a direction opposite to the surface temperature gradient. The component of the photophoretic force responsible for this phenomenon is called the radiometric force, [ 13 ] which arises from the uneven distribution of radiant energy (the source function) within the particle.
Indirect photophoretic force depends on the physical properties of the particle and the surrounding medium.
For pressures p at which the mean free path of the gas is much larger than the characteristic size r₀ of the suspended particle (direct photophoresis), the longitudinal force is [ 14 ]
where the mean temperature of the scattered gas is given in terms of the thermal accommodation coefficient α and the momentum accommodation coefficient α_m,
and the black body temperature of the particle in terms of the net light flux I = εI₀, the Stefan–Boltzmann constant σ_SB, and the temperature of the radiation field T_opt.
k is the thermal conductivity of the particle.
The asymmetry factor for spheres J₁ is usually 1/2 (positive longitudinal photophoresis).
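The black-body particle temperature referenced above follows from a simple radiative balance. The sketch below is an illustrative assumption, not a formula from the article: the particle absorbs the net flux I = εI₀ over its cross-section πr₀² and re-emits as a black body over its full surface 4πr₀², giving σ_SB·T⁴ = σ_SB·T_opt⁴ + I/4.

```python
# Radiative-equilibrium ("black body") temperature of a small absorbing
# sphere in a radiation field at temperature T_opt.
# Assumption (not from the article): absorption over pi*r0^2, emission
# over 4*pi*r0^2, hence sigma_SB * T^4 = sigma_SB * T_opt^4 + I / 4.

SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_temperature(I0, emissivity, T_opt):
    """Equilibrium temperature (K) of a sphere illuminated by flux I0 (W/m^2)."""
    I = emissivity * I0  # net absorbed flux I = eps * I0
    return (T_opt**4 + I / (4 * SIGMA_SB)) ** 0.25

# Example: sunlight-level illumination of a dark grain in a 300 K field.
T = blackbody_temperature(I0=1000.0, emissivity=0.9, T_opt=300.0)
print(f"{T:.1f} K")
```

With no illumination the particle simply relaxes to the radiation-field temperature, which is a quick sanity check on the balance.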
For non-spherical particles, the average force exerted on the particle is given by the same equation, where the radius r₀ is now the radius of the volume-equivalent sphere. [ 15 ] | https://en.wikipedia.org/wiki/Photophoresis |
In the process of photosynthesis , the phosphorylation of ADP to form ATP using the energy of sunlight is called photophosphorylation . Cyclic photophosphorylation occurs in both aerobic and anaerobic conditions, driven by sunlight, the primary source of energy available to living organisms. All organisms produce a phosphate compound, ATP , which is the universal energy currency of life. In photophosphorylation, light energy is used to pump protons across a biological membrane, mediated by the flow of electrons through an electron transport chain . This stores energy in a proton gradient . As the protons flow back through an enzyme called ATP synthase , ATP is generated from ADP and inorganic phosphate. ATP is essential in the Calvin cycle to assist in the synthesis of carbohydrates from carbon dioxide and NADPH .
Both the structure of ATP synthase and its underlying gene are remarkably similar in all known forms of life. ATP synthase is powered by a transmembrane electrochemical potential gradient , usually in the form of a proton gradient. In all living organisms, a series of redox reactions is used to produce a transmembrane electrochemical potential gradient, or a so-called proton motive force (pmf).
Redox reactions are chemical reactions in which electrons are transferred from a donor molecule to an acceptor molecule. The underlying force driving these reactions is the Gibbs free energy of the reactants relative to the products. If donor and acceptor (the reactants) are of higher free energy than the reaction products, the electron transfer may occur spontaneously. The Gibbs free energy is the energy available ("free") to do work. Any reaction that decreases the overall Gibbs free energy of a system will proceed spontaneously (given that the system is isobaric and also at constant temperature), although the reaction may proceed slowly if it is kinetically inhibited.
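The Gibbs free energy of a redox electron transfer can be put in numbers via ΔG = −nFΔE, where ΔE is the difference between acceptor and donor standard potentials. The potentials below are standard textbook values for the NADH-to-oxygen couple, used here only as an illustration; they do not come from the article.

```python
# Illustrative sketch: Delta_G = -n * F * Delta_E for electron transfer.
# A negative Delta_G means the transfer can occur spontaneously.

F = 96485.0  # Faraday constant, C/mol

def gibbs_free_energy(n_electrons, E_acceptor, E_donor):
    """Delta G (J/mol) for transfer of n electrons from donor to acceptor."""
    return -n_electrons * F * (E_acceptor - E_donor)

# NADH -> 1/2 O2 (textbook values): E°'(NAD+/NADH) = -0.32 V,
# E°'(1/2 O2 / H2O) = +0.82 V, two electrons transferred.
dG = gibbs_free_energy(2, 0.82, -0.32)
print(f"{dG / 1000:.0f} kJ/mol")  # negative => spontaneous
```

The large negative value is why this electron transfer can be harnessed to do the uphill work of pumping protons.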
The fact that a reaction is thermodynamically possible does not mean that it will actually occur. A mixture of hydrogen gas and oxygen gas does not spontaneously ignite. It is necessary either to supply an activation energy or to lower the intrinsic activation energy of the system, in order to make most biochemical reactions proceed at a useful rate. Living systems use complex macromolecular structures to lower the activation energies of biochemical reactions.
It is possible to couple a thermodynamically favorable reaction (a transition from a high-energy state to a lower-energy state) to a thermodynamically unfavorable reaction (such as a separation of charges, or the creation of an osmotic gradient), in such a way that the overall free energy of the system decreases (making it thermodynamically possible), while useful work is done at the same time. The principle that biological macromolecules catalyze a thermodynamically unfavorable reaction if and only if a thermodynamically favorable reaction occurs simultaneously, underlies all known forms of life.
The transfer of electrons from a donor molecule to an acceptor molecule can be spatially separated into a series of intermediate redox reactions. This is an electron transport chain (ETC). Electron transport chains often produce energy in the form of a transmembrane electrochemical potential gradient. The gradient can be used to transport molecules across membranes. Its energy can be used to produce ATP or to do useful work, for instance the mechanical work of rotating bacterial flagella .
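The proton motive force mentioned above combines an electrical term and a chemical (pH) term. The sketch below uses one common sign convention, pmf = Δψ − (2.303·RT/F)·ΔpH with ΔpH = pH_in − pH_out; the convention and the example numbers are illustrative assumptions, not values from the article.

```python
# Sketch of the proton motive force (pmf) in millivolts under one common
# sign convention. Example values are illustrative only.

R = 8.314    # gas constant, J mol^-1 K^-1
F = 96485.0  # Faraday constant, C mol^-1

def proton_motive_force(delta_psi_mV, pH_in, pH_out, T=298.0):
    """pmf (mV) = membrane potential - ~59 mV per unit of (pH_in - pH_out)."""
    z = 2.303 * R * T / F * 1000.0  # ~59 mV per pH unit at 25 C
    return delta_psi_mV - z * (pH_in - pH_out)

# Chloroplast-like example: small membrane potential, acidic lumen inside
# (pH 5) versus basic stroma outside (pH 8).
print(f"{proton_motive_force(30.0, 5.0, 8.0):.0f} mV")
```

In this illustration the pH gradient, not the membrane potential, carries most of the stored energy, which is the situation usually described for thylakoids.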
This form of photophosphorylation occurs on the stroma lamella, or fret channels. In cyclic photophosphorylation, the high-energy electron released from P700, a pigment in a complex called photosystem I , flows in a cyclic pathway. The electron starts in photosystem I, passes from the primary electron acceptor to ferredoxin and then to plastoquinone , next to cytochrome b 6 f (a similar complex to that found in mitochondria ), and finally to plastocyanin before returning to photosystem I. This transport chain produces a proton-motive force, pumping H + ions across the membrane and producing a concentration gradient that can be used to power ATP synthase during chemiosmosis . This pathway is known as cyclic photophosphorylation, and it produces neither O 2 nor NADPH. Unlike non-cyclic photophosphorylation, NADP + does not accept the electrons; they are instead sent back to the cytochrome b 6 f complex.
In bacterial photosynthesis, a single photosystem is used, which is therefore involved in cyclic photophosphorylation.
It is favored in anaerobic conditions, under high irradiance, and at CO 2 compensation points.
The other pathway, non-cyclic photophosphorylation, is a two-stage process involving two different chlorophyll photosystems in the thylakoid membrane. First, a photon is absorbed by chlorophyll pigments surrounding the reaction center of photosystem II. The light excites an electron in the pigment P680 at the core of photosystem II, which is transferred to the primary electron acceptor, pheophytin , leaving behind P680 + . The energy of P680 + is used in two steps to split a water molecule into 2H + + 1/2 O 2 + 2e − ( photolysis , or light-splitting). An electron from the water molecule reduces P680 + back to P680, while the H + and oxygen are released. The electron transfers from pheophytin to plastoquinone (PQ), which takes 2e − (in two steps) from pheophytin and two H + ions from the stroma to form PQH 2 . This plastoquinol is later oxidized back to PQ, releasing the 2e − to the cytochrome b 6 f complex and the two H + ions into the thylakoid lumen . The electrons then pass through Cyt b 6 and Cyt f to plastocyanin , the energy released in the transfer being used to pump hydrogen ions (H + ) into the thylakoid space. This creates a H + gradient; the H + ions flow back into the stroma of the chloroplast, providing the energy for the (re)generation of ATP.
The photosystem II complex replaces its lost electrons with electrons from H 2 O, so electrons are not returned to photosystem II as they would be in the analogous cyclic pathway. Instead, they are transferred to the photosystem I complex, which boosts their energy to a higher level using a second solar photon. The excited electrons are transferred to a series of acceptor molecules, but this time are passed on to an enzyme called ferredoxin-NADP + reductase , which uses them to catalyze the reaction
This consumes the H + ions produced by the splitting of water, leading to a net production of 1/2O 2 , ATP, and NADPH + H + with the consumption of solar photons and water.
The concentration of NADPH in the chloroplast may help regulate which pathway electrons take through the light reactions. When the chloroplast runs low on ATP for the Calvin cycle , NADPH will accumulate and the plant may shift from noncyclic to cyclic electron flow.
In 1950, the first experimental evidence for the existence of photophosphorylation in vivo was presented by Otto Kandler using intact Chlorella cells and interpreting his findings as light-dependent ATP formation. [ 1 ] In 1954, Daniel I. Arnon et al. discovered photophosphorylation in vitro in isolated chloroplasts with the help of 32 P. [ 2 ] His first review on the early research of photophosphorylation was published in 1956. [ 3 ] | https://en.wikipedia.org/wiki/Photophosphorylation |
Photopigments are unstable pigments that undergo a chemical change when they absorb light. The term is generally applied to the non-protein chromophore moiety of photosensitive chromoproteins , such as the pigments involved in photosynthesis and photoreception . In medical terminology, "photopigment" commonly refers to the photoreceptor proteins of the retina . [ 1 ]
Photosynthetic pigments convert light into biochemical energy. Examples of photosynthetic pigments are chlorophyll , carotenoids and phycobilins . [ 2 ] These pigments enter a high-energy state upon absorbing a photon, which they can release in the form of chemical energy. This can occur via light-driven pumping of ions across a biological membrane (e.g. in the case of the proton pump bacteriorhodopsin ) or via excitation and transfer of electrons released by photolysis (e.g. in the photosystems of the thylakoid membranes of plant chloroplasts ). [ 2 ] In chloroplasts , the light-driven electron transfer chain in turn drives the pumping of protons across the membrane. [ 2 ]
The pigments in photoreceptor proteins either change their conformation or undergo photoreduction when they absorb a photon. [ 3 ] This change in the conformation or redox state of the chromophore then affects the protein conformation or activity and triggers a signal transduction cascade. [ 3 ]
Examples of photoreceptor pigments include: [ 4 ]
In medical terminology, the term photopigment is applied to opsin -type photoreceptor proteins, specifically rhodopsin and photopsins , the photoreceptor proteins in the retinal rods and cones of vertebrates that are responsible for visual perception , but also melanopsin and others. [ 5 ] | https://en.wikipedia.org/wiki/Photopigment |
A photoplethysmogram ( PPG ) is an optically obtained plethysmogram that can be used to detect blood volume changes in the microvascular bed of tissue. [ 1 ] [ 2 ] A PPG is often obtained by using a pulse oximeter which illuminates the skin and measures changes in light absorption. [ 3 ] A conventional pulse oximeter monitors the perfusion of blood to the dermis and subcutaneous tissue of the skin.
With each cardiac cycle the heart pumps blood to the periphery. Even though this pressure pulse is somewhat damped by the time it reaches the skin, it is enough to distend the arteries and arterioles in the subcutaneous tissue. If the pulse oximeter is attached without compressing the skin, a pressure pulse can also be seen from the venous plexus, as a small secondary peak.
The change in volume caused by the pressure pulse is detected by illuminating the skin with the light from a light-emitting diode (LED) and then measuring the amount of light either transmitted or reflected to a photodiode. [ 4 ] Each cardiac cycle appears as a peak, as seen in the figure. Because blood flow to the skin can be modulated by multiple other physiological systems, the PPG can also be used to monitor breathing, hypovolemia , and other circulatory conditions. [ 5 ] Additionally, the shape of the PPG waveform differs from subject to subject, and varies with the location and manner in which the pulse oximeter is attached.
Although PPG sensors are in common use in a number of commercial and clinical applications, the exact mechanisms determining the shape of the PPG waveform are not yet fully understood. [ 6 ]
While pulse oximeters are commonly used medical devices , the PPG signal they record is rarely displayed and is nominally only processed to determine blood oxygenation and heart rate . [ 2 ] The PPG can be obtained from transmissive absorption (as at the finger tip) or reflection (as on the forehead). [ 2 ]
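Pulse oximeters typically derive blood oxygenation from a "ratio of ratios" of the pulsatile (AC) and steady (DC) parts of the red and infrared PPG channels. The sketch below is illustrative only: the peak-to-peak AC estimate, the synthetic waveforms, and the linear calibration curve are textbook-style assumptions, not the algorithm of any real device.

```python
import numpy as np

# Illustrative (non-clinical) sketch of the ratio-of-ratios computation.

def ratio_of_ratios(red, ir):
    """R = (AC_red / DC_red) / (AC_ir / DC_ir) from one or more pulse cycles."""
    red, ir = np.asarray(red, float), np.asarray(ir, float)
    ac = lambda x: x.max() - x.min()  # crude peak-to-peak AC estimate
    dc = lambda x: x.mean()
    return (ac(red) / dc(red)) / (ac(ir) / dc(ir))

def spo2_estimate(R):
    """Common textbook-style linear calibration; illustrative only."""
    return 110.0 - 25.0 * R

# Synthetic one-beat waveforms: small pulsatile ripple on a large baseline.
t = np.linspace(0, 1, 200)
red = 1.0 + 0.01 * np.sin(2 * np.pi * t)
ir = 1.0 + 0.02 * np.sin(2 * np.pi * t)
R = ratio_of_ratios(red, ir)
print(f"R = {R:.2f}, SpO2 ~ {spo2_estimate(R):.0f}%")
```

Real devices use empirically calibrated, device-specific curves rather than this linear approximation.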
In outpatient settings, pulse oximeters are commonly worn on the finger. However, in cases of shock, hypothermia , etc., blood flow to the periphery can be reduced, resulting in a PPG without a discernible cardiac pulse. [ 7 ] In this case, a PPG can be obtained from a pulse oximeter on the head, with the most common sites being the ear, nasal septum , and forehead. PPG can also be configured for multi-site photoplethysmography (MPPG), e.g. by making simultaneous measurements from the right and left ear lobes, index fingers and great toes, offering further opportunities for the assessment of patients with suspected peripheral arterial disease, autonomic dysfunction, endothelial dysfunction, and arterial stiffness. MPPG also offers significant potential for data mining, e.g. using deep learning, as well as a range of other innovative pulse wave analysis techniques. [ 8 ] [ 9 ] [ 10 ] [ 11 ]
Motion artifacts are often a limiting factor preventing accurate readings during exercise and free living conditions. [ 6 ]
Because the skin is so richly perfused, it is relatively easy to detect the pulsatile component of the cardiac cycle. The DC component of the signal is attributable to the bulk absorption of the skin tissue, while the AC component is directly attributable to variation in blood volume in the skin caused by the pressure pulse of the cardiac cycle.
The height of the AC component of the photoplethysmogram is proportional to the pulse pressure, the difference between the systolic and diastolic pressure in the arteries. As seen in the figure showing premature ventricular contractions (PVCs), the cardiac cycle containing a PVC produces a lower-amplitude blood pressure pulse and a correspondingly lower-amplitude PPG pulse. Ventricular tachycardia and ventricular fibrillation can also be detected. [ 12 ]
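Since each cardiac cycle appears as a peak in the PPG, heart rate can be estimated by timing the intervals between peaks. The sketch below uses a synthetic sine as a stand-in signal and a naive local-maximum detector; a real recording would need filtering for baseline wander and motion artifacts first.

```python
import numpy as np

# Minimal sketch: heart rate from per-cycle PPG peaks (synthetic data).

def heart_rate_bpm(signal, fs):
    """Estimate heart rate from PPG samples via simple local-maximum peaks."""
    s = np.asarray(signal, float)
    thresh = s.mean() + 0.5 * s.std()  # only count prominent maxima
    peaks = [i for i in range(1, len(s) - 1)
             if s[i] > s[i - 1] and s[i] >= s[i + 1] and s[i] > thresh]
    if len(peaks) < 2:
        return None
    intervals = np.diff(peaks) / fs  # seconds between beats
    return 60.0 / intervals.mean()

fs = 100.0  # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
ppg = 1.0 + 0.3 * np.sin(2 * np.pi * 1.2 * t)  # 1.2 Hz pulse = 72 bpm
print(f"{heart_rate_bpm(ppg, fs):.0f} bpm")
```

Averaging the beat-to-beat intervals, rather than counting peaks in a window, makes the estimate robust to partial cycles at the edges of the window.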
Respiration affects the cardiac cycle by varying the intrapleural pressure, the pressure between the thoracic wall and the lungs. Since the heart resides in the thoracic cavity between the lungs, the pressure changes of inhalation and exhalation greatly influence the pressure on the vena cava and the filling of the right atrium.
During inspiration, intrapleural pressure decreases by up to 4 mm Hg, which distends the right atrium, allowing for faster filling from the vena cava, increasing ventricular preload, but decreasing stroke volume. Conversely during expiration, the heart is compressed, decreasing cardiac efficiency and increasing stroke volume. When the frequency and depth of respiration increases, the venous return increases, leading to increased cardiac output. [ 14 ]
Much research has focused on estimating respiratory rate from the photoplethysmogram, [ 15 ] as well as more detailed respiratory measurements such as inspiratory time. [ 16 ]
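One common family of approaches exploits the fact that respiration modulates the PPG baseline, so the respiratory rate appears as a dominant low-frequency spectral peak. The sketch below is illustrative, using synthetic data with a known breathing component; the frequency band chosen is an assumption covering typical adult breathing rates.

```python
import numpy as np

# Sketch: respiratory rate from the low-frequency content of a PPG trace.

def respiratory_rate_bpm(ppg, fs):
    """Dominant frequency in the respiratory band, in breaths per minute."""
    s = np.asarray(ppg, float) - np.mean(ppg)
    spectrum = np.abs(np.fft.rfft(s))
    freqs = np.fft.rfftfreq(len(s), 1 / fs)
    band = (freqs > 0.1) & (freqs < 0.5)  # ~6-30 breaths/min (assumed band)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

fs = 50.0
t = np.arange(0, 60, 1 / fs)                      # one minute of data
cardiac = 0.5 * np.sin(2 * np.pi * 1.2 * t)       # 72 bpm pulse
breathing = 0.2 * np.sin(2 * np.pi * 0.25 * t)    # 15 breaths/min baseline
ppg = 1.0 + cardiac + breathing
print(f"{respiratory_rate_bpm(ppg, fs):.0f} breaths/min")
```

Restricting the search to the respiratory band keeps the much stronger cardiac peak from dominating the estimate.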
Anesthesiologists must often judge subjectively whether a patient is sufficiently anesthetized for surgery. As seen in the figure, if a patient is not sufficiently anesthetized, the sympathetic nervous system response to an incision can generate an immediate response in the amplitude of the PPG. [ 13 ]
Shamir, Eidelman, et al. studied the interaction between inspiration and removal of 10% of a patient’s blood volume for blood banking before surgery. [ 17 ] They found that blood loss could be detected both from the photoplethysmogram from a pulse oximeter and an arterial catheter. Patients showed a decrease in the cardiac pulse amplitude caused by reduced cardiac preload during exhalation when the heart is being compressed.
The FDA reportedly provided clearance to a photoplethysmography-based cuffless blood pressure monitor in August 2019. [ 18 ]
While photoplethysmography commonly requires some form of contact with the human skin (e.g., ear, finger), remote photoplethysmography allows physiological processes such as blood flow to be determined without skin contact. This is achieved by using face video to analyze subtle momentary changes in the subject's skin color which are not detectable to the human eye. [ 19 ] [ 20 ] Such camera-based measurement of blood oxygen levels provides a contactless alternative to conventional photoplethysmography. For instance, it can be used to monitor the heart rate of newborn babies, [ 21 ] or analyzed with deep neural networks to quantify stress levels. [ 11 ]
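The remote-PPG idea above can be reduced to a toy computation: average a color channel over a skin region in each video frame, then find the dominant cardiac-band frequency of that time series. The frames below are synthetic arrays standing in for face video, and the green-channel choice and frequency band are common assumptions in the rPPG literature, not specifics from the article.

```python
import numpy as np

# Toy sketch of remote PPG from (synthetic) video frames.

def pulse_from_frames(frames, fps):
    """frames: sequence of HxWx3 arrays; returns estimated heart rate in bpm."""
    green = np.array([f[:, :, 1].mean() for f in frames])  # per-frame mean
    green -= green.mean()
    spectrum = np.abs(np.fft.rfft(green))
    freqs = np.fft.rfftfreq(len(green), 1 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)  # ~42-240 bpm (assumed band)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = [np.full((8, 8, 3), 128.0) + np.sin(2 * np.pi * 1.2 * ti)
          for ti in t]  # subtle periodic color change at 1.2 Hz (72 bpm)
print(f"{pulse_from_frames(frames, fps):.0f} bpm")
```

Real implementations must first detect and track a skin region and suppress motion and illumination artifacts, which is where most of the difficulty lies.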
Remote photoplethysmography can also be performed by digital holography , which is sensitive to the phase of light waves, and hence can reveal sub-micron out-of-plane motion. In particular, wide-field imaging of pulsatile motion induced by blood flow can be measured on the thumb by digital holography . The results are comparable to blood pulse monitored by plethysmography during an occlusion-reperfusion experiment. [ 22 ] A major advantage of this system is that no physical contact with the studied tissue surface area is required. The two major limitations of this approach are (i) the off-axis interferometric configuration that reduces the available spatial bandwidth of the sensor array , and (ii) the use of short-time Fourier transform (via discrete Fourier transform ) analysis that filters out physiological signals.
Principal component analysis of digital holograms [ 23 ] reconstructed from digitized interferograms acquired at rates beyond ~1000 frames per second reveals surface waves on the hand. This method is an efficient way of performing digital holography from on-axis interferograms, which alleviates both the spatial bandwidth reduction of the off-axis configuration and the filtering of physiological signals. A higher spatial bandwidth is crucial for a larger image field of view.
A refinement of holographic photoplethysmography, holographic laser Doppler imaging , enables non-invasive blood flow pulse wave monitoring in blood vessels of the retina , choroid , conjunctiva , and iris . [ 24 ] In laser Doppler holography of the eye fundus in particular, the choroid constitutes the predominant contribution to the high-frequency laser Doppler signal. It is however possible to circumvent its influence by subtracting the spatially averaged baseline signal, achieving high temporal resolution and full-field imaging of pulsatile blood flow. | https://en.wikipedia.org/wiki/Photoplethysmograph |
A photopolymer or light-activated resin is a polymer that changes its properties when exposed to light, often in the ultraviolet or visible region of the electromagnetic spectrum . [ 1 ] These changes are often manifested structurally, for example hardening of the material occurs as a result of cross-linking when exposed to light. An example is shown below depicting a mixture of monomers , oligomers , and photoinitiators that conform into a hardened polymeric material through a process called curing . [ 2 ] [ 3 ]
A wide variety of technologically useful applications rely on photopolymers; for example, some enamels and varnishes depend on photopolymer formulation for proper hardening upon exposure to light. In some instances, an enamel can cure in a fraction of a second when exposed to light, as opposed to thermally cured enamels which can require half an hour or longer. [ 4 ] Curable materials are widely used for medical, printing, and photoresist technologies.
Changes in structural and chemical properties can be induced internally by chromophores that the polymer subunit already possesses, or externally by addition of photosensitive molecules. Typically a photopolymer consists of a mixture of multifunctional monomers and oligomers in order to achieve the desired physical properties, and therefore a wide variety of monomers and oligomers have been developed that can polymerize in the presence of light either through internal or external initiation . Photopolymers undergo a process called curing, where oligomers are cross-linked upon exposure to light, forming what is known as a network polymer . The result of photo-curing is the formation of a thermoset network of polymers. One of the advantages of photo-curing is that it can be done selectively using high energy light sources, for example lasers , however, most systems are not readily activated by light, and in this case a photoinitiator is required. Photoinitiators are compounds that upon radiation of light decompose into reactive species that activate polymerization of specific functional groups on the oligomers. [ 5 ] An example of a mixture that undergoes cross-linking when exposed to light is shown below. The mixture consists of monomeric styrene and oligomeric acrylates . [ 6 ]
Photopolymerized systems are typically cured through UV radiation, since ultraviolet light is more energetic. However, the development of dye-based photoinitiator systems has allowed for the use of visible light , with the potential advantages of being simpler and safer to handle. [ 7 ] UV curing in industrial processes has greatly expanded over the past several decades. Many traditional thermally cured and solvent -based technologies can be replaced by photopolymerization technologies. The advantages of photopolymerization over thermal curing include higher rates of polymerization and environmental benefits from the elimination of volatile organic solvents . [ 1 ]
There are two general routes for photoinitiation: free radical and ionic . [ 1 ] [ 4 ] The general process involves doping a batch of neat polymer with small amounts of photoinitiator, followed by selective radiation of light, resulting in a highly cross-linked product. Many of these reactions do not require solvent which eliminates termination path via reaction of initiators with solvent and impurities, in addition to decreasing the overall cost. [ 8 ]
In ionic curing processes, an ionic photoinitiator is used to activate the functional group of the oligomers that are going to participate in cross-linking . Typically photopolymerization is a very selective process and it is crucial that the polymerization takes place only where it is desired to do so. In order to satisfy this, liquid neat oligomer can be doped with either anionic or cationic photoinitiators that will initiate polymerization only when radiated with light . Monomers , or functional groups, employed in cationic photopolymerization include: styrenic compounds, vinyl ethers , N-vinyl carbazoles , lactones , lactams, cyclic ethers , cyclic acetals , and cyclic siloxanes . The majority of ionic photoinitiators fall under the cationic class; anionic photoinitiators are considerably less investigated. [ 5 ] There are several classes of cationic initiators, including onium salts , organometallic compounds and pyridinium salts. [ 5 ] As mentioned earlier, one of the drawbacks of the photoinitiators used for photopolymerization is that they tend to absorb in the short UV region . [ 7 ] Photosensitizers, or chromophores , that absorb in a much longer wavelength region can be employed to excite the photoinitiators through an energy transfer. [ 5 ] Other modifications to these types of systems are free radical assisted cationic polymerization. In this case, a free radical is formed from another species in solution that reacts with the photoinitiator in order to start polymerization. Although there are a diverse group of compounds activated by cationic photoinitiators, the compounds that find most industrial uses contain epoxides , oxetanes, and vinyl ethers. [ 1 ] One of the advantages to using cationic photopolymerization is that once the polymerization has begun it is no longer sensitive to oxygen and does not require an inert atmosphere to perform well. [ 1 ]
The proposed mechanism for cationic photopolymerization begins with the photoexcitation of the initiator. Once excited, both homolytic cleavage and dissociation of a counter anion take place, generating a cationic radical (R), an aryl radical (R') and an unaltered counter anion (X). The abstraction of a Lewis acid by the cationic radical produces a very weakly bound hydrogen and a free radical . The acid is further deprotonated by the anion (X) in solution, generating a Lewis acid with the starting anion (X) as a counter ion. It is thought that the acidic proton generated is what ultimately initiates the polymerization . [ 9 ]
Since their discovery in the 1970s aryl onium salts , more specifically iodonium and sulfonium salts, have received much attention and have found many industrial applications. Other less common onium salts include ammonium and phosphonium salts. [ 1 ]
A typical onium compound used as a photoinitiator contains two or three arene groups for iodonium and sulfonium respectively. Onium salts generally absorb short wavelength light in the UV region spanning from 225–300 nm. [ 5 ] : 293 One characteristic that is crucial to the performance of the onium photoinitiators is that the counter anion is non- nucleophilic . Since the Brønsted acid generated during the initiation step is considered the active initiator for polymerization , there is a termination route where the counter ion of the acid could act as the nucleophile instead of the functional groups on the oligomer. Common counter anions include BF − 4 , PF − 6 , AsF − 6 and SbF − 6 . There is an indirect relationship between the size of the counter ion and percent conversion.
Although less common, transition metal complexes can act as cationic photoinitiators as well. In general, the mechanism is more simplistic than the onium ions previously described. Most photoinitiators of this class consist of a metal salt with a non-nucleophilic counter anion. For example, ferrocinium salts have received much attention for commercial applications. [ 10 ] The absorption band for ferrocinium salt derivatives are in a much longer, and sometimes visible , region. Upon radiation the metal center loses one or more ligands and these are replaced by functional groups that begin the polymerization . One of the drawbacks of this method is a greater sensitivity to oxygen . There are also several organometallic anionic photoinitiators which react through a similar mechanism. For the anionic case, excitation of a metal center is followed by either heterolytic bond cleavage or electron transfer generating the active anionic initiator . [ 5 ]
Generally pyridinium photoinitiators are N-substituted pyridine derivatives, with a positive charge placed on the nitrogen . The counter ion is in most cases a non-nucleophilic anion. Upon radiation, homolytic bond cleavage takes place generating a pyridinium cationic radical and a neutral free radical . In most cases, a hydrogen atom is abstracted from the oligomer by the pyridinium radical. The free radical generated from the hydrogen abstraction is then terminated by the free radical in solution. This results in a strong pyridinium acid that can initiate polymerization . [ 11 ]
Nowadays, most radical photopolymerization pathways are based on addition reactions of carbon double bonds in acrylates or methacrylates, and these pathways are widely employed in photolithography and stereolithography. [ 12 ]
Before the free radical nature of certain polymerizations was determined, certain monomers were observed to polymerize when exposed to light. The first to demonstrate the photoinduced free radical chain reaction of vinyl bromide was Ivan Ostromislensky , a Russian chemist who also studied the polymerization of synthetic rubber . Subsequently, many compounds were found to become dissociated by light and found immediate use as photoinitiators in the polymerization industry. [ 1 ]
In the free radical mechanism of radiation curable systems, light absorbed by a photoinitiator generates free radicals which induce cross-linking reactions of a mixture of functionalized oligomers and monomers to generate the cured film. [ 13 ]
Photocurable materials that form through the free-radical mechanism undergo chain-growth polymerization , which includes three basic steps: initiation , chain propagation , and chain termination . The three steps are depicted in the scheme below, where R• represents the radical that forms upon interaction with radiation during initiation, and M is a monomer. [ 4 ] The active monomer that is formed is then propagated to create growing polymeric chain radicals. In photocurable materials the propagation step involves reactions of the chain radicals with reactive double bonds of the prepolymers or oligomers. The termination reaction usually proceeds through combination , in which two chain radicals are joined, or through disproportionation , which occurs when an atom (typically hydrogen) is transferred from one radical chain to another resulting in two polymeric chains.
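The three steps above admit a classical steady-state kinetic treatment: if radicals are generated at rate R_i and terminate bimolecularly with rate constant k_t, the steady-state radical concentration is [R•] = √(R_i / 2k_t), and the propagation rate is R_p = k_p[M][R•]. The sketch below uses illustrative rate constants in the range typical of acrylate systems; they are assumptions, not measured values from the article.

```python
import math

# Classical steady-state free-radical polymerization kinetics:
#   [R*] = sqrt(R_i / (2 * k_t))   (steady state: initiation = termination)
#   R_p  = k_p * [M] * [R*]        (propagation rate)

def polymerization_rate(R_i, k_p, k_t, M):
    """Steady-state chain propagation rate R_p (mol L^-1 s^-1)."""
    radical_conc = math.sqrt(R_i / (2 * k_t))
    return k_p * M * radical_conc

# Illustrative numbers for an acrylate-like photocurable system.
Rp = polymerization_rate(R_i=1e-8, k_p=1e3, k_t=1e7, M=5.0)
print(f"Rp = {Rp:.2e} mol/(L s)")
```

Note the square-root dependence on R_i: doubling the light intensity (and hence the initiation rate) increases the cure rate only by a factor of √2.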
Most composites that cure through radical chain growth contain a diverse mixture of oligomers and monomers with functionality ranging from 2 to 8 and molecular weights from 500 to 3000. In general, monomers with higher functionality result in a tighter crosslinking density of the finished material. [ 5 ] Typically these oligomers and monomers alone do not absorb sufficient energy from the commercial light sources used, so photoinitiators are included. [ 4 ] [ 13 ]
There are two types of free-radical photoinitiators: a two-component system, in which the radical is generated through abstraction of a hydrogen atom from a donor compound (also called a co-initiator), and a one-component system, in which two radicals are generated by cleavage . Examples of each type of free-radical photoinitiator are shown below. [ 13 ]
Benzophenone , xanthones , and quinones are examples of abstraction type photoinitiators, with common donor compounds being aliphatic amines. The resulting R• species from the donor compound becomes the initiator for the free radical polymerization process, while the radical resulting from the starting photoinitiator (benzophenone in the example shown above) is typically unreactive.
Benzoin ethers, acetophenones , benzoyl oximes, and acylphosphines are some examples of cleavage-type photoinitiators. Cleavage readily occurs for these species, giving two radicals upon absorption of light, and both radicals generated can typically initiate polymerization. Cleavage-type photoinitiators do not require a co-initiator, such as an aliphatic amine. This can be beneficial since amines are also effective chain transfer species. Chain-transfer processes reduce the chain length and ultimately the crosslink density of the resulting film.
The properties of a photocured material, such as flexibility, adhesion, and chemical resistance, are provided by the functionalized oligomers present in the photocurable composite. Oligomers are typically epoxides , urethanes , polyethers , or polyesters , each of which provides specific properties to the resulting material. Each of these oligomers is typically functionalized by an acrylate . An example shown below is an epoxy oligomer that has been functionalized by acrylic acid . Acrylated epoxies are useful as coatings on metallic substrates and result in glossy hard coatings. Acrylated urethane oligomers are typically abrasion-resistant, tough, and flexible, making them ideal coatings for floors, paper, printing plates, and packaging materials. Acrylated polyethers and polyesters result in very hard, solvent-resistant films; however, polyethers are prone to UV degradation and are therefore rarely used in UV-curable material. Often formulations are composed of several types of oligomers to achieve the desirable properties for a material. [ 4 ]
The monomers used in radiation-curable systems help control the speed of cure, crosslink density, final surface properties of the film, and viscosity of the resin. Examples of monomers include styrene , N-vinylpyrrolidone , and acrylates . Styrene is a low-cost monomer and provides a fast cure; N-vinylpyrrolidone results in a material that is highly flexible when cured and has low toxicity; and acrylates are highly reactive, allowing for rapid cure rates, and are highly versatile, with functionality ranging from monofunctional to tetrafunctional. Like oligomers, several types of monomers can be employed to achieve the desired properties of the final material. [ 4 ]
Photopolymerization has wide-ranging applications, from imaging to biomedical uses.
Dentistry is one field in which free radical photopolymers have found wide usage as adhesives, sealant composites, and protective coatings. These dental composites are based on a camphorquinone photoinitiator and a matrix containing methacrylate oligomers with inorganic fillers such as silicon dioxide . Resin cements are utilized in luting cast ceramic , full porcelain , and veneer restorations that are thin or translucent, which permits visible light penetration in order to polymerize the cement. Light-activated cements may be radiolucent and are usually provided in various shades since they are utilized in esthetically demanding situations. [ 14 ]
Conventional halogen bulbs , argon lasers and xenon arc lights are currently used in clinical practice. A new technological approach for curing light-activated oral biomaterials using a light curing unit (LCU) is based on blue light-emitting diodes (LED). The main benefits of LED LCU technology are the long lifetime of LED LCUs (several thousand hours), no need for filters or a cooling fan, and virtually no decrease of light output over the lifetime of the unit, resulting in consistent and high quality curing. Simple depth of cure experiments on dental composites cured with LED technology show promising results. [ 15 ]
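The depth-of-cure experiments mentioned above can be rationalized with Beer–Lambert attenuation: light intensity falls off exponentially with depth in the composite, so the cure depth is roughly the depth at which the delivered dose drops to the critical dose needed to gel the resin. A minimal sketch, with an assumed attenuation coefficient and critical dose chosen purely for illustration:

```python
import math

def cure_depth(surface_dose, critical_dose, attenuation):
    """Depth (mm) at which the delivered dose falls to the critical curing dose.

    Assumes Beer-Lambert attenuation: dose(z) = surface_dose * exp(-attenuation * z),
    so the cure depth is ln(surface_dose / critical_dose) / attenuation.
    Returns 0 if even the surface dose is below the critical dose.
    """
    if surface_dose <= critical_dose:
        return 0.0
    return math.log(surface_dose / critical_dose) / attenuation

# Illustrative values: 1000 mJ/cm^2 delivered at the surface,
# 50 mJ/cm^2 needed to gel the resin, attenuation of 1.5 per mm.
print(f"cure depth ~ {cure_depth(1000.0, 50.0, 1.5):.2f} mm")
```

The logarithmic dependence explains a practical point: doubling the exposure dose adds only a fixed increment (ln 2 divided by the attenuation coefficient) to the cure depth, so thick sections cannot be cured simply by lengthening the exposure.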
Photocurable adhesives are also used in the production of catheters , hearing aids , surgical masks , medical filters, and blood analysis sensors. [ 1 ] Photopolymers have also been explored for uses in drug delivery, tissue engineering and cell encapsulation systems. [ 16 ] Photopolymerization processes for these applications are being developed to be carried out in vivo or ex vivo . In vivo photopolymerization would provide the advantages of production and implantation with minimally invasive surgery. Ex vivo photopolymerization would allow for fabrication of complex matrices and versatility of formulation. Although photopolymers show promise for a wide range of new biomedical applications, the biocompatibility of photopolymeric materials must still be addressed and developed.
Stereolithography , digital imaging , and 3D inkjet printing are just a few 3D printing technologies that make use of photopolymerization pathways. 3D printing usually utilizes CAD-CAM software, which creates a 3D computer model to be translated into a 3D plastic object. The image is cut in slices; each slice is then reconstructed through radiation curing of the liquid polymer , converting the image into a solid object. Photopolymers used in 3D imaging processes require sufficient cross-linking and should ideally be designed to have minimal volume shrinkage upon polymerization in order to avoid distortion of the solid object. Common monomers utilized for 3D imaging include multifunctional acrylates and methacrylates , often combined with a non-polymeric component in order to reduce volume shrinkage. [ 12 ] A competing composite mixture of epoxide resins with cationic photoinitiators is becoming increasingly used since their volume shrinkage upon ring-opening polymerization is significantly below those of acrylates and methacrylates. Free-radical and cationic polymerizations composed of both epoxide and acrylate monomers have also been employed, gaining the high rate of polymerization from the acrylic monomer, and better mechanical properties from the epoxy matrix. [ 1 ]
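The shrinkage trade-off described above can be estimated with a simple volume-weighted mixing rule. The component shrinkage values below are assumed ballpark figures for illustration (acrylates commonly shrink several percent more on cure than ring-opening epoxides, and inert fillers do not shrink), not measured data for any specific resin:

```python
def blend_shrinkage(components):
    """Volume-weighted average cure shrinkage of a resin blend.

    components: list of (volume_fraction, fractional_shrinkage) pairs.
    Non-polymerizing fillers contribute a shrinkage of 0.
    """
    total = sum(frac for frac, _ in components)
    if abs(total - 1.0) > 1e-9:
        raise ValueError("volume fractions must sum to 1")
    return sum(frac * s for frac, s in components)

# Hypothetical hybrid formulation: 40% acrylate (8% shrinkage on cure),
# 40% epoxide (2% shrinkage), 20% inert non-polymeric component (no shrinkage).
hybrid = [(0.40, 0.08), (0.40, 0.02), (0.20, 0.00)]
print(f"estimated blend shrinkage: {blend_shrinkage(hybrid) * 100:.1f}%")
```

Even this crude rule shows why diluting acrylates with epoxides or non-polymeric components reduces distortion: the blend above shrinks roughly half as much as a pure acrylate resin with the same assumed 8% shrinkage.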
Photoresists are coatings, or oligomers , that are deposited on a surface and are designed to change properties upon irradiation with light . These changes either polymerize the liquid oligomers into insoluble cross-linked network polymers or decompose the already solid polymers into liquid products. Polymers that form networks during photopolymerization are referred to as negative resists . Conversely, polymers that decompose during photopolymerization are referred to as positive resists . Both positive and negative resists have found many applications, including the design and production of micro-fabricated chips. The ability to pattern the resist using a focused light source has driven the field of photolithography .
As mentioned, negative resists are photopolymers that become insoluble upon exposure to radiation. They have found a variety of commercial applications, especially in the area of designing and printing small chips for electronics. A characteristic found in most negative-tone resists is the presence of multifunctional branches on the polymers used. Irradiation of the polymers in the presence of an initiator results in the formation of a chemically resistant network polymer . A common functional group used in negative resists is the epoxy functional group. An example of a widely used polymer of this class is SU-8 . SU-8 was one of the first polymers used in this field, and found applications in wire board printing. [ 17 ] In the presence of a cationic photoinitiator, SU-8 forms networks with other polymers in solution; a basic scheme is shown below.
SU-8 is an example of an intramolecular photopolymerization forming a matrix of cross-linked material. Negative resists can also be made using co- polymerization . In the event that two different monomers , or oligomers , are in solution with multiple functionalities , it is possible for the two to polymerize and form a less soluble polymer.
Manufacturers also use light curing systems in OEM assembly applications such as specialty electronics or medical device applications. [ 18 ]
Exposure of a positive resist to radiation changes its chemical structure such that it becomes a liquid or more soluble. These changes in chemical structure are often rooted in the cleavage of specific linkers in the polymer . Once irradiated, the "decomposed" polymer can be washed away using a developer solvent, leaving behind the polymer that was not exposed to light. This type of technology allows the production of very fine stencils for applications such as microelectronics . [ 19 ] In order to have these qualities, positive resists utilize polymers with labile linkers in their backbone that can be cleaved upon irradiation, or use a photo-generated acid to hydrolyze bonds in the polymer. Common functional groups that can be hydrolyzed by a photo-generated acid catalyst include polycarbonates and polyesters . [ 20 ]
Photopolymers can be used to generate printing plates, which are then pressed onto paper, like metal type . [ 21 ] This is often used in modern fine printing to achieve the effect of embossing (or the more subtly three-dimensional effect of letterpress printing ) from designs created on a computer, without needing to engrave designs into metal or cast metal type. It is often used for business cards. [ 22 ] [ 23 ]
Industrial facilities are utilizing light-activated resin as a sealant for leaks and cracks. Some light-activated resins have unique properties that make them ideal as a pipe repair product. These resins cure rapidly on any wet or dry surface. [ 24 ]
Light-activated resins recently gained a foothold with fly tiers as a way to create custom flies in a short period of time, with very little clean up involved. [ 25 ]
Light-activated resins have found a place in floor refinishing applications, offering an instant return to service not available with other coating chemistries, which must cure slowly at ambient temperatures. Because of application constraints, these coatings are exclusively UV-cured with portable equipment containing high-intensity discharge lamps. Such UV coatings are now commercially available for a variety of substrates, such as wood, vinyl composition tile, and concrete, replacing traditional polyurethanes for wood refinishing and low-durability acrylics for VCT .
Washing the polymer plates after they have been exposed to ultra-violet light may result in [ citation needed ] monomers entering the sewer system, [ citation needed ] eventually adding to the plastic content of the oceans. [ citation needed ] Current water purification installations are not able to remove monomer molecules from sewer water. [ citation needed ] Some monomers, such as styrene , are toxic or carcinogenic .