Petroleum geochemistry is a branch of geochemistry (the application of chemical concepts to understand geological systems) which deals specifically with petroleum and its origin, generation, and accumulation, as well as its extraction, refinement, and use. [ 1 ] [ 2 ] Petroleum, also known as crude oil, is a solid , liquid , and/or gaseous mix of hydrocarbons . [ 3 ] These hydrocarbons are from the burial and metamorphosis of organic matter from millions of years ago; [ 4 ] the organic matter is from marine animals , plants , and algae . [ 5 ] Petroleum is extracted from the Earth (above or below its surface, depending on the geology of the formation), refined, and used as an energy source. [ 3 ]
Crude oil is most commonly organised into four types - light , heavy , sweet , and sour . [ 6 ] Petroleum is a non-renewable energy source (also known as a " fossil fuel "), so the efficacy of extraction and refining is important for its continued use; multiple techniques are used to detect and to extract crude oil, based on the source rock it is found in and the type of oil itself. [ 1 ]
Petroleum is differentiated into types based on its American Petroleum Institute (API) gravity and by how much sulphur it contains. [ 7 ]
The API gravity of a crude oil is a measure of its purity - i.e., the amount of impurities it contains, such as sulphur, nitrogen , or oxygen . [ 8 ] Impurities increase the density of the crude. [ 9 ] [ 6 ]
Light crude oils have higher API gravity figures, due to having fewer impurities. [ 9 ] They are more commonly used to produce diesel and gasoline than heavier oils are. [ 6 ] Due to their lower viscosity , they are easier to extract and to transport. [ 9 ]
Heavy crude oils have lower API gravity figures, and a larger percentage of impurities. [ 9 ] They are used in the making of heavier outputs - e.g., asphalt [ 6 ] - and have a higher viscosity, making them more difficult to transport and extract. [ 9 ]
Whether a crude oil is 'sweet' or 'sour' depends on the amount of sulphur it contains. [ 6 ]
'Sweet' crude oil has lower sulphur content [ 7 ] - lower than 0.5%. [ 6 ] It can be refined into kerosene, high-quality diesel, and gasoline. [ 6 ]
'Sour' crude oil has high natural sulphur content (at least 0.5%). [ 7 ] Extra treatment is required in the refining process; [ 6 ] impurities are removed to refine the crude into gasoline. [ 9 ] Due to the greater associated cost, it is more commonly refined into fuel oil and diesel - less valuable outputs than the products of sweet crude oil. [ 9 ]
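A small sketch of how these two classifications combine (the API gravity formula is the standard industry definition; the 31.1 °API light/heavy cutoff is an illustrative, commonly quoted figure, while the 0.5% sulphur threshold is taken from the text above):

```python
def api_gravity(specific_gravity: float) -> float:
    """Standard API gravity formula: API = 141.5 / SG - 131.5 (SG at 60 degF)."""
    return 141.5 / specific_gravity - 131.5

def classify_crude(specific_gravity: float, sulphur_wt_pct: float,
                   light_cutoff_api: float = 31.1) -> str:
    """Classify a crude oil as light/heavy and sweet/sour.

    The 0.5% sulphur threshold follows the article; the 31.1 degAPI
    light/heavy cutoff is an illustrative, commonly quoted value.
    """
    api = api_gravity(specific_gravity)
    weight = "light" if api >= light_cutoff_api else "heavy"
    sweetness = "sweet" if sulphur_wt_pct < 0.5 else "sour"
    return f"{api:.1f} degAPI -> {weight}, {sweetness}"

# Example: a 0.85 SG crude with 0.3 wt% sulphur
print(classify_crude(0.85, 0.3))   # ~35.0 degAPI -> light, sweet
```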
The three main hydrocarbon compounds in petroleum are paraffins , naphthenes , and aromatics .
Paraffinic hydrocarbons are part of the alkane series, [ 10 ] and are the most common hydrocarbon found in crude oil. [ 11 ] Paraffins are often a part of gasoline, making them comparatively more valuable. [ 11 ]
Paraffinic hydrocarbons are also known as alkanes, and are represented by the formula $C_nH_{2n+2}$ , where n is a positive integer. [ 12 ]
Naphthenic hydrocarbons are saturated cyclic hydrocarbons , [ 10 ] and are very important in the refining of liquid crude oil. [ 11 ]
Also known as cyclic alkanes, they are represented by the formula $C_nH_{2n}$ , where n is a positive integer. [ 13 ]
Aromatic hydrocarbons are cyclic, [ 10 ] and are much less abundant than the other two main hydrocarbon compounds. [ 11 ] They are represented by the formula $C_nH_n$ , where n is a positive integer. [ 14 ]
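A tiny sketch generating molecular formulas from the three general formulas quoted above (the example carbon numbers are arbitrary):

```python
def formula(series: str, n: int) -> str:
    """Molecular formula from the general formulas quoted in the article.

    paraffin  -> CnH(2n+2)   e.g. n=8 gives C8H18 (octane)
    naphthene -> CnH(2n)     e.g. n=6 gives C6H12 (cyclohexane)
    aromatic  -> CnHn        e.g. n=6 gives C6H6  (benzene)
    """
    hydrogens = {"paraffin": 2 * n + 2, "naphthene": 2 * n, "aromatic": n}[series]
    return f"C{n}H{hydrogens}"

for series in ("paraffin", "naphthene", "aromatic"):
    print(series, formula(series, 6), formula(series, 8))
```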
Techniques are used for finding the source rock (the solid material in which the petroleum is found), as well as the type and amount of the petroleum within. [ 1 ] They are also used to note migration timing and pathways, which are then used to predict when and where petroleum can be found; [ 1 ] petroleum sources can be predicted if material associated with source rock is found. [ 1 ]
Petroleum, or evidence of its immediate occurrence, can be found on the surface of the Earth. Oil seeps can be found near a fault zone, where the movement of Earth's crust can expose petroleum source rock, and thus the crude oil itself. [ 15 ] They can also be found on the ocean floor, and can be found using satellite imaging. [ 16 ]
While not used as commonly as other techniques today, distillation is used in the process of refining petroleum. It involves the division of the crude oil into hydrocarbon categories, and products are recovered from the heated material. [ 17 ] A distillation tower is used in separation of the oil, with anywhere between 2 and 300 theoretical plates. [ 16 ]
Similar to the process of distillation, gas-liquid chromatography (typically referred to as gas chromatography, or, more simply, GC) utilises a distillation tower to separate the petroleum. However, compared to distillation's 2 to 300 theoretical plates, gas chromatography includes more than 25,000. This provides a greater degree of separation. [ 16 ]
In order to achieve more complete analyses, gas chromatography is used along with mass spectrometry (to make gas chromatography/mass spectrometry, or GCMS), with infrared spectrometry (to make gas chromatography/infrared spectrometry, or GCIR), and with isotope ratio mass spectrometry (to make gas chromatography/isotope ratio mass spectrometry, or GCIRMS). [ 16 ]
While the crude oil from a petroleum source rock is easily separated using gas chromatography and gas chromatography/mass spectrometry, the organic matter found is not soluble in the solvents used in these techniques, and thus cannot be properly analysed. Pyrolysis is used to characterise kerogens (insoluble hydrocarbons) [ 18 ] and asphaltenes (which have limited solubility in common solvents). [ 19 ] There are multiple methods of pyrolysis; fingerprinting methods - which use flash pyrolysis or rapid temperature-programmed pyrolysis - involve rapid transfer of the product to the gas chromatography tower. [ 16 ] Rock-Eval is a commonly used process to determine the content of the source rock. [ 20 ] Hydrous pyrolysis is performed within water and at high pressures; this method can simulate different depths of burial, demonstrating the possible fates of the source rock and the associated petroleum. [ 16 ]
The bulk isotope ratio value of stable isotopes for petroleum depicts the average isotopic composition of the oil's components. Carbon stable isotopes are often used in this method. Whether a sample of petroleum originated in a marine environment or a non-marine environment can be seen using this ratio value, as can the migration distance and age of the oil. [ 16 ] [ 21 ]
Using the previously listed techniques, biomarkers were found in petroleum and source rock extracts. These are fossils from organisms, but are closer in size to molecules than to visible hand samples. They display the same structure as their parent biomolecules and are used in the identification of the organic matter from which the petroleum is derived. Biomarkers are also used in correlating oils and source rocks, finding the oil's maturity, identifying regional differences between multiple samples, and reconstructing the history of the basin in which the source rock was located. [ 16 ]
Before the use of gas chromatography-mass spectrometry and biomarkers, correlation of locations' geology was used to find how different formations relate to each other and to their environment. Oil-oil correlations (comparing petroleum to other oil found locally or in other areas) and oil-source correlations (comparing petroleum and its source) were performed; infrared spectrometry, refractive indices, solvent extractable organic matter, compound class distribution, and elemental analysis are all methods of doing oil-source correlations. | https://en.wikipedia.org/wiki/Petroleum_geochemistry |
Petroleum jelly , petrolatum ( / ˌ p ɛ t r ə ˈ l eɪ t ə m / ), white petrolatum , soft paraffin , or multi-hydrocarbon , CAS number 8009-03-8, is a semi-solid mixture of hydrocarbons (with carbon numbers mainly higher than 25), [ 1 ] originally promoted as a topical ointment for its healing properties. [ 2 ] Vaseline has been the leading brand of petroleum jelly since 1870.
After petroleum jelly became a medicine-chest staple, consumers began to use it for cosmetic purposes and for many ailments including toenail fungus , genital rashes (non- STI ), nosebleeds , diaper rash , and common colds . Its folkloric medicinal value as a " cure-all " has since been limited by a better scientific understanding of appropriate and inappropriate uses. It is recognized by the U.S. Food and Drug Administration (FDA) as an approved over-the-counter (OTC) skin protectant and remains widely used in cosmetic skin care, where it is often loosely referred to as mineral oil .
Marco Polo in 1273 described the export of Baku oil by hundreds of camels and ships for burning and as an ointment for treating mange . [ 3 ]
Native Americans discovered the use of petroleum jelly for protecting and healing the skin. [ 4 ] Sophisticated oil pits had been built as early as 1415–1450 in Western Pennsylvania . [ 5 ] In 1859, workers operating the United States 's first oil rigs noticed a paraffin -like material forming on rigs in the course of investigating malfunctions. Believing the substance hastened healing, the workers used the jelly on cuts and burns. [ 6 ] [ 7 ]
Robert Chesebrough , a young chemist whose previous work of distilling fuel from the oil of sperm whales had been rendered obsolete by petroleum , went to Titusville, Pennsylvania , to see what new materials had commercial potential. Chesebrough took the unrefined green-to-gold-colored "rod wax", as the drillers called it, back to his laboratory to refine it and explore potential uses. He discovered that by distilling the lighter, thinner oil products from the rod wax, he could create a light-colored gel. Chesebrough patented the process of making petroleum jelly by U.S. patent 127,568 in 1872. The process involved vacuum distillation of the crude material followed by filtration of the still residue through bone char . Chesebrough traveled around New York demonstrating the product to encourage sales: he would burn his skin with acid or an open flame, spread the ointment on his injuries, and show his past injuries, healed, he claimed, by his miracle product. He opened his first factory in 1870 in Brooklyn using the name Vaseline . [ 6 ]
Petroleum jelly is a mixture of hydrocarbons, with a melting point that depends on the exact proportions. The melting point is typically between 40 and 70 °C (105 and 160 °F). [ 8 ] [ 9 ] It is flammable only when heated to liquid; then the fumes will light, not the liquid itself, so a wick material is needed to ignite petroleum jelly. It is colorless (or of a pale yellow color when not highly distilled), translucent , and devoid of taste and smell when pure. It does not oxidize on exposure to the air and is not readily acted on by chemical reagents. It is insoluble in water. It is soluble in dichloromethane , chloroform , benzene , diethyl ether , carbon disulfide and turpentine . [ 1 ] [ 10 ] Petroleum jelly is slightly soluble in alcohol. [ 11 ] It acts as a plasticizer on polypropylene (PP), [ 12 ] but is compatible with a wide range of materials and chemicals. [ 13 ] It is a semi-solid , in that it holds its shape indefinitely like a solid, but it can be forced to take the shape of its container without breaking apart, like a liquid, though it does not flow on its own. At room temperature, it has 20.9% solid fat content. Its microstructure is made up of partially crystalline stacks of lamellar sheets which immobilize the liquid portion. [ 14 ] In general, only 7–13% of it is made up of high molecular weight paraffins, 30–45% of smaller paraffins, and 48–60% of small paraffins. [ 15 ]
Depending on the specific application of petroleum jelly, it may be USP , B.P. , or Ph. Eur. grade. This pertains to the processing and handling of the petroleum jelly so it is suitable for medicinal and personal-care applications.
Petroleum jelly has lubricating and coating properties, including use on dry lips and dry skin. Below are some examples of the uses of petroleum jelly.
Vaseline brand First Aid Petroleum Jelly, or carbolated petroleum jelly containing phenol to give the jelly additional antibacterial effect, has been discontinued. [ 16 ]
During World War II , a variety of petroleum jelly called red veterinary petrolatum , or Red Vet Pet for short, was often included in life raft survival kits. Acting as a sunscreen , it provides protection against ultraviolet rays. [ 17 ]
The American Academy of Dermatology recommends keeping skin injuries moist with petroleum jelly to reduce scarring. [ 18 ] A verified medicinal use is to protect and prevent moisture loss of the skin of a patient in the initial post-operative period following laser skin resurfacing. [ 19 ] [ 20 ]
Petroleum jelly is used extensively by otorhinolaryngologists—ear, nose, and throat doctors—for nasal moisture and epistaxis treatment, and to combat nasal crusting. Large studies have found petroleum jelly applied to the nose for short durations to have no significant side effects. [ 21 ] [ 22 ] [ 23 ]
Historically, it was also consumed for internal use and even promoted as "Vaseline confection". [ 24 ] [ 25 ]
Most petroleum jelly today is used as an ingredient in skin lotions and cosmetics, providing various types of skin care and protection by minimizing friction or reducing moisture loss, or by functioning as a grooming aid (e.g., pomade ). It is also used for treating dry scalp and dandruff. [ 26 ] Although it was long thought to act only as an occlusive, recent studies show that it is actually able to penetrate into the stratum corneum and helps in the better absorption of other cosmetic products. Applying a significant amount of petroleum jelly onto one's face before bed is known as "slugging". [ 27 ]
By reducing the loss of moisture via transepidermal water loss , petroleum jelly can prevent chapped hands and lips , and soften nail cuticles .
This property is exploited to provide heat insulation: petroleum jelly can be used to keep swimmers warm in water when training, or during channel crossings or long ocean swims. It can prevent chilling of the face due to evaporation of skin moisture during cold weather outdoor sports. [ 28 ]
In the first part of the twentieth century, petroleum jelly, either pure or as an ingredient, was also popular as a hair pomade . When used in a 50/50 mixture with pure beeswax , it makes an effective moustache wax . [ 29 ] [ 30 ]
Petroleum jelly can be used to reduce the friction between skin and clothing during various sport activities, for example to prevent chafing of the seat region of cyclists, or the nipples of long distance runners wearing loose T-shirts, and is commonly used in the groin area of wrestlers and footballers .
Petroleum jelly is commonly used as a personal lubricant , because it does not dry out like water-based lubricants, and has a distinctive "feel", different from that of K-Y and related methylcellulose products. However, it is not recommended for use with latex condoms during sexual activity, as it increases the chance of rupture. [ 31 ] In addition, petroleum jelly is difficult for the body to break down naturally, and may cause vaginal health problems when used for intercourse. [ 30 ]
Petroleum jelly can be used to coat corrosion-prone items such as metallic trinkets, non-stainless steel blades, and gun barrels prior to storage as it serves as an excellent and inexpensive water repellent. It is used as an environmentally friendly underwater antifouling coating for motor boats and sailing yachts. It was recommended in the Porsche owner's manual as a preservative for light alloy anodized Fuchs wheels to protect them against corrosion from road salts and brake dust. [ 32 ]
It can be used to finish and protect wood, much like a mineral oil finish. It is used to condition and protect smooth leather products like bicycle saddles, boots, and motorcycle clothing, and to put a shine on patent leather shoes [ 33 ] [ 30 ] (when applied in a thin coat and then gently buffed off).
Petroleum jelly can be used to lubricate zippers and slide rules . It was also recommended by Porsche in maintenance training documentation for lubrication (after cleaning) of "Weatherstrips on Doors, Hood, Tailgate, Sun Roof". [ 34 ] It is used in bullet lubricant compounds. [ 35 ]
Petroleum jelly is a useful material when incorporated into candle wax formulas. It softens the overall blend, allows the candle to incorporate additional fragrance oil, and facilitates adhesion to the sidewall of the glass. Petroleum jelly is used to moisten nondrying modelling clay such as plasticine , as part of a mix of hydrocarbons including those with greater ( paraffin wax ) and lesser ( mineral oil ) molecular weights. It is used as a tack reducer additive to printing inks to reduce paper lint "picking" from uncalendered paper stocks. It can be used as a release agent for plaster molds and castings. It is used in the leather industry as a waterproofing cream. [ 28 ] [ 30 ]
Petroleum jelly can be mixed with a high proportion of strong inorganic chlorates due to it acting as a plasticizer and a fuel source. An example of this is Cheddite C which consists of a ratio of 9:1, KClO 3 to petroleum jelly. This mixture is unable to detonate without the use of a blasting cap . It is also used as a stabiliser in the manufacture of the propellant Cordite . [ 30 ]
Petroleum jelly can be used to fill copper or fibre-optic cables that use plastic insulation, to prevent the ingress of water; see icky-pick .
Petroleum jelly can be used to coat the inner walls of terrariums to prevent animals from crawling out to escape.
A stripe of petroleum jelly can be used to prevent the spread of a liquid (retain or confine a liquid to a specific area). For example, it can be applied close to the hairline when using a home hair dye kit to prevent the hair dye from irritating or staining the skin. It is also used to prevent diaper rash . [ 30 ]
Petroleum jelly is sometimes used to protect the terminals on batteries. [ 36 ] However, automobile batteries require a silicone-based battery grease because it is less likely to melt and thus offers better protection. [ 37 ] [ 38 ]
Petroleum jelly is used to gently clean a variety of surfaces, ranging from makeup removal from faces to tar stain removal from leather.
Petroleum jelly is used to moisturize the paws of dogs. [ 39 ] It is a common ingredient in hairball remedies for domestic cats. [ 40 ] [ 41 ]
Some goalkeepers in association football put petroleum jelly on their gloves to make them stickier. [ 42 ]
Petroleum jelly contains mineral oil aromatic hydrocarbons (MOAH) . Many MOAH, mainly polycyclic aromatic hydrocarbons (PAH), are considered carcinogenic. The content of both MOAH and PAH in petroleum jelly products varies. The EU limits PAH content in cosmetics to 0.005%. The risks of PAH exposure through cosmetics have not been comprehensively studied, but food products with low levels (<3%) are not considered carcinogenic (by the EU). [ 43 ]
A 2012 scientific opinion by the European Food Safety Authority stated that mineral oil aromatic hydrocarbons (MOAH) and polyaromatics were potentially carcinogenic and may present a health risk. [ 44 ]
In 2015, German consumer watchdog Stiftung Warentest analyzed cosmetics containing mineral oils, finding significant concentrations of MOAH and polyaromatics in products containing mineral oils. [ 45 ] Vaseline products contained the most MOAH of all tested cosmetics (up to 9%). [ 45 ] Based on the 2015 results, Stiftung Warentest warned consumers not to use Vaseline or any product that is based on mineral oils for lip care. [ 45 ]
A study published in 2017 found MOAH levels of up to 1% in petroleum jelly and likewise less than 1% in petroleum jelly-based beauty products. [ 46 ] | https://en.wikipedia.org/wiki/Petroleum_jelly |
Petroleum microbiology is a branch of microbiology that deals with the study of microorganisms that can metabolize or alter crude or refined petroleum products . These microorganisms, also called hydrocarbonoclastic microorganisms, can degrade hydrocarbons and include a wide distribution of bacteria, methanogenic archaea , and some fungi . Not all hydrocarbonoclastic microbes depend on hydrocarbons to survive, but instead may use petroleum products as alternative carbon and energy sources. Interest in this field is growing due to the increasing use of bioremediation of oil spills . [ 1 ] [ 2 ] [ 3 ]
Bioremediation of oil contaminated soils, marine waters and oily sludges in situ is a feasible process as hydrocarbon degrading microorganisms are ubiquitous and are able to degrade most compounds in petroleum oil. In the simplest case, indigenous microbial communities can degrade the petroleum where the spill occurs. In more complicated cases, various methods of adding nutrients, air, or exogenous microorganisms to the contaminated site can be applied. [ 4 ] For example, bioreactors involve the application of both natural and additional microorganisms in controlled growth conditions that yields high biodegradation rates and can be used with a wide range of media. [ 4 ]
Crude oils are composed of an array of chemical compounds, minor constituents, and trace metals. Making up 50-98% of these petroleum products are hydrocarbons with saturated, unsaturated, or aromatic structures, which influence their biodegradability by hydrocarbonoclasts. [ 5 ] The rate of uptake and biodegradation by these hydrocarbon-oxidizing microbes not only depends on the chemical structure of the substrates, but is also limited by biotic and abiotic factors such as temperature, salinity, and nutrient availability in the environment. [ 6 ] [ 7 ]
A model microorganism studied for its role in bioremediation of oil-spill sites and hydrocarbon catabolism is the alphaproteobacterium Alcanivorax , which degrades aliphatic alkanes through various metabolic activities. [ 6 ] Alcanivorax borkumensis utilizes linear hydrocarbon chains in petroleum as its primary energy source under aerobic conditions. When further supplied with sufficient limiting nutrients such as nitrogen and phosphorus, it grows and produces surfactant glucolipids to help reduce surface water tension and enhance hydrocarbon uptake. [ 5 ] For this reason, nitrates and phosphates are often commercially added to oil-spill sites to engage quiescent populations of A. borkumensis , allowing them to quickly outcompete other microbial populations and become the dominant species in the oil-infested environment. [ 8 ] [ 9 ]
The addition of rate-limiting nutrients promotes the microbe's biodegrading pathways, including upregulation of genes encoding multiple alkane hydroxylases that oxidize various lengths of linear alkanes. [ 10 ] These enzymes essentially remove the problematic hydrocarbon constituents of petroleum oil while A. borkumensis simultaneously increases synthesis of anionic glucoproteins, which are used to emulsify hydrocarbons in the environment and increase their bioavailability. [ 10 ] The presence of crude oil along with appropriate levels of nitrogen and phosphorus catalyzes the removal of petroleum either by mechanisms that enhance the efficiency of substrate uptake or by direct biodegradation of aliphatic chains.
Two well-known oil spills exemplify large scale marine bioremediation applications:
In 1989, the Exxon Valdez ran aground, spilling 41.6 million liters of crude oil, and launching one of the first major bioremediation efforts for an oil spill. Cleanup of Alaskan shorelines relied in part on fertilizer application to augment bacterial growth. [ 11 ]
In 2010, the BP Deepwater Horizon oil spill released 779 million liters of oil into the Gulf of Mexico. This was the largest oil spill of all time and indigenous petroleum microorganisms played a major role in petroleum degradation and cleanup. [ 12 ]
These are microbial-synthesized surface-active substances that allow for more efficient microbial biodegradation of hydrocarbons in bioremediation processes. There are two ways by which biosurfactants are involved in bioremediation. (1) Increase the surface area of hydrophobic water-insoluble substrates. Growth of microbes on hydrocarbons can be limited by available surface area of the water-oil interface. Emulsifiers produced by microbes can break up oil into smaller droplets, effectively increasing the available surface area. (2) Increase the bioavailability of hydrophobic water-insoluble substrates. Biosurfactants can enhance the availability of bound substrates by desorbing them from surfaces (e.g. soil) or by increasing their apparent solubility. Some biosurfactants have low critical micelle concentrations (CMCs), a property which increases the apparent solubility of hydrocarbons by sequestering hydrophobic molecules into the centres of micelles . [ 13 ]
Microbial enhanced oil recovery (MEOR) is a technology in which microbial environments are manipulated to enhance oil recovery. Nutrients are injected in situ into porous media and indigenous or added microbes promote growth and/or generate products that mobilize oil into producing wells. Alternatively, oil-mobilizing products can be produced by fermentation and injected into the reservoir. Various products and microorganisms are useful in these applications and each will yield different results. The two general strategies for enhancing oil recovery are altering the surface properties of the interface and using bioclogging to change the flow behavior. [ 14 ] Biomass , biosurfactants , biopolymers , solvents , acids, and gases are some of the products that are added to oil reservoirs to enhance recovery. [ 4 ] Other resources for this application: [ 15 ] [ 16 ]
Microbial biosensors identify and quantify target compounds of interest through interactions with the microbes. For example, bacteria may be used to identify a pollutant by monitoring their response to the specific chemical. The biosensor system may simply use bacterial growth as a pollutant indicator, or rely on genetic assays wherein a reporter gene is induced by the chemical.
Many analytical techniques require expensive treatment of soil samples and/or expensive equipment to detect the presence of pollutants. Bacterial biosensor systems offer the potential for cheap, robust detection systems that are selective and highly sensitive. One developed system uses Pseudomonas fluorescens HK44 to quantitatively assay for naphthalene using bioluminescence . [ 17 ]
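An entirely hypothetical sketch of how such a quantitative bioluminescence assay can be calibrated and inverted (the strain behaviour, concentrations, and light readings below are invented for illustration and are not measurements from the cited study):

```python
import numpy as np

# Hypothetical calibration data for a bioluminescent reporter strain
# (e.g. something like P. fluorescens HK44 responding to naphthalene).
known_conc_mg_l = np.array([0.1, 0.5, 1.0, 5.0, 10.0])          # naphthalene standards
luminescence = np.array([120.0, 480.0, 900.0, 4200.0, 8100.0])  # relative light units

# Fit a simple linear calibration in log-log space: log(RLU) = a*log(C) + b
a, b = np.polyfit(np.log10(known_conc_mg_l), np.log10(luminescence), 1)

def estimate_concentration(rlu: float) -> float:
    """Invert the calibration to estimate pollutant concentration from a reading."""
    return 10 ** ((np.log10(rlu) - b) / a)

print(round(estimate_concentration(2000.0), 2))  # sample reading -> approx. mg/L
```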
Often in the process of degrading a pollutant, a microbe can create intermediates or byproducts that are also harmful, sometimes even more harmful than the original substrate . For example, some microbes produce hydrogen sulfide as a byproduct in the degradation of certain petroleum hydrocarbons and if those gases are not detoxified before escaping the system, they can be released into the atmosphere. [ 18 ]
The pathways of degradation of different petroleum products vary depending on the substrate and the microorganism (i.e. aerobic/anaerobic). Specific degradation pathways of many hydrocarbon compounds can be found on the University of Minnesota Biocatalysis/Biodegradation Database . | https://en.wikipedia.org/wiki/Petroleum_microbiology |
Petroleum naphtha is an intermediate hydrocarbon liquid stream derived from the refining of crude oil [ 1 ] [ 2 ] [ 3 ] with CAS number 64742-48-9. [ 4 ] It is most usually desulfurized and then catalytically reformed , which rearranges or restructures the hydrocarbon molecules in the naphtha as well as breaking some of the molecules into smaller molecules to produce a high- octane component of gasoline (or petrol ).
There are hundreds of different petroleum crude oil sources worldwide and each crude oil has its own unique composition or assay . There are also hundreds of petroleum refineries worldwide and each of them is designed to process either a specific crude oil or specific types of crude oils. Naphtha is a general term as each refinery produces its own naphthas with their own unique initial and final boiling points and other physical and compositional characteristics.
Naphthas may also be produced from other material such as coal tar , shale deposits, tar sands , and the destructive distillation of wood. [ 5 ] [ 6 ]
The first unit operation in a petroleum refinery (after the crude oil has been desalted) is the crude oil distillation unit . The overhead liquid distillate from that unit is called virgin or straight-run naphtha and that distillate is the largest source of naphtha in most petroleum refineries. The naphtha is a mixture of many different hydrocarbon compounds. It has an initial boiling point (IBP) of about 35 °C and a final boiling point (FBP) of about 200 °C, and it contains paraffins , naphthenes (cyclic paraffins) and aromatic hydrocarbons ranging from those containing 4 carbon atoms to those containing about 10 or 11 carbon atoms.
The virgin naphtha is often further distilled into two streams, virgin light naphtha and virgin heavy naphtha. [ 7 ]
The virgin heavy naphtha is usually processed in a catalytic reformer, because the light naphtha has molecules with six or fewer carbon atoms—which, when reformed, tend to crack into butane and lower molecular weight hydrocarbons that are not useful as high-octane gasoline blending components. Also, the molecules with six carbon atoms tend to form aromatics, which is undesirable because the environmental regulations of a number of countries limit the amount of aromatics (most particularly benzene ) in gasoline. [ 8 ] [ 9 ] [ 10 ]
The table below lists some typical virgin heavy naphthas, available for catalytic reforming, derived from various crude oils. It can be seen that they differ significantly in their content of paraffins, naphthenes and aromatics:
Some refinery naphthas also contain some olefinic hydrocarbons, such as naphthas derived from the fluid catalytic cracking , visbreakers and coking processes used in many refineries. Those olefin-containing naphthas are often referred to as cracked naphthas.
In some petroleum refineries, the cracked naphthas are desulfurized and catalytically reformed (as are the virgin naphthas) to produce additional high-octane gasoline components.
Some petroleum refineries also produce small amounts of specialty naphthas for use as solvents, cleaning fluids and dry-cleaning agents, paint and varnish diluents, asphalt diluents, rubber industry solvents, recycling products, and cigarette-lighter , portable-camping-stove and lantern fuels. Those specialty naphthas are subjected to various purification processes which adjust their chemical characteristics to suit specific needs.
Specialty naphtha comes in many varieties, each referred to by a separate name such as petroleum ether , petroleum spirits , mineral spirits , paraffin , benzine , hexane , ligroin , white oil or white gas , painters naphtha , refined solvent naphtha and Varnish makers' & painters' naphtha (VM&P) . The best way to determine the boiling point and other compositional characteristics of any specialty naphtha is to read the Safety Data Sheet (SDS) for the specific naphtha of interest. Safety Data Sheets can be found on chemical suppliers' websites or by contacting the supplier directly.
On a much larger scale, petroleum naphtha is also used in the petrochemicals industry as feedstock to steam reformers and steam crackers for the production of hydrogen (which may be converted into ammonia for fertilizers), ethylene , and other olefins. Natural gas is also used as feedstock to steam reformers and steam crackers.
People can be exposed to petroleum naphtha in the workplace by breathing it, swallowing it, skin contact, and eye contact. The Occupational Safety and Health Administration (OSHA) set the legal limit ( permissible exposure limit ) for petroleum naphtha exposure in the workplace as 500 ppm (2000 mg/m 3 ) over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 350 mg/m 3 over an 8-hour workday and 1800 mg/m 3 over 15 minutes. At levels of 1100 ppm, 10% of the lower explosive limit, petroleum naphtha is immediately dangerous to life and health . [ 15 ] | https://en.wikipedia.org/wiki/Petroleum_naphtha |
Petroleum products are materials derived from crude oil ( petroleum ) as it is processed in oil refineries . Unlike petrochemicals , which are a collection of well-defined usually pure organic compounds, petroleum products are complex mixtures. [ 1 ] Most petroleum is converted into petroleum products, which include several classes of fuels. [ 2 ]
According to the composition of the crude oil and depending on the demands of the market, refineries can produce different shares of petroleum products. The largest share of oil products is used as "energy carriers", i.e. various grades of fuel oil and gasoline . These fuels include or can be blended to give gasoline, jet fuel , diesel fuel , heating oil , and heavier fuel oils. Heavier (less volatile ) fractions can also be used to produce asphalt , tar , paraffin wax , lubricating and other heavy oils. Refineries also produce other chemicals , some of which are used in chemical processes to produce plastics and other useful materials. Since petroleum often contains a few percent sulfur -containing molecules, elemental sulfur is also often produced as a petroleum product. Carbon , in the form of petroleum coke , and hydrogen may also be produced as petroleum products. The hydrogen produced is often used as an intermediate product for other oil refinery processes such as hydrocracking and hydrodesulfurization .
Oil refineries will blend various feedstocks, mix appropriate additives, provide short-term storage, and prepare for bulk loading to trucks, barges, product ships, and railcars. [ 4 ]
Over 6,000 items are made from petroleum waste by-products, including: fertilizer , flooring (floor covering), perfume , insecticide , petroleum jelly , soap , vitamins and some essential amino acids . [ 5 ] | https://en.wikipedia.org/wiki/Petroleum_product |
Petroleum production engineering is a subset of petroleum engineering .
Petroleum production engineers design and select subsurface equipment to produce oil and gas well fluids . [ 1 ] They often are degreed as petroleum engineers , although they may come from other technical disciplines (e.g., mechanical engineering , chemical engineering , physics ) and subsequently be trained by an oil and gas company.
Petroleum production engineers' responsibilities include:
Note: Surface equipment is designed by chemical engineers and mechanical engineers according to data provided by the production engineers.
Outflow should be defined as flow from the casing perforations to the surface facilities. | https://en.wikipedia.org/wiki/Petroleum_production_engineering |
A hydrocarbon resin is a C5/C9 aromatic hydrocarbon that is used in industrial applications. It has a tackifying effect and is suitable for use in paint , printing ink , adhesives , rubber and other areas where tackiness is required. [ 1 ]
Generally, the petroleum resins are not used independently, but have to be used together with other types of resins as promoters, adjusting agents and modifiers in hot-melt adhesive , pressure-sensitive adhesive, hot melt road marking paint , [ 2 ] rubber tires etc.
There are various types of hydrocarbon resins, including C5 resins, C9 resins, C5/C9 copolymer resins and hydrogenated resins. C5 resins are produced from aliphatic cracker products such as piperylene and isoprene ; the current major catalyst is AlCl 3 . C9 resins are produced from aromatic cracker products such as vinyltoluenes , indene , alpha-methylstyrene , styrene , methylindenes, etc.; the current major catalyst is BF 3 . C5/C9 copolymer resins are produced from both aliphatic and aromatic cracker products. Hydrocarbon resins may also undergo additional processing such as hydrogenation (treatment with hydrogen). In this way, the double bonds are saturated and light-colored, even water-white, resins are produced. The hydrogenated types include hydrogenated C5 resins, hydrogenated C9 resins, hydrogenated C5/C9 resins, and hydrogenated DCPD resins. [ 3 ]
| https://en.wikipedia.org/wiki/Petroleum_resin |
The petrolingual ligament lies at the posteroinferior aspect of the lateral wall of the cavernous sinus and marks the point at which the internal carotid artery enters the cavernous sinus .
Anatomically, the petrolingual ligament demarcates two of the segments of the internal carotid artery:
For surgeons and radiologists , it is important to be oriented to the location of this ligament in cases of possible dissection of the internal carotid artery, as it helps determine whether the dissection has occurred inside or outside the cavernous sinus. [ 1 ]
| https://en.wikipedia.org/wiki/Petrolingual_ligament |
Petrophysics (from the Greek πέτρα, petra , "rock" and φύσις, physis , "nature") is the study of physical and chemical rock properties and their interactions with fluids . [ 1 ]
A major application of petrophysics is in studying reservoirs for the hydrocarbon industry . Petrophysicists work together with reservoir engineers and geoscientists to understand the porous media properties of the reservoir, particularly how the pores are interconnected in the subsurface, which controls the accumulation and migration of hydrocarbons . [ 1 ] Some fundamental petrophysical properties determined are lithology , porosity , water saturation , permeability , and capillary pressure . [ 1 ]
The petrophysicist's workflow measures and evaluates these petrophysical properties through well-log interpretation (i.e. in-situ reservoir conditions) and core analysis in the laboratory. During well logging operations, different well-log tools are used to measure the petrophysical and mineralogical properties through radioactivity and seismic technologies in the borehole. [ 2 ] In addition, core plugs are taken from the well as sidewall core or whole core samples. These studies are combined with geological, geophysical, and reservoir engineering studies to model the reservoir and determine its economic feasibility.
While most petrophysicists work in the hydrocarbon industry, some also work in the mining , water resources , geothermal energy , and carbon capture and storage industries. Petrophysics is part of the geosciences , and its studies are used by petroleum engineering , geology , geochemistry , exploration geophysics and others. [ 3 ]
The following are the fundamental petrophysical properties used to characterize a reservoir:
The rock's mechanical or geomechanical properties are also used within petrophysics to determine the reservoir strength , elastic properties , hardness , ultrasonic behaviour , index characteristics and in situ stresses . [ 6 ]
Petrophysicists use acoustic and density measurements of rocks to compute their mechanical properties and strength . They measure the compressional (P) wave velocity of sound through the rock and the shear (S) wave velocity and use these with the density of the rock to compute the rock's compressive strength , which is the compressive stress that causes a rock to fail, and the rock's flexibility , which is the relationship between stress and deformation for a rock. [ 12 ] Converted-wave analysis also determines the subsurface lithology and porosity. [ 13 ]
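As an illustrative sketch of this computation, using the standard isotropic relations between velocities, density and the dynamic elastic moduli (the input values below are invented, not taken from the article):

```python
def dynamic_elastic_moduli(vp_m_s: float, vs_m_s: float, density_kg_m3: float) -> dict:
    """Standard isotropic relations between acoustic velocities and dynamic moduli.

    G  = rho * Vs^2                              (shear modulus)
    K  = rho * (Vp^2 - 4/3 * Vs^2)               (bulk modulus)
    nu = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2))     (Poisson's ratio)
    E  = 2 G (1 + nu)                            (Young's modulus)
    """
    g = density_kg_m3 * vs_m_s**2
    k = density_kg_m3 * (vp_m_s**2 - (4.0 / 3.0) * vs_m_s**2)
    nu = (vp_m_s**2 - 2 * vs_m_s**2) / (2 * (vp_m_s**2 - vs_m_s**2))
    e = 2 * g * (1 + nu)
    return {"G_GPa": g / 1e9, "K_GPa": k / 1e9, "E_GPa": e / 1e9, "poisson": nu}

# Illustrative sandstone-like values (not from the article):
print(dynamic_elastic_moduli(vp_m_s=3500, vs_m_s=2000, density_kg_m3=2400))
```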
Geomechanics measurements are useful for drillability assessment, wellbore and open-hole stability design, log strength and stress correlations, and formation and strength characterization. [ 6 ] These measurements are also used to design dams, roads, foundations for buildings, and many other large construction projects. [ 14 ] They can also help interpret seismic signals from the Earth, either manufactured seismic signals or those from earthquakes. [ 15 ]
As core samples are the only direct evidence of the reservoir's rock structure, core analysis provides the "ground truth" data, measured in the laboratory, used to understand the key petrophysical features of the in-situ reservoir. In the petroleum industry, rock samples are retrieved from the subsurface and measured by the core laboratories of oil or service companies. This process is time-consuming and expensive; thus, it can only be applied to some of the wells drilled in a field. Also, proper design, planning and supervision decrease data redundancy and uncertainty. Client and laboratory teams must work in alignment to optimise the core analysis process. [ 6 ]
Well logging is a relatively inexpensive method to obtain petrophysical properties downhole. Measurement tools are conveyed downhole using either wireline or logging-while-drilling (LWD) methods. [ 2 ]
An example of wireline logs is shown in Figure 1. The first “track” shows the natural gamma radiation level of the rock. The gamma radiation level “log” shows increasing radiation to the right and decreasing radiation to the left. The rocks emitting less radiation have more yellow shading. The detector is very sensitive, and the amount of radiation is very low. In clastic rock formations, rocks with smaller amounts of radiation are more likely to be coarser-grained and have more pore space, while rocks with higher amounts of radiation are more likely to have finer grains and less pore space. [ 16 ]
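A common first-pass way to turn such a gamma-ray reading into a shale-volume estimate is the linear gamma-ray index sketched below; the method and the clean-sand/shale end points are conventional assumptions rather than anything stated in the article:

```python
def gamma_ray_index(gr_log: float, gr_clean: float = 20.0, gr_shale: float = 120.0) -> float:
    """Linear gamma-ray index, often used as a first-pass shale-volume estimate.

    IGR = (GR - GR_clean) / (GR_shale - GR_clean), clamped to [0, 1].
    The clean-sand and shale end points (20 and 120 API units here) are assumed
    values, normally picked per field from the log histogram, not fixed constants.
    """
    igr = (gr_log - gr_clean) / (gr_shale - gr_clean)
    return min(max(igr, 0.0), 1.0)

print(gamma_ray_index(45.0))   # low radiation -> mostly coarse-grained sand (~0.25)
print(gamma_ray_index(110.0))  # high radiation -> mostly shale (~0.9)
```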
The second track in the plot records the depth below the reference point, usually the kelly bushing or rotary table, in feet, so these rock formations are 11,900 feet below the Earth's surface.
In the third track, the electrical resistivity of the rock is presented. The water in this rock is salty. The electrolytes dissolved in the water within the pore space conduct electricity, resulting in a lower resistivity of the rock. This also indicates an increased water saturation and decreased hydrocarbon saturation. [ 17 ]
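The classical relation used to turn such a resistivity reading into a water saturation is Archie's equation; the article does not name the method, so the following is only an illustrative sketch with typical default constants and invented input values:

```python
def archie_water_saturation(rt_ohmm: float, porosity: float, rw_ohmm: float,
                            a: float = 1.0, m: float = 2.0, n: float = 2.0) -> float:
    """Archie's equation (clean-sand assumption): Sw = (a*Rw / (phi^m * Rt))^(1/n).

    Rt is the measured formation resistivity, Rw the brine resistivity.
    a, m, n are the usual textbook defaults; real evaluations calibrate them.
    """
    sw = (a * rw_ohmm / (porosity**m * rt_ohmm)) ** (1.0 / n)
    return min(sw, 1.0)

# Illustrative numbers (not read from Figure 1): salty brine, 20% porosity.
print(round(archie_water_saturation(rt_ohmm=10.0, porosity=0.20, rw_ohmm=0.05), 2))  # ~0.35
```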
The fourth track shows the computed water saturation, both as “total” water (including the water bound to the rock) in magenta and the “effective water” or water that is free to flow in black. Both quantities are given as a fraction of the total pore space.
The fifth track shows the fraction of the total rock that is pore space filled with fluids (i.e. porosity). The display of the pore space is divided into green for oil and blue for movable water. The black line shows the fraction of the pore space which contains either water or oil that can move or be "produced" (i.e. effective porosity). The magenta line indicates the total porosity, meaning that it includes the water that is permanently bound to the rock.
The last track represents the rock lithology, divided into sandstone and shale portions. The yellow pattern represents the fraction of the rock (excluding fluids) composed of coarser-grained sandstone. The gray pattern represents the fraction of rock composed of finer-grained material, i.e. "shale". The sandstone is the part of the rock that contains the producible hydrocarbons and water.
Reservoir models are built by reservoir engineers in specialised software using the petrophysical dataset prepared by the petrophysicist, to estimate the amount of hydrocarbon present in the reservoir, the rate at which that hydrocarbon can be produced to the Earth's surface through wellbores , and the fluid flow in rocks. [ 3 ] Similar models in the water resource industry compute how much water can be produced to the surface over long periods without depleting the aquifer . [ 18 ]
Shaly sand is a term referring to a mixture of shale or clay and sandstone. A significant portion of clay minerals and silt-sized particles results in a fine-grained sandstone with higher density and greater rock complexity. [ 19 ]
The shale/clay volume is an essential petrophysical parameter to estimate, since it contributes to the rock bulk volume and must be correctly defined for the porosity and water saturation evaluation to be correct. As shown in Figure 2, clastic rock formations are modelled with four components, a division typical for shaly or clayey sands: the rock matrix (grains), the clay portion that surrounds the grains, water, and hydrocarbons. The two fluids are stored only in the pore space of the rock matrix.
Due to the complex microstructure, for a water-wet rock, the following terms comprised a clastic reservoir formation:
V ma = volume of matrix grains.
V dcl = volume of dry clay.
V cbw = volume of clay bound water.
V cl = volume of wet clay ( V dcl + V cbw ).
V cap = volume of capillary bound water.
V fw = volume of free water.
V hyd = volume of hydrocarbon.
Φ T = Total porosity (PHIT), which includes the connected and not connected pore throats.
Φ e = Effective porosity which includes only the inter-connected pore throats.
V b = bulk volume of the rock.
Key equations:
V ma + V cl + V fw + V hyd = 1
Rock matrix volume + wet clay volume + water free volume + hydrocarbon volume = bulk rock volume [ 20 ]
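A minimal way to tie these definitions together, assuming the usual shaly-sand volume partition (note that the simplified key equation above groups the capillary-bound water with the other water terms rather than listing V cap separately):

\[
\Phi_T = V_{cbw} + V_{cap} + V_{fw} + V_{hyd}, \qquad
\Phi_e = \Phi_T - V_{cbw} = V_{cap} + V_{fw} + V_{hyd},
\]
\[
V_b = V_{ma} + V_{dcl} + \Phi_T = V_{ma} + V_{cl} + \Phi_e = 1 ,
\]

so that, for example, the water saturation of the effective pore space follows as $S_{we} = (V_{cap} + V_{fw})/\Phi_e$.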
The Society of Petrophysicists and Well Log Analysts (SPWLA) is an organisation whose mission is to increase the awareness of petrophysics, formation evaluation , and well logging best practices in the oil and gas industry and the scientific community at large. [ 21 ] | https://en.wikipedia.org/wiki/Petrophysics |
Petrosix is the world's largest surface oil shale pyrolysis retort, with an 11 metres (36 ft) diameter vertical shaft kiln , operational since 1992. It is located in São Mateus do Sul , Brazil , and it is owned and operated by the Brazilian energy company Petrobras . Petrosix also refers to the Petrosix process, an externally generated hot gas technology of shale oil extraction . The technology is tailored to the Irati oil shale formation, a Permian formation of the Paraná Basin .
Petrobras started oil shale processing activities in 1953 by developing Petrosix technology for extracting oil from oil shale of the Irati formation. A 5.5 metres (18 ft) inside diameter semi-works retort (the Irati Profile Plant) with capacity of 2,400 tons per day, was brought on line in 1972, and began limited commercial operation in 1980. The first retort that used current Petrosix technology was a 0.2 metres (0.7 ft) internal diameter retort pilot plant started in 1982. It was followed by a 2 metres (6.6 ft) retort demonstration plant in 1984. A 11 metres (36 ft) retort was brought into service in December 1991, and commercial production started in 1992. The company operates two retorts which process 8,500 tons of oil shale daily. [ 1 ] [ 2 ]
The Petrosix 11 metres (36 ft) vertical shaft retort is the world's largest operational surface oil shale pyrolysis reactor. [ 1 ] [ 3 ] It was designed by Cameron Engineers . The retort has the upper pyrolysis section and lower shale coke cooling section. The retort capacity is 6,200 tons of oil shale per day, and it yields a nominal daily output of 3,870 barrels of shale oil (i.e., 550 tons of oil, approximately 1 ton of oil per 11 tons of shale), 132 tons of oil shale gas , 50 tons of liquefied oil shale gas, and 82 tons of sulfur. [ 1 ] [ 2 ]
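A quick arithmetic check of these quoted daily figures (a sketch; the barrel-to-litre conversion and the inferred oil density are assumptions, not values from the source):

```python
# Quick consistency check of the quoted daily figures (sketch; the 158.99 L/bbl
# conversion and the inferred ~0.9 kg/L shale-oil density are assumptions).
shale_t_per_day = 6200
oil_t_per_day = 550
oil_bbl_per_day = 3870
LITRES_PER_BARREL = 158.99

print(round(shale_t_per_day / oil_t_per_day, 1))            # ~11.3 t shale per t oil
implied_density = (oil_t_per_day * 1000) / (oil_bbl_per_day * LITRES_PER_BARREL)
print(round(implied_density, 2))                            # ~0.89 kg/L, plausible for shale oil
```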
Petrosix is one of four technologies of shale oil extraction in commercial use. [ 2 ] It is an above-ground retorting technology, which uses externally generated hot gas for the oil shale pyrolysis. [ 4 ] After mining, the shale is transported by trucks to a crusher and screens, where it is reduced to particles (lump shale). These particles are between 12 millimetres (0.5 in) and 75 millimetres (3.0 in) and have an approximately parallelepipedic shape. [ 5 ] These particles are transported on a belt to a vertical cylindrical vessel, where the shale is heated up to about 500 °C (932 °F) for pyrolysis. [ 2 ] Oil shale enters through the top of the retort while hot gases are injected into the middle of the retort. The oil shale is heated by the gases as it moves down. As a result, the kerogen in the shale decomposes to yield oil vapor and more gas. Cold gas is injected into the bottom of the retort to cool and recover heat from the spent shale . Cooled spent shale is discharged through a water seal with drag conveyor below the retort. Oil mist and cooled gases are removed through the top of the retort and enter a wet electrostatic precipitator where the oil droplets are coalesced and collected. The gas from the precipitator is compressed and split into three parts. [ 6 ]
One part of the compressed retort gas is heated in a furnace to 600 °C (1,112 °F) and recirculated back to the middle of the retort for heating and pyrolyzing the oil shale, and another part is circulated cold into the bottom of the retort, where it cools down the spent shale, heats up itself, and ascends into the pyrolysis section as a supplementary heat source for heating the oil shale. The third part undergoes further cooling for light oil (naphtha) and water removal and then sent to the gas treatment unit, where fuel gas and liquefied petroleum gas (LPG) are produced and sulfur recovered. [ 7 ]
One drawback of this process is that the potential heat from the combustion of the char contained in the shale is not utilized. [ 2 ] Also, oil shale particles smaller than 12 millimetres (0.5 in) cannot be processed in the Petrosix retort. These fines may account for 10 to 30 per cent of the crushed feed. | https://en.wikipedia.org/wiki/Petrosix |
In differential geometry and theoretical physics , the Petrov classification (also known as Petrov–Pirani–Penrose classification) describes the possible algebraic symmetries of the Weyl tensor at each event in a Lorentzian manifold .
It is most often applied in studying exact solutions of Einstein's field equations , but strictly speaking the classification is a theorem in pure mathematics applying to any Lorentzian manifold, independent of any physical interpretation. The classification was found in 1954 by A. Z. Petrov and independently by Felix Pirani in 1957.
We can think of a fourth rank tensor such as the Weyl tensor , evaluated at some event, as acting on the space of bivectors at that event like a linear operator acting on a vector space.
Then, it is natural to consider the problem of finding eigenvalues $\lambda$ and eigenvectors (which are now referred to as eigenbivectors) $X^{ab}$ of this map; the eigenbivector equation is reconstructed below.
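The displayed equation is missing from this extract; reconstructed from the surrounding definitions it reads (the factor of 1/2 compensates for the sum over the antisymmetric index pair and is omitted in some references):

\[
\tfrac{1}{2}\, C^{ab}{}_{cd}\, X^{cd} = \lambda\, X^{ab} .
\]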
In (four-dimensional) Lorentzian spacetimes, there is a six-dimensional space of antisymmetric bivectors at each event. However, the symmetries of the Weyl tensor imply that any eigenbivectors must belong to a four-dimensional subset.
Thus, the Weyl tensor (at a given event) can in fact have at most four linearly independent eigenbivectors.
The eigenbivectors of the Weyl tensor can occur with various multiplicities and any multiplicities among the eigenbivectors indicates a kind of algebraic symmetry of the Weyl tensor at the given event. The different types of Weyl tensor (at a given event) can be determined by solving a characteristic equation , in this case a quartic equation . All the above happens similarly to the theory of the eigenvectors of an ordinary linear operator.
These eigenbivectors are associated with certain null vectors in the original spacetime, which are called the principal null directions (at a given event).
The relevant multilinear algebra is somewhat involved (see the citations below), but the resulting classification theorem states that there are precisely six possible types of algebraic symmetry. These are known as the Petrov types : type I (four distinct simple principal null directions), type II (one double and two simple principal null directions), type D (two double principal null directions), type III (one triple and one simple principal null direction), type N (a single quadruple principal null direction), and type O (the Weyl tensor vanishes).
The possible transitions between Petrov types are shown in the figure, which can also be interpreted as stating that some of the Petrov types are "more special" than others. For example, type I , the most general type, can degenerate to types II or D , while type II can degenerate to types III , N , or D .
Different events in a given spacetime can have different Petrov types. A Weyl tensor that has type I (at some event) is called algebraically general ; otherwise, it is called algebraically special (at that event). In General Relativity, type O spacetimes are conformally flat .
The Newman–Penrose formalism is often used in practice for the classification. Consider the following set of bivectors, constructed out of tetrads of null vectors (note that in some notations, symbols l and n are interchanged):
The Weyl tensor can be expressed as a combination of these bivectors through
where the $\{\Psi_j\}$ are the Weyl scalars and c.c. is the complex conjugate. The six different Petrov types are distinguished by which of the Weyl scalars vanish. The conditions are summarised below.
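The table of conditions is missing from this extract; the standard statement, in a null tetrad whose vector $l$ is aligned with the principal null direction of highest multiplicity (conventions vary between references), is:

Type I : $\Psi_0 = 0$
Type II : $\Psi_0 = \Psi_1 = 0$
Type D : $\Psi_0 = \Psi_1 = \Psi_3 = \Psi_4 = 0$ (only $\Psi_2 \neq 0$)
Type III : $\Psi_0 = \Psi_1 = \Psi_2 = 0$
Type N : $\Psi_0 = \Psi_1 = \Psi_2 = \Psi_3 = 0$ (only $\Psi_4 \neq 0$)
Type O : all $\Psi_j = 0$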
Given a metric on a Lorentzian manifold $M$ , the Weyl tensor $C$ for this metric may be computed. If the Weyl tensor is algebraically special at some $p \in M$ , there is a useful set of conditions, found by Lluis (or Louis) Bel and Robert Debever, [ 1 ] for determining precisely the Petrov type at $p$ . Denoting the Weyl tensor components at $p$ by $C_{abcd}$ (assumed non-zero, i.e., not of type O ), the Bel criteria may be stated as:
where $k$ is necessarily null and unique (up to scaling).
where $k$ is necessarily null and unique (up to scaling).
where $k$ is necessarily null and unique (up to scaling).
and
where ${}^{*}C_{abcd}$ is the dual of the Weyl tensor at $p$ .
In fact, for each criterion above, there are equivalent conditions for the Weyl tensor to have that type. These equivalent conditions are stated in terms of the dual and self-dual of the Weyl tensor and certain bivectors and are collected together in Hall (2004).
The Bel criteria find application in general relativity where determining the Petrov type of algebraically special Weyl tensors is accomplished by searching for null vectors.
According to general relativity , the various algebraically special Petrov types have some interesting physical interpretations, the classification then sometimes being called the classification of gravitational fields .
Type D regions are associated with the gravitational fields of isolated massive objects, such as stars. More precisely, type D fields occur as the exterior field of a gravitating object which is completely characterized by its mass and angular momentum. (A more general object might have nonzero higher multipole moments .) The two double principal null directions define "radially" ingoing and outgoing null congruences near the object which is the source of the field.
The electrogravitic tensor (or tidal tensor ) in a type D region is very closely analogous to the gravitational fields which are described in Newtonian gravity by a Coulomb type gravitational potential . Such a tidal field is characterized by tension in one direction and compression in the orthogonal directions; the eigenvalues have the pattern (-2,1,1). For example, a spacecraft orbiting the Earth experiences a tiny tension along a radius from the center of the Earth, and a tiny compression in the orthogonal directions. Just as in Newtonian gravitation, this tidal field typically decays like $O(r^{-3})$ , where $r$ is the distance from the object.
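As a worked Newtonian analogue (a sketch, with $G$ and the central mass $M$ written explicitly, and up to an overall sign convention for the tidal tensor):

\[
\Phi = -\frac{GM}{r}, \qquad
E_{ij} = \partial_i \partial_j \Phi = \frac{GM}{r^3}\left(\delta_{ij} - \frac{3\, x_i x_j}{r^2}\right),
\]

which in an orthonormal frame with one axis pointing radially has eigenvalues $\frac{GM}{r^3}(-2,\,1,\,1)$, reproducing both the $(-2,1,1)$ pattern and the $O(r^{-3})$ fall-off quoted above.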
If the object is rotating about some axis , in addition to the tidal effects, there will be various gravitomagnetic effects, such as spin-spin forces on gyroscopes carried by an observer. In the Kerr vacuum , which is the best known example of type D vacuum solution, this part of the field decays like $O(r^{-4})$ .
Type III regions are associated with a kind of longitudinal gravitational radiation. In such regions, the tidal forces have a shearing effect. This possibility is often neglected, in part because the gravitational radiation which arises in weak-field theory is type N , and in part because type III radiation decays like $O(r^{-2})$ , which is faster than type N radiation.
Type N regions are associated with transverse gravitational radiation, which is the type astronomers have detected with LIGO .
The quadruple principal null direction corresponds to the wave vector describing the direction of propagation of this radiation. It typically decays like $O(r^{-1})$ , so the long-range radiation field is type N .
Type II regions combine the effects noted above for types D , III , and N , in a rather complicated nonlinear way.
Type O regions, or conformally flat regions, are associated with places where the Weyl tensor vanishes identically. In this case, the curvature is said to be pure Ricci . In a conformally flat region, any gravitational effects must be due to the immediate presence of matter or the field energy of some nongravitational field (such as an electromagnetic field ). In a sense, this means that any distant objects are not exerting any long range influence on events in our region. More precisely, if there are any time varying gravitational fields in distant regions, the news has not yet reached our conformally flat region.
Gravitational radiation emitted from an isolated system will usually not be algebraically special.
The peeling theorem describes the way in which, as one moves farther away from the source of the radiation, the various components of the radiation field "peel" off, until finally only type N radiation is noticeable at large distances. This is similar to the electromagnetic peeling theorem .
In some (more or less) familiar solutions, the Weyl tensor has the same Petrov type at each event; the Kerr vacuum , for example, is of type D at every event.
More generally, any spherically symmetric spacetime must be of type D (or O ). All algebraically special spacetimes having various types of stress–energy tensor are known, for example, all the type D vacuum solutions.
Some classes of solutions can be invariantly characterized using algebraic symmetries of the Weyl tensor: for example, the class of non-conformally flat null electrovacuum or null dust solutions admitting an expanding but nontwisting null congruence is precisely the class of Robinson–Trautman spacetimes . These are usually type II , but include type III and type N examples.
A. Coley, R. Milson, V. Pravda and A. Pravdová (2004) developed a generalization of algebraic classification to arbitrary spacetime dimension d {\displaystyle d} . Their approach uses a null frame basis, that is, a frame basis containing two null vectors l {\displaystyle l} and n {\displaystyle n} , along with d − 2 {\displaystyle d-2} spacelike vectors. Frame basis components of the Weyl tensor are classified by their transformation properties under local Lorentz boosts . If particular Weyl components vanish, then l {\displaystyle l} and/or n {\displaystyle n} are said to be Weyl-Aligned Null Directions (WANDs). In four dimensions, l {\displaystyle l} is a WAND if and only if it is a principal null direction in the sense defined above. This approach gives a natural higher-dimensional extension of each of the various algebraic types II , D etc. defined above.
An alternative, but inequivalent, generalization was previously defined by de Smet (2002), based on a spinorial approach . However, de Smet's approach is restricted to 5 dimensions only. | https://en.wikipedia.org/wiki/Petrov_classification |
In quantum information theory , a mix of quantum mechanics and information theory , the Petz recovery map can be thought of as a quantum analog of Bayes' theorem . Proposed by Dénes Petz , [ 1 ] the Petz recovery map is a quantum channel associated with a given quantum channel and quantum state. This recovery map is designed so that, when applied to an output state resulting from the given quantum channel acting on an input state, it enables the inference of the original input state. In essence, the Petz recovery map serves as a tool for reconstructing information about the initial quantum state from its transformed counterpart under the influence of the specified quantum channel.
The Petz recovery map finds applications in various domains, including quantum retrodiction , [ 2 ] quantum error correction, [ 3 ] and entanglement wedge reconstruction for black hole physics. [ 4 ] [ 5 ]
Suppose we have a quantum state described by a density operator σ {\displaystyle \sigma } and a quantum channel E {\displaystyle {\mathcal {E}}} ; the Petz recovery map is then defined as [ 1 ] [ 6 ]
P σ , E ( X ) = σ 1 2 E † ( E ( σ ) − 1 2 X E ( σ ) − 1 2 ) σ 1 2 {\displaystyle {\mathcal {P}}_{\sigma ,{\mathcal {E}}}(X)=\sigma ^{\frac {1}{2}}{\mathcal {E}}^{\dagger }\left({\mathcal {E}}(\sigma )^{-{\frac {1}{2}}}X{\mathcal {E}}(\sigma )^{-{\frac {1}{2}}}\right)\sigma ^{\frac {1}{2}}.}
Notice that E † {\displaystyle {\mathcal {E}}^{\dagger }} is the Hilbert-Schmidt adjoint of E {\displaystyle {\mathcal {E}}} .
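For concreteness, the map can be evaluated numerically once the channel is specified by Kraus operators. The following is a minimal sketch; the helper-function names, the amplitude-damping example channel and the example states are illustrative assumptions, not taken from the cited references.

```python
import numpy as np

def apply_channel(kraus, rho):
    """E(rho) = sum_k K_k rho K_k^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

def apply_adjoint(kraus, X):
    """Hilbert-Schmidt adjoint: E^dagger(X) = sum_k K_k^dagger X K_k."""
    return sum(K.conj().T @ X @ K for K in kraus)

def herm_power(A, p, tol=1e-12):
    """Power of a positive semidefinite matrix; eigenvalues below tol are
    treated as zero (pseudo-inverse convention for negative powers)."""
    vals, vecs = np.linalg.eigh(A)
    powered = np.array([v ** p if v > tol else 0.0 for v in vals])
    return (vecs * powered) @ vecs.conj().T

def petz_recovery(kraus, sigma, X):
    """P_{sigma,E}(X) = sigma^(1/2) E^dag( E(sigma)^(-1/2) X E(sigma)^(-1/2) ) sigma^(1/2)."""
    inv_sqrt = herm_power(apply_channel(kraus, sigma), -0.5)
    sqrt_sigma = herm_power(sigma, 0.5)
    return sqrt_sigma @ apply_adjoint(kraus, inv_sqrt @ X @ inv_sqrt) @ sqrt_sigma

# Illustrative example: amplitude-damping channel on one qubit.
p = 0.3
kraus = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - p)]]),
         np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])]
sigma = np.diag([0.6, 0.4])                          # reference state
rho_in = np.array([[0.5, 0.3], [0.3, 0.5]])          # input state
rho_out = apply_channel(kraus, rho_in)               # channel output
print(np.round(petz_recovery(kraus, sigma, rho_out), 4))  # approximate recovery of rho_in
```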
The Petz map has been generalized in various ways in the field of quantum information theory. [ 7 ] [ 8 ]
A crucial property of the Petz recovery map is its ability to function as a quantum channel in certain cases, making it an essential tool in quantum information theory.
In particular, the Petz recovery map is trace non-increasing; writing Π E ( σ ) {\displaystyle \Pi _{{\mathcal {E}}(\sigma )}} for the projector onto the support of E ( σ ) {\displaystyle {\mathcal {E}}(\sigma )} , one has Tr [ P σ , E ( X ) ] = Tr [ σ 1 2 E † ( E ( σ ) − 1 2 X E ( σ ) − 1 2 ) σ 1 2 ] = Tr [ σ E † ( E ( σ ) − 1 2 X E ( σ ) − 1 2 ) ] = Tr [ E ( σ ) E ( σ ) − 1 2 X E ( σ ) − 1 2 ] = Tr [ E ( σ ) − 1 2 E ( σ ) E ( σ ) − 1 2 X ] = Tr [ Π E ( σ ) X ] ≤ Tr [ X ] {\displaystyle {\begin{aligned}\operatorname {Tr} \left[{\mathcal {P}}_{\sigma ,{\mathcal {E}}}(X)\right]&=\operatorname {Tr} \left[\sigma ^{\frac {1}{2}}{\mathcal {E}}^{\dagger }\left({\mathcal {E}}(\sigma )^{-{\frac {1}{2}}}X{\mathcal {E}}(\sigma )^{-{\frac {1}{2}}}\right)\sigma ^{\frac {1}{2}}\right]\\&=\operatorname {Tr} \left[\sigma {\mathcal {E}}^{\dagger }\left({\mathcal {E}}(\sigma )^{-{\frac {1}{2}}}X{\mathcal {E}}(\sigma )^{-{\frac {1}{2}}}\right)\right]\\&=\operatorname {Tr} \left[{\mathcal {E}}(\sigma ){\mathcal {E}}(\sigma )^{-{\frac {1}{2}}}X{\mathcal {E}}(\sigma )^{-{\frac {1}{2}}}\right]\\&=\operatorname {Tr} \left[{\mathcal {E}}(\sigma )^{-{\frac {1}{2}}}{\mathcal {E}}(\sigma ){\mathcal {E}}(\sigma )^{-{\frac {1}{2}}}X\right]\\&=\operatorname {Tr} \left[\Pi _{{\mathcal {E}}(\sigma )}X\right]\\&\leq \operatorname {Tr} [X]\end{aligned}}}
The Petz recovery map is completely positive, since it is a composition of completely positive maps; combined with the trace computation above, this shows that when E ( σ ) {\displaystyle {\mathcal {E}}(\sigma )} is invertible, the Petz recovery map P σ , E {\displaystyle {\mathcal {P}}_{\sigma ,{\mathcal {E}}}} is a quantum channel, viz., a completely positive trace-preserving (CPTP) map. | https://en.wikipedia.org/wiki/Petz_recovery_map |
Pewter ( / ˈ p juː t ər / ) is a malleable metal alloy consisting of tin (85–99%), antimony (approximately 5–10%), copper (2%), [ 1 ] bismuth , [ 2 ] and sometimes silver . [ 3 ] In the past, it was an alloy of tin and lead , but most modern pewter, in order to prevent lead poisoning , is not made with lead. Pewter has a low melting point , around 170–230 °C (338–446 °F), depending on the exact mixture of metals. [ 4 ] [ 5 ] The word pewter is possibly a variation of " spelter ", a term for zinc alloys (originally a colloquial name for zinc). [ 6 ]
Pewter was first used around the beginning of the Bronze Age in the Near East . The earliest known piece of pewter was found in an Egyptian tomb, c. 1450 BC , [ 7 ] but it is unlikely that this was the first use of the material. Pewter was used for decorative metal items and tableware in ancient times by the Egyptians and later the Romans, and came into extensive use in Europe from the Middle Ages [ 2 ] until the various developments in pottery and glass-making during the 18th and 19th centuries. Pewter was a leading material for producing plates, cups, and bowls before the wide adoption of porcelain . Mass production of pottery, porcelain and glass products has almost universally replaced pewter in daily life, although pewter artifacts continue to be produced, mainly as decorative or specialty items. Pewter was also used in East Asia . Although some items still exist, [ 8 ] ancient Roman pewter is rare. [ 9 ]
Lidless mugs and lidded tankards may be the most familiar pewter artifacts from the late 17th and 18th centuries, although the metal was also used for many other items including porringers (shallow bowls), plates, dishes, basins, spoons, measures, flagons, communion cups, teapots, sugar bowls, beer steins (tankards), and cream jugs. In the early 19th century, changes in fashion caused a decline in the use of pewter flatware. At the same time, production increased of both cast and spun pewter tea sets, whale-oil lamps, candlesticks, and so on. Later in the century, pewter alloys were often used as a base metal for silver-plated objects.
In the late 19th century, pewter came back into fashion with the revival of medieval objects for decoration. New replicas of medieval pewter objects were created, and collected for decoration. Today, pewter is used in decorative objects, mainly collectible statuettes and figurines, game figures, aircraft and other models, (replica) coins, pendants, plated jewellery and so on. Certain athletic contests, such as the United States Figure Skating Championships , award pewter medals to fourth-place finishers. [ 10 ]
In antiquity, pewter was tin alloyed with lead and sometimes also copper . Older pewters with higher lead content are heavier, tarnish faster, and their oxidation has a darker, silver-gray color. [ 11 ] Pewters containing lead are no longer used in items that will come in contact with the human body (such as cups, plates, or jewelry), due to the toxicity of lead . Modern pewters are available that are completely free of lead, although many pewters containing lead are still being produced for other purposes. [ 12 ]
A typical European casting alloy contains 94% tin, 1% copper and 5% antimony . A European pewter sheet would contain 92% tin, 2% copper, and 6% antimony. Asian pewter, produced mostly in Malaysia , Singapore , and Thailand , contains a higher percentage of tin, usually 97.5% tin, 1% copper, and 1.5% antimony. This makes the alloy slightly softer. [ 7 ]
The term Mexican pewter is used for any of various alloys of aluminium that are used for decorative items. [ 13 ] [ 14 ] [ 15 ]
Pewter is also used to imitate platinum in costume jewelry.
Pewter, being a softer material, can be manipulated in various ways such as being cast , hammered, turned , spun and engraved .
Given that pewter is soft at room temperature, a pewter bell does not ring clearly. Cooling it in liquid nitrogen hardens it and enables it to ring, but also makes it more brittle. [ 16 ] | https://en.wikipedia.org/wiki/Pewter |
The Pfafstetter Coding System is a hierarchical method of hydrologically coding river basins . It was developed by the Brazilian engineer Otto Pfafstetter [ pt ] in 1989. [ 1 ] It is designed such that topological information is embedded in the code, which makes it easy to determine whether an event in one river basin will affect another by direct examination of their codes. [ 2 ]
In the 1950s, Pfafstetter suggested the use of a hierarchical system of coding river basins, [ 3 ] later described in a 1989 paper. [ 4 ] The method was applied to Brazilian water networks, and has been used in a number of other applications. [ 5 ] [ 6 ] [ 7 ]
The Pfafstetter system relies on the properties of the base-10 numbering system. In a water system to be coded, the main stem is defined as the path which drains the greatest area. The four major tributaries, in terms of water drainage, of the main stem are determined, and the water basin of each defined. This results in four tributary basins, as well as five inter-basin regions which are drained by the main stem. [ 2 ] [ 4 ]
Each region is then numbered from 1 to 9, starting with the downstream-most inter-basin region (numbered 1) and proceeding upstream, alternating between tributary basins and inter-basin regions; the inter-basin regions therefore receive the odd numbers 1, 3, 5, 7 and 9, while the tributary basins receive the even numbers 2, 4, 6 and 8. The number 0 is reserved for closed drainage systems. [ 2 ] [ 4 ]
Each tributary basin is then coded in an identical manner, and the resulting number appended to the end of the tributary basin number. In this manner, the entire waterway may be coded in a recursive manner to an arbitrary precision. [ 2 ] [ 4 ]
The primary advantage of the Pfafstetter system is that the drainage topology is directly encoded in the digits: because each level of subdivision appends digits to the code of the containing basin, the code of any sub-basin begins with the code of its parent basin, and upstream/downstream relationships can be read off by comparing codes digit by digit.
Therefore, given a point with code A on the water system, a point with code B is downstream if every digit of B that follows the digits shared with A is odd, and the code B is smaller than the code A. [ 4 ]
For example, segment 8835 is upstream of segments 8833 and 8811, but not segments 8832, 8821 or 9135.
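The comparison rule can be made concrete with a short sketch. The function below encodes the rule stated above, as implied by the worked example, under the simplifying assumption that the two codes have the same length; the function name and structure are illustrative rather than part of the original specification.

```python
def is_downstream(b: str, a: str) -> bool:
    """True if the segment coded `b` lies downstream of the segment coded `a`:
    b must be numerically smaller than a, and every digit of b after the
    prefix shared with a must be odd (i.e. main-stem inter-basin digits)."""
    if len(b) != len(a) or b >= a:          # equal-length codes assumed
        return False
    i = 0
    while i < len(a) and a[i] == b[i]:      # skip the common prefix
        i += 1
    return all(int(d) % 2 == 1 for d in b[i:])

# The worked example from the text:
assert is_downstream("8833", "8835")        # downstream of 8835
assert is_downstream("8811", "8835")        # downstream of 8835
assert not is_downstream("8832", "8835")    # tributary basin
assert not is_downstream("8821", "8835")    # tributary basin
assert not is_downstream("9135", "8835")    # different part of the system
```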
The Pfafstetter system is particularly efficient; n -digit codes can describe a water system containing up to 10 n {\displaystyle 10^{n}} segments. It compares favourably in this respect to the USGS HUC method. [ 4 ] | https://en.wikipedia.org/wiki/Pfafstetter_Coding_System |
The Pfeiffer effect is an optical phenomenon whereby the presence of an optically active compound influences the optical rotation of a racemic mixture of a second compound.
Racemic mixtures do not rotate plane polarized light , but the equilibrium ratio of the two enantiomers can shift away from unity in the presence of a strongly interacting chiral species. Paul Pfeiffer , a student of Alfred Werner and inventor of the salen ligand , reported this phenomenon. [ 1 ] The first example of the effect is credited to Eligio Perucca , [ 2 ] who observed optical rotations in the visible part of the spectrum when crystals of sodium chlorate , which are chiral and colourless, were stained with a racemic dye. [ 3 ] The effect is attributed to the interaction of the optically pure substance with the second coordination sphere of the racemate.
| https://en.wikipedia.org/wiki/Pfeiffer_effect |
The Pfitzinger reaction (also known as the Pfitzinger-Borsche reaction ) is the chemical reaction of isatin with base and a carbonyl compound to yield substituted quinoline -4- carboxylic acids . [ 1 ] [ 2 ]
Several reviews have been published. [ 3 ] [ 4 ] [ 5 ]
The reaction of isatin with a base such as potassium hydroxide hydrolyses the amide bond to give the keto-acid 2 . This intermediate can be isolated, but is typically not. A ketone (or aldehyde ) will react with the aniline to give the imine ( 3 ) and the enamine ( 4 ). The enamine will cyclize and dehydrate to give the desired quinoline ( 5 ).
Reaction of N - acyl isatins with base gives 2- hydroxy - quinoline -4- carboxylic acids . [ 6 ] | https://en.wikipedia.org/wiki/Pfitzinger_reaction |
The Pfitzner–Moffatt oxidation , sometimes referred to as simply the Moffatt oxidation , is a chemical reaction for the oxidation of primary and secondary alcohols to aldehydes and ketones , respectively. The oxidant is a combination of dimethyl sulfoxide (DMSO) and dicyclohexylcarbodiimide (DCC). The reaction was first reported by J. Moffatt and his student K. Pfitzner in 1963. [ 1 ] [ 2 ]
The reaction requires one equivalent each of the diimide, which is the dehydrating agent, and the sulfoxide, the oxidant:
Typically the sulfoxide and diimide are used in excess. [ 3 ] The reaction cogenerates dimethyl sulfide and a urea . Dicyclohexylurea ((CyNH) 2 CO) can be difficult to remove from the product.
In terms of mechanism, the reaction is proposed to involve the intermediary of an sulfonium group, formed by a reaction between DMSO and the carbodiimide.
This species is highly reactive and is attacked by the alcohol. Rearrangement give an alkoxysulfonium ylide which decomposes to give dimethyl sulfide and the carbonyl compound.
This reaction has been largely displaced by the Swern oxidation , which also uses DMSO as an oxidant in the presence of an electrophilic activator. Swern oxidations tend to give higher yields and simpler workup; however, they typically employ cryogenic conditions. [ 4 ] [ 5 ] | https://en.wikipedia.org/wiki/Pfitzner–Moffatt_oxidation |
The Pfizer Award is awarded annually by the History of Science Society "in recognition of an outstanding book dealing with the history of science " that was "published in English during a period of three calendar years immediately preceding the year of competition." [ 1 ] | https://en.wikipedia.org/wiki/Pfizer_Award |
The Pfizer Award in Enzyme Chemistry , formerly known as the Paul-Lewis Award in Enzyme Chemistry , [ 1 ] was established in 1945. Consisting of a gold medal and an honorarium, its purpose is to stimulate fundamental research in enzyme chemistry by scientists not over forty years of age. The award is administered by the Division of Biological Chemistry of the American Chemical Society and sponsored by Pfizer . [ 2 ] [ 3 ] The award was terminated in 2022. [ 3 ]
| https://en.wikipedia.org/wiki/Pfizer_Award_in_Enzyme_Chemistry |
Pfu DNA polymerase is an enzyme found in the hyperthermophilic archaeon Pyrococcus furiosus , where it functions to copy the organism's DNA during cell division ( thermostable DNA polymerase ). In the laboratory setting, Pfu is used to amplify DNA in the polymerase chain reaction (PCR), where the enzyme serves the central function of copying a new strand of DNA during each extension step.
It is a family B DNA polymerase . It has an RNase H -like 3'-5' exonuclease domain, typical of B-family polymerase such as DNA polymerase II . [ 1 ]
Pfu DNA polymerase has superior thermostability and proofreading properties compared with Taq DNA polymerase . Unlike Taq DNA polymerase, Pfu DNA polymerase possesses 3' to 5' exonuclease proofreading activity, meaning that as the DNA is assembled from the 5' end to 3' end , the exonuclease activity immediately removes nucleotides misincorporated at the 3' end of the growing DNA strand. Consequently, Pfu DNA polymerase-generated PCR fragments will have fewer errors than Taq -generated PCR inserts.
Commercially available Pfu typically results in an error rate of 1 in 1.3 million base pairs and can yield 2.6% mutated products when amplifying 1 kb fragments using PCR. However, Pfu is slower and typically requires 1–2 minutes per cycle to amplify 1kb of DNA at 72 °C. Using Pfu DNA polymerase in PCR reactions also results in blunt-ended PCR products. [ 2 ]
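As a rough illustration of how these figures relate, the expected fraction of product molecules carrying at least one error grows with the amplicon length and the number of effective template doublings during the PCR. The sketch below is a simple model for illustration only; the doubling counts are assumptions, not values taken from this article or its references.

```python
def mutated_fraction(error_rate, length_bp, doublings):
    """Probability that an amplified molecule carries at least one
    misincorporation, given a per-base per-duplication error rate."""
    return 1.0 - (1.0 - error_rate) ** (length_bp * doublings)

pfu_error_rate = 1 / 1.3e6      # "1 error in 1.3 million base pairs", as quoted above
length_bp = 1000                # 1 kb amplicon
for doublings in (20, 30, 35):  # assumed effective doublings for a typical PCR run
    print(doublings, f"{mutated_fraction(pfu_error_rate, length_bp, doublings):.2%}")
# Prints roughly 1.5%, 2.3% and 2.7%; under this simple model the 2.6% figure
# quoted above corresponds to about 34 effective template doublings.
```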
Pfu DNA polymerase is hence superior to Taq DNA polymerase for techniques that require high-fidelity DNA synthesis, but can also be used in conjunction with Taq polymerase to obtain the fidelity of Pfu with the speed of Taq polymerase activity. [ 3 ]
In 1991, scientists led by Eric Mathur at the biotech company Stratagene, based in La Jolla, California , discovered Pfu DNA polymerase, which exhibits significantly higher fidelity of replication than Taq DNA polymerase. [ 4 ] They received patents for exonuclease-deficient Pfu and the full Pfu in 1996. [ 5 ]
Other polymerases from Pyrococcus strains such as "Deep Vent" ( Q51334 ) from strain GB-D and Pwo DNA polymerase have also seen use. [ 3 ] | https://en.wikipedia.org/wiki/Pfu_DNA_polymerase |
Pentaphenylphosphorus is an organic phosphorane containing five phenyl groups connected to a central phosphorus atom. The phosphorus atom is considered to be in the +5 oxidation state . The chemical formula could be written as P(C 6 H 5 ) 5 or Ph 5 P, where Ph represents the phenyl group. It was discovered and reported in 1949 by Georg Wittig . [ 2 ]
Pentaphenylphosphorus can be formed by the action of phenyllithium on tetraphenylphosphonium bromide or tetraphenylphosphonium iodide. [ 3 ] The compound was produced during the course of Wittig's Nobel-prize-winning investigations of organophosphorus compounds. [ 2 ]
Pentaphenylphosphorus is trigonal bipyramidal, according to several determinations by X-ray crystallography . The axial and equatorial P-C bond lengths are 199 and 185 picometers , respectively. [ 4 ]
The monoclinic crystal has unit-cell dimensions a = 10.03 Å, b = 17.22 Å, c = 14.17 Å and β = 112.0°. [ 4 ] Pentaphenylphosphorus can also crystallise with solvent (to form a solvate ) with tetrahydrofuran and cyclohexane . [ 5 ] [ 6 ]
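For reference, the volume of a monoclinic unit cell follows from V = a·b·c·sin β. Applying this to the parameters above is a straightforward computation; the resulting figure is derived here only as an illustration and is not a value reported in the cited crystallographic studies.

```python
import math

a, b, c = 10.03, 17.22, 14.17     # unit-cell lengths in angstroms
beta_deg = 112.0                  # monoclinic angle in degrees

volume = a * b * c * math.sin(math.radians(beta_deg))   # V = a*b*c*sin(beta)
print(f"{volume:.0f} cubic angstroms")                   # about 2269 A^3
```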
On heating, pentaphenylphosphorus decomposes to form biphenyl and triphenylphosphine . [ 2 ]
Pentaphenylphosphorus reacts with compounds containing acidic hydrogen to yield the tetraphenylphosphonium ion and benzene. [ 2 ] For example, pentaphenylphosphorus reacts with carboxylic acids and sulfonic acids to yield the tetraphenylphosphonium salt of the carboxylate or sulfonate, and benzene. [ 7 ]
Pentaphenylphosphorus transfers a phenyl group to organomercury and tin halides. For example, pentaphenylphosphorus reacts with phenylmercury chloride to yield diphenylmercury and tetraphenylphosphonium chloride. With tributyltin chloride , tributylphenyltin is produced. However, the reaction of pentaphenylphosphorus with triphenylbismuth difluoride , chloride or bromide gives triphenylbismuth and fluorobenzene , chlorobenzene or bromobenzene . This is probably because tetraphenylbismuth halides (Ph 4 BiF, Ph 4 BiCl, Ph 4 BiBr) spontaneously decompose as the halogen reacts with one phenyl group. [ 8 ]
When heated with carbon dioxide or sulfur, bicyclic compounds are formed, where the reactant bridges between one of the phenyl groups and the phosphorus. [ 9 ] | https://en.wikipedia.org/wiki/Ph5P |
PhEVER is a database of homologous gene families between viral sequences and sequences from cellular organisms . [ 1 ]
| https://en.wikipedia.org/wiki/PhEVER |
Iodobenzene dichloride (PhICl 2 ) is a complex of iodobenzene with chlorine . As a reagent for organic chemistry, it is used as an oxidant and chlorinating agent .
Single-crystal X-ray crystallography has been used to determine its structure; as can be predicted by VSEPR theory , it adopts a T-shaped geometry about the central iodine atom. [ 2 ]
Iodobenzene dichloride is not stable and is not commonly available commercially. It is prepared by passing chlorine gas through a solution of iodobenzene in chloroform , from which it precipitates. [ 3 ] The same reaction has been reported at pilot plant scale (20 kg) as well. [ 4 ]
An alternate preparation involving the use of chlorine generated in situ by the action of sodium hypochlorite on hydrochloric acid has also been described. [ 5 ]
Iodobenzene dichloride is hydrolyzed by basic solutions to give iodosobenzene (PhIO) [ 6 ] and is oxidized by sodium hypochlorite to give iodoxybenzene (PhIO 2 ). [ 7 ]
In organic synthesis , iodobenzene dichloride is used as a reagent for the selective chlorination of alkenes [ 1 ] and alkynes . [ 8 ] | https://en.wikipedia.org/wiki/PhICl2 |
Phage immunoprecipitation sequencing (PhIP-Seq) is a method that combines barcoded DNA high-throughput sequencing and proteomics to determine the levels of binding of antibodies to epitopes . It has been used to study the autoantibody repertoire of autoimmune diseases like multiple sclerosis , type 2 diabetes and rheumatoid arthritis . [ 1 ] [ 2 ]
| https://en.wikipedia.org/wiki/PhIP-Seq |
PhTx-1 is a toxic fraction isolated from the venom of the Brazilian wandering spider Phoneutria nigriventer .
PhTx-1 has as its main toxin PnTx-1, with a molecular mass of 8,594.6 Da, composed of 77 amino acid residues, 14 of which are cysteines. [ 1 ] PnTx-1, also known as Tx1, exerts an inhibitory effect on neuronal sodium channels (Nav1.2). [ 2 ] This component is considered highly pathogenic for primates .
PhTx-2 is a toxic fraction of the venom of the Brazilian wandering spider Phoneutria nigriventer .
This fraction, which is responsible for most of the venom's effects, acts on voltage-gated ion channels . It is composed of nine different peptides , of which PhTx-2-5 and PhTx-2-6 activate voltage-gated ion channels. [ 1 ] PhTx-2 has been shown to activate and delay the inactivation of neuronal sodium channels, leading to an increase in the concentration of neuronal Ca++ and the release of glutamate , and resulting in the release of neurotransmitters such as acetylcholine and catecholamines . [ 2 ] Primates are about 4 to 5 times more sensitive to the PhTx-1 and PhTx-2 components than mice . The LD50 for a 70 kg adult human is 6.3 mg, but the spider has only 1-2 mg of venom and usually delivers about 0.4 mg.
Phaedon Avouris ( Greek : Φαίδων Αβούρης ; born 1945) leads IBM ’s nanoscience and nanotechnology research efforts. His research explores the application of molecular devices in computing and electronics. This includes experimental and theoretical studies on the electronics and photonics of carbon nanotubes (CNT) and graphene. [ 1 ]
Phaedon Avouris was born in 1945 in Athens , Greece . [ 2 ] In 1968, he received a BSc in chemistry from the Aristotle University in Thessaloniki , Greece. He earned a PhD in physical chemistry from Michigan State University in 1974 and subsequently carried out postdoctoral work at the University of California, Los Angeles . [ 2 ] Avouris was an adjunct research professor at Columbia University , NY in 2003 [ 3 ] and was appointed an Adjunct Research Professor in the ECE Department at the University of Illinois, Urbana-Champaign in 2016. [ 2 ]
Avouris is a member of several academies and scientific societies, is a fellow of a number of professional societies, and his work has been recognized with awards from scientific institutions. | https://en.wikipedia.org/wiki/Phaedon_Avouris |
Phaeocystis globosa virus virophage , or PgVV , or Preplasmiviricota sp. Gezel-14T , [ 1 ] is a polinton -like virus, which are small DNA viruses that are found integrated in protist genomes. Similar to virophages , PgVV requires a helper virus to replicate . Phaeocystis globosa virus virophage has a parasitic relationship with its helper virus species Phaeocystis globosa virus (PgV). They are a species of giant virus that infect algae of the genus Phaeocystis . [ 2 ] [ 3 ] [ 4 ] [ 5 ] | https://en.wikipedia.org/wiki/Phaeocystis_globosa_virus_virophage |
Phaeton (alternatively Phaethon / ˈ f eɪ . ə θ ən / or Phaëton / ˈ f eɪ . ə t ən / ; from Ancient Greek : Φαέθων , romanized : Phaéthōn , pronounced [pʰa.é.tʰɔːn] ) is a hypothetical planet hypothesized by the Titius–Bode law to have existed between the orbits of Mars and Jupiter , the destruction of which supposedly led to the formation of the asteroid belt (including the dwarf planet Ceres ). The hypothetical planet was named for Phaethon , the son of the sun god Helios in Greek mythology , who attempted to drive his father's solar chariot for a day with disastrous results and was ultimately destroyed by Zeus . [ 1 ]
According to the hypothesized Titius–Bode law proposed in the 1700s to explain the spacing of planets in a solar system, a planet may have once existed between Mars and Jupiter. After learning of the regular sequence discovered by the German astronomer and mathematician Johann Daniel Titius , astronomer Johann E. Bode urged a search for the fifth planet corresponding to a gap in the sequence. (1) Ceres , the largest asteroid in the asteroid belt (now considered a dwarf planet ), was serendipitously discovered in 1801 by the Italian Giuseppe Piazzi and found to closely match the "empty" position in Titius' sequence , which led many [ who? ] to believe it to be the "missing planet". However, in 1802 astronomer Heinrich Wilhelm Matthäus Olbers discovered and named the asteroid (2) Pallas , a second object in roughly the same orbit as (1) Ceres.
Olbers proposed that these two discoveries were the fragments of a disrupted planet that had formerly orbited the Sun, [ 2 ] and predicted that more of these pieces would be found. The discovery of the asteroids (3) Juno by Karl Ludwig Harding and (4) Vesta by Olbers buttressed his hypothesis. In 1823, German linguist and retired teacher Johann Gottlieb Radlof [ de ] called Olbers' destroyed planet Phaëthon , linking it to the Greek myths and legends about Phaethon and others. [ 3 ]
In 1927, Franz Xaver Kugler wrote a short book titled Sibyllinischer Sternkampf und Phaëthon in naturgeschichtlicher Beleuchtung (The Sybilline Battle of the Stars and Phaeton Seen as Natural History). [ 4 ] [ 5 ] The central idea in Kugler's book is that the myth of Phaethon was based on a real event: Making use of ancient sources, Kugler argued that Phaeton had been a very bright celestial object that appeared around 1500 BC which fell to Earth not long afterwards as a shower of large meteorites, causing catastrophic fires and floods in Africa and elsewhere. [ citation needed ]
Hypotheses regarding the formation of the asteroid belt from the destruction of a hypothetical fifth planet are today collectively referred to as " the disruption theory ". These hypotheses state that there was once a major planetary member of the Solar System circulating in the present gap between Mars and Jupiter, which was destroyed by one or more of the following hypothetical processes: [ citation needed ]
In 1953, Soviet Russian astronomer Ivan I. Putilin suggested that Phaeton was destroyed due to centrifugal forces , giving it a diameter of approximately 6,880 kilometres (4,280 mi) (slightly larger than Mars' diameter of 6,779 kilometres [4,212 mi]) and a rotational speed of 2.6 hours. Eventually, the planet became so distorted that parts of it near its equator were spun off into space. Outgassing of gases once stored in Phaeton's interior caused multiple explosions, sending material into space and forming asteroid families . However, his hypothesis was not widely accepted. Two years later in 1955, Odesan astronomer Konstantin N. Savchenko suggested that Ceres, Pallas, Juno, and Vesta were not fragments of Phaeton, but rather its former moons. Phaeton had an additional fifth satellite, assumed to be the size of Ceres, orbiting near the planet's Hill sphere , and thus more subject to gravitational perturbations from Jupiter. As a result, the fifth satellite became tidally detached and orbited the Sun for millions of years afterward, making periodic close misses with Phaeton that slowly increased its velocity. Once the escaped satellite re-entered Phaeton's Hill sphere, it collided with the planet at high speed, shattering it while Ceres, Pallas, Juno, and Vesta assumed heliocentric orbits. Simulations showed that for such a Ceres-sized body to shatter Phaeton, it would need to be travelling at nearly 20 kilometres per second (12 mi/s). [ 6 ]
The disrupted planet hypothesis was also supported by French–Italian mathematician and astronomer Joseph-Louis Lagrange in 1814; [ 7 ] Canadian geologist Reginald Daly in 1943; [ 8 ] American geochemists Harrison Brown and Clair Patterson in 1948; [ 9 ] Soviet academics Alexander Zavaritskiy in 1948, Vasily Fesenkov in 1950 (who later rejected his own model) and Otto Schmidt (died 1956); [ 6 ] British–Canadian astronomer Michael Ovenden in 1972–1973; [ 10 ] [ 11 ] and American astronomer Donald Menzel (1901–1976) in 1978. [ 12 ] Ovenden suggested that the planet be named " Krypton " after the destroyed native world of Superman , as well as believing it to have been a gas giant roughly eighty-five to ninety Earth masses in mass and nearly the size of Saturn . [ 10 ]
Today, the Phaeton hypothesis has been superseded by the accretion model . [ 13 ] Most astronomers today believe that the asteroids in the main belt are remnants of the protoplanetary disk that never formed a planet and that in this region the amalgamation of protoplanets into a planet was prevented by the disruptive gravitational perturbations of Jupiter during the formative period of the Solar System . [ citation needed ]
Some scientists and non-scientists continue to advocate for the existence and destruction of a Phaeton-like planet.
Zecharia Sitchin suggested that the goddess known to the Sumerians as Tiamat in fact relates to a planet that was destroyed by a rogue planet known as Nibiru , creating both Earth and the asteroid belt. [ 14 ] His work is widely regarded as pseudoscience . [ 15 ]
The astronomer and author Tom Van Flandern held that Phaeton (which he called "Planet V", with V representing the Roman numeral for five and not to be confused with the other postulated former fifth planet not attributed to the formation of the asteroid belt ) exploded through some internal mechanism. In his "Exploded Planet Hypothesis 2000", he lists possible reasons for its explosion: a runaway nuclear reaction of uranium in its core, a change of state as the planet cooled down creating a density phase change, or through continual absorption of heat in the core from gravitons . Van Flandern even suggested that Mars itself may have been a moon of Planet V, due to its craters hinting at exposure to meteorite storms and its relatively low density compared to the other inner planets. [ 16 ] [ 17 ] [ 18 ]
In 1972, Soyuzmultfilm studios produced an animated short film titled Phaeton: The Son of Sun ( Russian : Фаэтон – Сын Солнца ), directed by Vasiliy Livanov , in which the asteroid belt is portrayed as the remains of a planet. The film also has numerous references to ancient astronauts . [ 19 ] [ 20 ]
The hypothetical former fifth planet has been referenced in fiction since at least the late 1800s. [ 21 ] [ 22 ] In science fiction , the planet is often called "Bodia" after Johann Elert Bode . [ 22 ] [ 23 ] By the pulp era of science fiction , Bodia was a recurring theme. In these stories it is typically similar to Earth and inhabited by humans, often advanced humans and occasionally the ancestors of humans on Earth. [ 24 ] [ 23 ] [ 25 ] [ 26 ] Following the invention of the atomic bomb in 1945, stories of this planetary destruction became increasingly common, encouraged by the advent of a plausible-seeming means of disintegration. [ 27 ] Several works of the 1950s used the idea to warn of the dangers of nuclear weapons. [ 21 ] [ 22 ] [ 28 ] The concept has since largely been relegated to deliberately retro works. [ 29 ]
| https://en.wikipedia.org/wiki/Phaeton_(hypothetical_planet) |
Phage-assisted continuous evolution ( PACE ) is a phage -based technique for the automated directed evolution of proteins. It relies on relating the desired activity of a target protein with the fitness of an infectious bacteriophage which carries the protein's corresponding gene. Proteins with greater desired activity hence confer greater infectivity to their carrier phage. More infectious phage propagate more effectively, selecting for advantageous mutations. Genetic variation is generated using error-prone polymerases on the phage vectors , and over time the protein accumulates beneficial mutations. This technique is notable for performing hundreds of rounds of selection with minimal human intervention.
The central component of PACE is a fixed-volume vessel known as the “lagoon”. The lagoon contains M13 bacteriophage vectors carrying the gene of interest (known as the selection plasmid, or SP), as well as host E. coli cells that allow the phage to replicate. The lagoon is constantly diluted via the addition and draining of liquid media containing E. coli cells. The liquid flow rate is set such that the dilution rate is faster than the rate of E. coli reproduction but slower than the rate of phage reproduction. Hence, a fresh supply of E. coli cells is constantly present in the lagoon, but phage can only be retained via sufficiently fast replication. [ 1 ]
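The constraint on the flow rate can be summarised with a small numerical sketch; the generation times and dilution rate below are illustrative assumptions, not values taken from the PACE literature.

```python
import math

bacterial_doubling_time_min = 30.0   # assumed E. coli generation time
phage_generation_time_min = 10.0     # assumed time for one phage infection cycle

bacterial_growth_rate = math.log(2) / bacterial_doubling_time_min   # per minute
phage_growth_rate = math.log(2) / phage_generation_time_min         # per minute

dilution_rate = 0.04   # lagoon volumes per minute, chosen between the two rates

# Host cells are diluted out faster than they can divide (so fresh cells must
# come from the inflow), while phage whose effective replication rate exceeds
# the dilution rate are retained -- this is the selection PACE enforces.
print(bacterial_growth_rate < dilution_rate < phage_growth_rate)    # True
```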
Phage replication requires E. coli infection, which, for M13 phage, relies on protein III (pIII). [ 2 ] When using PACE, the phage vectors lack the gene to produce pIII. Instead, the production of pIII is tied to the activity of the protein of interest via a mechanism that varies per use case and oftentimes involves an extra plasmid containing the pIII-expressing gene III (gIII), known as the accessory plasmid, or AP. Notably, production of infectious phage scales with the production of pIII. [ 3 ] Hence, the better the activity of the protein, the higher the rate of pIII production, and the more infectious phage are generated for that particular gene.
Using error-prone polymerases (encoded on the mutagenesis plasmid, or MP), genetic variation is introduced into the protein gene portion of the phage vectors. Due to the selective pressures applied by the constant draining of the lagoon, only phages that can replicate fast enough can be retained in the lagoon, so over time beneficial mutations accumulate in phage replicating in the lagoon. In this manner, rounds of evolution are continuously performed, allowing hundreds of rounds to elapse with little human intervention. [ 1 ]
In the initial paper pioneering this technique, T7 RNA polymerases were evolved to recognize different promoters , such as the T3 or SP6 promoters. [ 4 ] This was done by making the target promoter the sole promoter for gIII. [ 5 ] Hence, mutant polymerases with greater specificity for the desired promoter caused greater pIII production. This resulted in polymerases with roughly 3-4 orders of magnitude greater activity on the target promoter than the original polymerase. [ 4 ]
Proteases have been evolved to cut different peptides using PACE. In these systems, the desired protease cut site is used to link a T7 RNA polymerase and a T7 lysozyme . The T7 lysozyme prevents the T7 polymerase from transcribing gIII. When the peptide linker is cleaved, the T7 polymerase is activated, allowing for the transcription of the pIII gene. This method was used to create a TEV protease with a significantly different peptide substrate. [ 6 ] [ 7 ]
Using PACE, aminoacyl-tRNA synthetases (aaRSs) were evolved for noncanonical amino acids as well. Activity of an aaRS is linked to pIII production by the addition of a TAG stop codon in the middle of gIII. Synthetases that aminoacylate the TAG codon's suppressor tRNA prevent stop codon activity, allowing for production of functional pIII. Using this system, aaRSs were evolved that utilize the non-canonical amino acids p -nitro-phenylalanine, iodophenylalanine, and Boc-lysine. [ 8 ]
Protein-protein interactions have been evolved using PACE as well. Under this scheme, the target protein is fused with a DNA binding protein, which binds to a target sequence placed upstream of the gIII promoter. The protein undergoing evolution is fused with an RNA polymerase. The better the protein-protein interaction, the more transcription of pIII occurs, allowing the evolution of the protein-protein interaction under PACE conditions. [ 6 ] This method was used to evolve Bacillus thuringiensis endotoxin variants that can overcome insect toxin resistance. [ 6 ] [ 9 ]
PACE was used to evolve APOBEC1 for greater soluble expression. APOBEC1 is a cytidine deaminase that has found use in base editors to catalyze the single nucleotide edit C-->T. [ 10 ] In E. coli , APOBEC1 usually falls out of solution into the insoluble fraction. [ 11 ] To evolve APOBEC1 for better soluble expression, the N-terminus of a T7 polymerase was fused to APOBEC1, with the remaining portion of the polymerase separately expressed. The T7 polymerase can only function when the N-terminus portion can bind to the rest of the polymerase. Since APOBEC1 must be properly folded for the N-terminus portion to be exposed properly, T7 polymerase activity is correlated to APOBEC1 folding. As follows, pIII transcription and production is linked with APOBEC1 soluble expression via the T7 polymerase. Using this approach, the soluble expression of APOBEC1 was increased by 4 fold with no change in function. [ 7 ] [ 9 ]
PACE was also used to create a more catalytically active deoxyadenosine deaminase. Deoxyadenosine deaminase is used in base editors to perform the single nucleotide edit A-->G. This was done by placing adenosine -containing stop codons in the gene for T7 polymerase. If the base editor is able to correct the error, functional T7 polymerase is produced, allowing production of pIII. Using this system, they evolved a deoxyadenosine deaminase with 590-fold greater activity than the wild type. [ 12 ] | https://en.wikipedia.org/wiki/Phage-assisted_continuous_evolution |
Phage-ligand technology is a method for detecting, binding and removing bacteria and bacterial toxins using highly specific bacteriophage-derived proteins. [ 1 ]
Host recognition by bacteriophages occurs via bacteria-binding proteins that have strong binding affinities for specific protein or carbohydrate structures on the surface of the bacterial host. At the end of the infection cycle the bacteria-lysing endolysin is synthesized and degrades the bacterial peptidoglycan cell wall , resulting in lysis (and therefore killing) of the bacterial cell.
Bacteriophage derived proteins are used for detection and removal of bacteria [ 2 ] [ 3 ] and bacterial components (especially endotoxin contaminations ) in pharmaceutical and biological products, human diagnostics, food, [ 4 ] [ 5 ] and decolonization of bacteria causing nosocomial infections (e.g. MRSA ).
Protein modifications allow the biotechnological adaption to specific requirements. | https://en.wikipedia.org/wiki/Phage-ligand_technology |
Phage display is a laboratory technique for the study of protein–protein , protein – peptide , and protein– DNA interactions that uses bacteriophages ( viruses that infect bacteria ) to connect proteins with the genetic information that encodes them. [ 1 ] In this technique, a gene encoding a protein of interest is inserted into a phage coat protein gene, causing the phage to "display" the protein on its outside while containing the gene for the protein on its inside, resulting in a connection between genotype and phenotype . The proteins that the phages are displaying can then be screened against other proteins, peptides or DNA sequences, in order to detect interaction between the displayed protein and those of other molecules. In this way, large libraries of proteins can be screened and amplified in a process called in vitro selection, which is analogous to natural selection .
The most common bacteriophages used in phage display are M13 and fd filamentous phage , [ 2 ] [ 3 ] though T4 , [ 4 ] T7 , and λ phage have also been used.
Phage display was first described by George P. Smith in 1985, when he demonstrated the display of peptides on filamentous phage (long, thin viruses that infect bacteria) by fusing the virus's capsid protein to one peptide out of a collection of peptide sequences. [ 1 ] This displayed the different peptides on the outer surfaces of the collection of viral clones, where the screening step of the process isolated the peptides with the highest binding affinity. In 1988, Stephen Parmley and George Smith described biopanning for affinity selection and demonstrated that recursive rounds of selection could enrich for clones present at 1 in a billion or less. [ 5 ] In 1990, Jamie Scott and George Smith described creation of large random peptide libraries displayed on filamentous phage. [ 6 ] Phage display technology was further developed and improved by groups at the Laboratory of Molecular Biology with Greg Winter and John McCafferty , The Scripps Research Institute with Richard Lerner and Carlos Barbas and the German Cancer Research Center with Frank Breitling and Stefan Dübel for display of proteins such as antibodies for therapeutic protein engineering . Smith and Winter were awarded a half share of the 2018 Nobel Prize in chemistry for their contribution to developing phage display. [ 7 ] A patent by George Pieczenik claiming priority from 1985 also describes the generation of peptide libraries. [ 8 ]
Like the two-hybrid system , phage display is used for the high-throughput screening of protein interactions. In the case of M13 filamentous phage display, the DNA encoding the protein or peptide of interest is ligated into the pIII or pVIII gene, encoding either the minor or major coat protein , respectively. Multiple cloning sites are sometimes used to ensure that the fragments are inserted in all three possible reading frames so that the cDNA fragment is translated in the proper frame. The phage gene and insert DNA hybrid is then inserted (a process known as " transduction ") into E. coli bacterial cells such as TG1, SS320, ER2738, or XL1-Blue E. coli . If a " phagemid " vector is used (a simplified display construct vector) phage particles will not be released from the E. coli cells until they are infected with helper phage , which enables packaging of the phage DNA and assembly of the mature virions with the relevant protein fragment as part of their outer coat on either the minor (pIII) or major (pVIII) coat protein.
By immobilizing a relevant DNA or protein target(s) to the surface of a microtiter plate well, a phage that displays a protein that binds to one of those targets on its surface will remain while others are removed by washing. Those that remain can be eluted , used to produce more phage (by bacterial infection with helper phage) and to produce a phage mixture that is enriched with relevant (i.e. binding) phage. The repeated cycling of these steps is referred to as 'panning' , in reference to the enrichment of a sample of gold by removing undesirable materials.
Phage eluted in the final step can be used to infect a suitable bacterial host, from which the phagemids can be collected and the relevant DNA sequence excised and sequenced to identify the relevant, interacting proteins or protein fragments. [ citation needed ]
The use of a helper phage can be eliminated by using 'bacterial packaging cell line' technology. [ 9 ]
Elution can be done combining low-pH elution buffer with sonification, which, in addition to loosening the peptide-target interaction, also serves to detach the target molecule from the immobilization surface. This ultrasound -based method enables single-step selection of a high-affinity peptide. [ 10 ]
Applications of phage display technology include determination of interaction partners of a protein (which would be used as the immobilised phage "bait" with a DNA library consisting of all coding sequences of a cell, tissue or organism) so that the function or the mechanism of the function of that protein may be determined. [ 11 ] Phage display is also a widely used method for in vitro protein evolution (also called protein engineering ). As such, phage display is a useful tool in drug discovery . It is used for finding new ligands (enzyme inhibitors, receptor agonists and antagonists) to target proteins. [ 12 ] [ 13 ] [ 14 ] The technique is also used to determine tumour antigens (for use in diagnosis and therapeutic targeting) [ 15 ] and in searching for protein-DNA interactions [ 16 ] using specially-constructed DNA libraries with randomised segments. Recently, phage display has also been used in the context of cancer treatments - such as the adoptive cell transfer approach. [ 17 ] In these cases, phage display is used to create and select synthetic antibodies that target tumour surface proteins. [ 17 ] These are made into synthetic receptors for T-Cells collected from the patient that are used to combat the disease. [ 18 ]
Competing methods for in vitro protein evolution include yeast display , bacterial display , ribosome display , and mRNA display . [ citation needed ]
The invention of antibody phage display revolutionised antibody drug discovery. Initial work was done by laboratories at the MRC Laboratory of Molecular Biology ( Greg Winter and John McCafferty ), the Scripps Research Institute (Richard Lerner and Carlos F. Barbas) and the German Cancer Research Centre (Frank Breitling and Stefan Dübel). [ 19 ] [ 20 ] [ 21 ] In 1991, The Scripps group reported the first display and selection of human antibodies on phage. [ 22 ] This initial study described the rapid isolation of human antibody Fab that bound tetanus toxin and the method was then extended to rapidly clone human anti-HIV-1 antibodies for vaccine design and therapy. [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ]
Phage display of antibody libraries has become a powerful method for both studying the immune response as well as a method to rapidly select and evolve human antibodies for therapy. Antibody phage display was later used by Carlos F. Barbas at The Scripps Research Institute to create synthetic human antibody libraries, a principle first patented in 1990 by Breitling and coworkers (Patent CA 2035384), thereby allowing human antibodies to be created in vitro from synthetic diversity elements. [ 28 ] [ 29 ] [ 30 ] [ 31 ]
Antibody libraries displaying millions of different antibodies on phage are often used in the pharmaceutical industry to isolate highly specific therapeutic antibody leads, for development into antibody drugs primarily as anti-cancer or anti-inflammatory therapeutics. One of the most successful was adalimumab , discovered by Cambridge Antibody Technology as D2E7 and developed and marketed by Abbott Laboratories . Adalimumab, an antibody to TNF alpha , was the world's first fully human antibody [ 32 ] to achieve annual sales exceeding $1bn. [ 33 ]
In outline, a phage display screen to identify polypeptides that bind with high affinity to a desired target protein or DNA sequence follows the panning procedure described above: a library of phage displaying candidate polypeptides is produced; the target is immobilised; the phage library is incubated with the target; unbound phage are washed away; bound phage are eluted and re-amplified in bacteria; the cycle is repeated for several rounds of enrichment; and the enriched clones are finally sequenced to identify the binding polypeptides. [ citation needed ]
pIII is the protein that determines the infectivity of the virion. pIII is composed of three domains (N1, N2 and CT) connected by glycine-rich linkers. [ 34 ] The N2 domain binds to the F pilus during virion infection freeing the N1 domain which then interacts with a TolA protein on the surface of the bacterium. [ 34 ] Insertions within this protein are usually added in position 249 (within a linker region between CT and N2), position 198 (within the N2 domain) and at the N-terminus (inserted between the N-terminal secretion sequence and the N-terminus of pIII). [ 34 ] However, when using the BamHI site located at position 198 one must be careful of the unpaired Cysteine residue (C201) that could cause problems during phage display if one is using a non-truncated version of pIII. [ 34 ]
An advantage of using pIII rather than pVIII is that pIII allows for monovalent display when using a phagemid (plasmid derived from Ff phages ) combined with a helper phage. Moreover, pIII allows for the insertion of larger protein sequences (>100 amino acids) [ 35 ] and is more tolerant to it than pVIII. However, using pIII as the fusion partner can lead to a decrease in phage infectivity leading to problems such as selection bias caused by difference in phage growth rate [ 36 ] or even worse, the phage's inability to infect its host. [ 34 ] Loss of phage infectivity can be avoided by using a phagemid plasmid and a helper phage so that the resultant phage contains both wild type and fusion pIII. [ 34 ]
cDNA has also been analyzed using pIII via a two complementary leucine zippers system, [ 37 ] Direct Interaction Rescue [ 38 ] or by adding an 8-10 amino acid linker between the cDNA and pIII at the C-terminus. [ 39 ]
pVIII is the main coat protein of Ff phages. Peptides are usually fused to the N-terminus of pVIII. [ 34 ] Usually peptides that can be fused to pVIII are 6-8 amino acids long. [ 34 ] The size restriction seems to have less to do with structural impediment caused by the added section [ 40 ] and more to do with the size exclusion caused by pIV during coat protein export. [ 40 ] Since there are around 2700 copies of the protein on a typical phage, it is more likely that the protein of interest will be expressed polyvalently even if a phagemid is used. [ 34 ] This makes the use of this protein unfavorable for the discovery of high affinity binding partners. [ 34 ]
To overcome the size problem of pVIII, artificial coat proteins have been designed. [ 41 ] An example is Weiss and Sidhu's inverted artificial coat protein (ACP) which allows the display of large proteins at the C-terminus. [ 41 ] The ACP's could display a protein of 20kDa, however, only at low levels (mostly only monovalently). [ 41 ]
pVI has been widely used for the display of cDNA libraries. [ 34 ] The display of cDNA libraries via phage display is an attractive alternative to the yeast-2-hybrid method for the discovery of interacting proteins and peptides due to its high throughput capability. [ 34 ] pVI has been used preferentially to pVIII and pIII for the expression of cDNA libraries because one can add the protein of interest to the C-terminus of pVI without greatly affecting pVI's role in phage assembly. This means that the stop codon in the cDNA is no longer an issue. [ 42 ] However, phage display of cDNA is always limited by the inability of most prokaryotes in producing post-translational modifications present in eukaryotic cells or by the misfolding of multi-domain proteins.
While pVI has been useful for the analysis of cDNA libraries, pIII and pVIII remain the most utilized coat proteins for phage display. [ 34 ]
In an experiment in 1995, display of glutathione S-transferase was attempted on both pVII and pIX and failed. [ 43 ] However, phage display of this protein was completed successfully after the addition of a periplasmic signal sequence (pelB or ompA) on the N-terminus. [ 44 ] In a recent study, it has been shown that AviTag, FLAG and His could be displayed on pVII without the need for a signal sequence. Single chain Fvs (scFv) and single chain T cell receptors (scTCR) were then expressed both with and without the signal sequence. [ 45 ]
PelB (an amino acid signal sequence that targets the protein to the periplasm where a signal peptidase then cleaves off PelB) improved the phage display level when compared to pVII and pIX fusions without the signal sequence. However, this led to the incorporation of more helper phage genomes rather than phagemid genomes. In all cases, phage display levels were lower than using pIII fusion. However, lower display might be more favorable for the selection of binders due to lower display being closer to true monovalent display. In five out of six occasions, pVII and pIX fusions without pelB was more efficient than pIII fusions in affinity selection assays. The paper even goes on to state that pVII and pIX display platforms may outperform pIII in the long run. [ 45 ]
The use of pVII and pIX instead of pIII might also be an advantage because virion rescue may be undertaken without breaking the virion-antigen bond if the pIII used is wild type. Instead, one could cleave in a section between the bead and the antigen to elute. Since the pIII is intact it does not matter whether the antigen remains bound to the phage. [ 45 ]
The issue of using Ff phages for phage display is that they require the protein of interest to be translocated across the bacterial inner membrane before they are assembled into the phage. [ 46 ] Some proteins cannot undergo this process and therefore cannot be displayed on the surface of Ff phages. In these cases, T7 phage display is used instead. [ 46 ] In T7 phage display, the protein to be displayed is attached to the C-terminus of the gene 10 capsid protein of T7. [ 46 ]
The disadvantage of using T7 is that the size of the protein that can be expressed on the surface is limited to shorter peptides because large changes to the T7 genome cannot be accommodated as they are in M13, where the phage just makes its coat longer to fit the larger genome within it. However, it can be useful for the production of a large protein library for scFv selection, where the scFv is expressed on an M13 phage and the antigens are expressed on the surface of the T7 phage. [ 47 ]
Databases and computational tools for mimotopes have been an important part of phage display study. [ 48 ] Databases, [ 49 ] programs and web servers [ 50 ] have been widely used to exclude target-unrelated peptides, [ 51 ] characterize small molecules-protein interactions and map protein-protein interactions. Users can use three dimensional structure of a protein and the peptides selected from phage display experiment to map conformational epitopes. Some of the fast and efficient computational methods are available online. [ 50 ]
| https://en.wikipedia.org/wiki/Phage_display |
Bacteriophages ( phages ), potentially the most numerous "organisms" on Earth , are the viruses of bacteria (more generally, of prokaryotes [ 1 ] ). Phage ecology is the study of the interaction of bacteriophages with their environments . [ 2 ]
Phages are obligate intracellular parasites meaning that they are able to reproduce only while infecting bacteria. Phages therefore are found only within environments that contain bacteria. Most environments contain bacteria, including our own bodies (called normal flora ). Often these bacteria are found in large numbers. As a consequence, phages are found almost everywhere. [ citation needed ]
As a rule of thumb , many phage biologists expect that phage population densities will exceed bacterial densities by a ratio of 10-to-1 or more (VBR or virus-to-bacterium ratio; see [ 3 ] for a summary of actual data). As there exist estimates of bacterial numbers on Earth of approximately 10 30 , [ 4 ] there consequently is an expectation that 10 31 or more individual virus (mostly phage [ 5 ] ) particles exist [1] , making phages the most numerous category of " organisms " on our planet.
Bacteria (along with archaea ) appear to be highly diverse and there possibly are millions of species. [ 6 ] Phage-ecological interactions therefore are quantitatively vast: huge numbers of interactions. Phage-ecological interactions are also qualitatively diverse: There are huge numbers of environment types, bacterial-host types, [ 7 ] and also individual phage types [ 8 ]
The study of phage ecology reflects established scientific disciplines in ecological studies in scope, the most obvious being general ecology . Accordingly, phage ecology is treated under the following headings: "organismal" ecology , population ecology , community ecology , and ecosystem ecology . Phage ecology also may be considered (though mostly less formally explored) from the perspectives of phage behavioral ecology , evolutionary ecology , functional ecology , landscape ecology , mathematical ecology, molecular ecology , physiological ecology (or ecophysiology), and spatial ecology . Phage ecology additionally draws (extensively) from microbiology , particularly environmental microbiology , but also from an enormous catalog (90 years) of study of phage and phage-bacterial interactions in terms of their physiology and, especially, their molecular biology . [ citation needed ]
Phage "organismal" ecology is primarily the study of the evolutionary ecological impact of phage growth parameters:
Another way of envisioning phage "organismal" ecology is that it is the study of phage adaptations that contribute to phage survival and transmission to new hosts or environments. Of the phage ecology disciplines, phage "organismal" ecology is the most closely aligned with the classical molecular and molecular-genetic analyses of bacteriophage.
From the perspective of ecological subdisciplines , we can also consider phage behavioral ecology , functional ecology , and physiological ecology under the heading of phage "organismal" ecology. However, as noted, these subdisciplines are not as well developed as more general considerations of phage "organismal" ecology. Phage growth parameters often evolve over the course of phage experimental adaptation studies.
In the mid 1910s, when phage were first discovered, the concept of phage was very much a whole-culture phenomenon (like much of microbiology [ 11 ] ), where various types of bacterial cultures (on solid media , in broth ) were visibly cleared by phage action. Though from the start there was some sense, especially by Félix d'Hérelle , that phage consisted of individual " organisms ", in fact it was not until the late 1930s through the 1940s that phages were studied, with rigor, as individuals, e.g., by electron microscopy and single-step growth experiments. [ 12 ] Note, though, that for practical reasons much of "organismal" phage study is of their properties in bulk culture (many phage) rather than the properties of individual phage virions or individual infections. [ citation needed ]
This somewhat whole-organismal view of phage biology saw its heyday during the 1940s and 1950s, before giving way to much more biochemical , molecular genetic , and molecular biological analyses of phages, as seen during the 1960s and onward. This shift, paralleled in much of the rest of microbiology [2] , represented a retreat from a much more ecological view of phages (first as bacterial killers, and then as organisms unto themselves). However, the organismal view of phage biology lives on as a foundation of phage ecological understanding. Indeed, it represents a key thread that ties together the ecological thinking on phage ecology with the more "modern" considerations of phage as molecular model systems . [ citation needed ]
The basic experimental toolkit of phage "organismal" ecology consists of the single-step growth (or one-step growth; [ 12 ] ) experiment and the phage adsorption curve. [ 13 ] Single-step growth is a means of determining the phage latent period , which is approximately equivalent (depending on how it is defined) to the phage period of infection. Single-step growth experiments also are employed to determine a phage's burst size , which is the number of phage (on average) that are produced per phage-infected bacterium. [ citation needed ]
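As a minimal sketch of how the latent period and burst size are read off single-step growth data, the toy example below uses invented titre values (real experiments determine free-phage titres by plaque assay); the numbers and the simple threshold used here are illustrative only:

```python
# Toy single-step growth data: free-phage titre (PFU/mL) sampled over time.
# All values are invented for illustration.
timepoints_min = [0, 10, 20, 30, 40, 50, 60]
titre_pfu_per_ml = [1e5, 1e5, 1e5, 1e5, 5e6, 1e7, 1e7]
infected_bacteria_per_ml = 1e5   # assumed initial density of infected cells

# Latent period: time at which the titre first rises above its initial plateau.
latent_period_min = next(t for t, p in zip(timepoints_min, titre_pfu_per_ml)
                         if p > titre_pfu_per_ml[0])

# Burst size: new phage released per infected bacterium once the rise plateaus.
burst_size = (titre_pfu_per_ml[-1] - titre_pfu_per_ml[0]) / infected_bacteria_per_ml

print(f"Latent period ~ {latent_period_min} min, burst size ~ {burst_size:.0f}")
```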
The adsorption curve is obtained by measuring the rate at which phage virion particles (see Virion#Structure ) attach to bacteria. This is usually done by separating free phage from phage-infected bacteria in some manner, so that either the loss of free (not currently infecting) phage or the gain of infected bacteria may be measured over time. [ citation needed ]
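A common way to summarise such an adsorption curve is to fit the decline of free phage to first-order kinetics, P(t) = P0 · exp(−k · N · t), where N is the bacterial density and k the adsorption rate constant. The sketch below uses invented counts and assumes bacteria are in constant excess; it illustrates the standard analysis rather than data from any particular study:

```python
import math

# Toy adsorption-curve analysis with invented free-phage counts.
# Model: P(t) = P0 * exp(-k * N * t), with bacterial density N in excess.
bacterial_density = 1e8                            # cells/mL, assumed constant
times_min = [0, 2, 4, 6, 8]
free_phage = [1.0e6, 6.0e5, 3.6e5, 2.2e5, 1.3e5]   # PFU/mL, illustrative

# Fit k from the slope of ln(P/P0) versus time (slope = -k * N).
ln_ratio = [math.log(p / free_phage[0]) for p in free_phage]
mean_t = sum(times_min) / len(times_min)
mean_y = sum(ln_ratio) / len(ln_ratio)
slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(times_min, ln_ratio))
         / sum((t - mean_t) ** 2 for t in times_min))
k = -slope / bacterial_density
print(f"Adsorption rate constant k ~ {k:.2e} mL/min")
```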
A population is a group of individuals which either do or can interbreed or, if incapable of interbreeding, then are recently derived from a single individual (a clonal population ). Population ecology considers characteristics that are apparent in populations of individuals but either are not apparent or are much less apparent among individuals. These characteristics include so-called intraspecific interactions, that is between individuals making up the same population, and can include competition as well as cooperation . Competition can be either in terms of rates of population growth (as seen especially at lower population densities in resource-rich environments) or in terms of retention of population sizes (seen especially at higher population densities where individuals are directly competing over limited resources ). Respectively, these are population-density independent and dependent effects. [ citation needed ]
Phage population ecology considers issues of rates of phage population growth, but also phage-phage interactions as can occur when two or more phage adsorb to the same individual bacterium.
A community consists of all of the biological individuals found within a given environment (more formally, within an ecosystem ), particularly when more than one species is present. Community ecology studies those characteristics of communities that either are not apparent or which are much less apparent if a community consists of only a single population . Community ecology thus deals with interspecific interactions. Interspecific interactions, like intraspecific interactions, can range from cooperative to competitive but also to quite antagonistic (as are seen, for example, with predator-prey interactions ). An important consequence of these interactions is coevolution .
The interaction of phage with bacteria is the primary concern of phage community ecologists. Bacteria have developed mechanisms that prevent phages from having an effect on them, which has led to an evolutionary arms race between the phages and their host bacteria. [ 14 ] Bacterial resistance to phages puts pressure on the phages to evolve more effective ways of infecting the bacteria. The Red Queen hypothesis describes this relationship, as the organisms must constantly adapt and evolve in order to survive. [ 15 ] This relationship is important to understand as phages are now being used for practical and medicinal purposes.
Bacteria have developed multiple defense mechanisms to fight off the effects of bacteriophages. [ 16 ] In experiments, the degree of resistance can be estimated from how much of a plate (generally agar seeded with bacteria and infected with phages) ends up clear: the clearer the plate, the less resistant the bacteria, since more of them have been lysed . [ 17 ] The most common of these defense mechanisms is called the restriction-modification system (RM system). In this system, foreign DNA trying to enter the bacterial host is restricted by endonucleases that recognize specific base sequences within the DNA, while the DNA of the cell is protected from restriction by methylases . [ 16 ] RM systems have evolved to keep up with the ever-changing bacteria and phage. In general, these RM types differ in the nucleotide sequences that they recognize. [ 18 ] However, there is an occasional slip where the endonuclease misses the DNA sequence of the phage, and the phage DNA is able to enter the cell anyway, becoming methylated and protected against the endonuclease. This accident is what can spur the evolution of the RM system. Phages can acquire or use the modification enzyme from the host cell to protect their own DNA, or sometimes they carry proteins that dismantle the endonuclease that is meant to restrict the phage DNA. [ 16 ] Another option is for the phage to incorporate unusual bases into its DNA, so that the restriction enzyme no longer recognizes its target sequence.
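The recognition logic of a restriction-modification system can be sketched as a toy model. The EcoRI site GAATTC is used here only as a familiar example; the sequences and methylation positions are invented, and real RM systems are far more elaborate:

```python
# Toy model of restriction-modification recognition, as described above.
# EcoRI's GAATTC site is used as a familiar example; sequences and the set of
# methylated site positions are invented for illustration.
SITE = "GAATTC"

def is_cut(dna, methylated_positions):
    """Return True if any unmethylated recognition site is found (DNA restricted)."""
    for i in range(len(dna) - len(SITE) + 1):
        if dna[i:i + len(SITE)] == SITE and i not in methylated_positions:
            return True
    return False

host_dna = "ATGAATTCGGCT"    # site at index 2, protected by methylation
phage_dna = "CCGAATTCTTAA"   # site at index 2, unprotected

print(is_cut(host_dna, {2}))       # False: host DNA is methylated, so spared
print(is_cut(phage_dna, set()))    # True: incoming phage DNA is restricted
```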
Another mechanism employed by bacteria is referred to as CRISPR , which stands for "clustered regularly interspaced short palindromic repeats". In this system, bacteria acquire immunity to phages by incorporating spacers of DNA that are identical to sequences from phage DNA. Some phages have been found to be immune to this mechanism as well: in one way or another, these phages have lost or altered the sequence that the spacer would target.
A third way that bacteria have managed to escape the effects of bacteriophages is abortive infection . This is a last-resort option, used when the host cell has already been infected by the phage. This method is not ideal for the host cell, as it still leads to the cell's death. The redeeming feature of this mechanism is that it interferes with the phage's processes and prevents the phage from moving on to infect other cells. [ 16 ]
In addition to the above-mentioned strategies, a growing arsenal of anti-phage immune systems has been described and quantified in bacteria. [ 19 ]
Phages are also capable of interacting with species other than bacteria, such as through phage-encoded exotoxin interaction with animals . [ 20 ] Phage therapy is an example of applied phage community ecology. [ citation needed ]
An ecosystem consists of both the biotic and abiotic components of an environment. Abiotic entities are not alive, so an ecosystem essentially is a community combined with the non-living environment within which that community exists. Ecosystem ecology naturally differs from community ecology in terms of the impact of the community on these abiotic entities, and vice versa . In practice, the portion of the abiotic environment of most concern to ecosystem ecologists is inorganic nutrients and energy .
Phages impact the movement of nutrients and energy within ecosystems primarily by lysing bacteria. Phages can also impact abiotic factors via the encoding of exotoxins (a subset of which are capable of solubilizing the biological tissues of living animals [3] ). Phage ecosystem ecologists are primarily concerned with the phage impact on the global carbon cycle , especially within the context of a phenomenon known as the microbial loop . | https://en.wikipedia.org/wiki/Phage_ecology |
Phage typing is a phenotypic method that uses bacteriophages ("phages" for short) for detecting and identifying single strains of bacteria . [ 1 ] Phages are viruses that infect bacteria and may lead to bacterial cell lysis . [ 2 ] The bacterial strain is assigned a type based on its lysis pattern. [ 3 ] Phage typing was used to trace the source of infectious outbreaks throughout the 20th century, but it has been replaced by genotypic methods such as whole genome sequencing for epidemiological characterization. [ 1 ]
Phage typing is based on the specific binding of phages to antigens and receptors on the surface of bacteria and the resulting bacterial lysis or lack thereof. [ 4 ] The binding process is known as adsorption. [ 5 ] Once a phage adsorbs to the surface of a bacterium, it may undergo either the lytic cycle or the lysogenic cycle. [ 6 ]
Virulent phages enter the lytic cycle, where they replicate and lyse the bacterial cell. [ 7 ] Virulent phages can differentiate between different species of bacteria through their specific lytic action. [ 8 ] Lysis will only occur if the virulent phage adsorbs to the bacterial surface, which confers species specificity on the phages. [ 5 ]
Temperate phages enter the lysogenic cycle and do not immediately lyse the cell. [ 7 ] The phage is instead integrated into the bacterial genome as a prophage during lysogenization, which protects the cell from being lysed by phages which are serologically identical or related. [ 9 ] Since it is incorporated into the genome, the prophage is also passed down to the bacterium's progeny. [ 7 ] The bacterial strain carrying the prophage is known as a lysogenic strain. [ 9 ] Lysogenization is strain-specific, so it allows for differentiation among different strains of bacteria within the same species. [ 10 ] The prophage may be chemically or physically induced to revert to the lytic pathway. [ 6 ]
The bacterial strain to be characterized is cultured on an agar Petri dish and dried. [ 11 ] Once dry, a grid or another recognizable pattern is drawn on the base to mark out different regions. [ 11 ] Each region is inoculated with a different phage at its routine test dilution and then incubated for 5–48 hours. [ 11 ] Regions inoculated with phages to which the strain is susceptible will display a clearing where the bacteria have been lysed, and this is used in differentiation. [ 12 ] The size, morphology, and pattern of the lysed region are important criteria for differentiating bacterial species and strains. [ 13 ] They are compared against a standard scheme of lysis patterns to assign a type to the strain, as sketched below. [ 14 ]
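The comparison of a lysis pattern against a typing scheme can be sketched as a simple lookup. All phage names, patterns, and type labels below are invented; real schemes (for example, those developed for Salmonella typhi ) are far larger and also weigh the degree of lysis:

```python
# Toy phage-typing lookup: a strain's lysis pattern across a phage panel is
# matched against a standard scheme to assign a type. Phage panel, patterns,
# and type labels are invented for illustration.
PHAGE_PANEL = ("phage A", "phage B", "phage C", "phage D")

scheme = {
    ("+", "+", "-", "-"): "Type 1",
    ("+", "-", "+", "-"): "Type 2",
    ("-", "-", "+", "+"): "Type 3",
}

def assign_type(lysis_pattern):
    """'+' = clearing (lysis) in that phage's grid region, '-' = no lysis."""
    return scheme.get(lysis_pattern, "untypable / new pattern")

observed = ("+", "-", "+", "-")
print(assign_type(observed))   # Type 2
```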
Routine test dilution (RTD) is typically defined as the lowest phage dilution that still yields lysis of its host. [ 11 ] This technique prevents a phenomenon known as "lysis from without", which is bacterial lysis induced by high multiplicity phage adsorption rather than phage replication. [ 15 ]
The first reported use of bacteriophages to identify bacteria was in 1925, when Sonnenschein used typhoid and paratyphoid phages to diagnose typhoid. [ 16 ] In 1934, it was discovered that some strains of Salmonella typhi displayed Vi antigens on the surface. [ 17 ] This led to the isolation of Vi phages capable of lysing typhoid bacterial strains, but only if they displayed the Vi antigen, [ 18 ] enabling differentiation between typhoid strains that express the Vi antigen and those that do not.
In 1938, Craigie and Yen adapted Vi phages by selective propagation and used them at their critical test dilutions to differentiate 11 types of B. typhosus . [ 19 ] In 1943, Felix and Callow extended the method to Salmonella paratyphi B . and differentiated 12 types with 11 phages. [ 20 ] The International Committee for Enteric Phage Typing was established in 1947, and these phage typing methods were soon standardized. [ 21 ]
Improvements to the specificity of phage typing schemes were made throughout the next few decades. In 1959, Callow improved her initial scheme to differentiate 34 types of Salmonella typhimurium with 29 phages. [ 22 ] In 1977, this was extended to 207 types by Anderson at the Enteric Reference Laboratory in London. [ 22 ] Since then, phage typing schemes have been developed for Salmonella typhi , Salmonella paratyphi B. , Salmonella typhimurium , Shigella sonnei , Staphylococcus aureus, and Escherichia coli to name a few. [ 23 ] [ 24 ]
Phage typing requires the use of a comprehensive number of phages, so it is typically only used in reference laboratories. [ 25 ] It also relies on the interpretation of the individual lysis pattern and comparison to a standard, which has led to conflicting results from different laboratories in the past. [ 26 ] Furthermore, bacteriophages mutate, so reference phage stocks must be maintained. [ 25 ]
Phages used for phage typing are generally isolated from the native habitats of the host bacterial strain. [ 27 ] These may include sewage, feces, soil, and water. [ 27 ] Temperate phages may be isolated from the bacterium itself, since the prophage is incorporated into the bacterial genome during lysogenization. [ 27 ] | https://en.wikipedia.org/wiki/Phage_typing
A phagemid or phasmid is a DNA -based cloning vector , which has both bacteriophage and plasmid properties. [ 1 ] These vectors carry, in addition to the origin of plasmid replication, an origin of replication derived from bacteriophage. Unlike commonly used plasmids, phagemid vectors differ by having the ability to be packaged into the capsid of a bacteriophage, due to their having a genetic sequence that signals for packaging. Phagemids are used in a variety of biotechnology applications; for example, they can be used in a molecular biology technique called " phage display ". [ 2 ]
The term "phagemid" or "phagemids" was coined by a group of Soviet scientists, who discovered them, named them, and published the article in April 1984 in Gene magazine. [ 3 ]
A phagemid (plasmid + phage) is a plasmid that contains an f1 origin of replication from an f1 phage . [ 4 ] It can be used as a type of cloning vector in combination with filamentous phage M13 . A phagemid can be replicated as a plasmid, and also be packaged as single stranded DNA in viral particles. Phagemids contain an origin of replication (ori) for double stranded replication, as well as an f1 ori to enable single stranded replication and packaging into phage particles. [ 4 ] Many commonly used plasmids contain an f1 ori and are thus phagemids.
Similarly to a plasmid, a phagemid can be used to clone DNA fragments and be introduced into a bacterial host by a range of techniques, such as transformation and electroporation . However, infection of a bacterial host containing a phagemid with a 'helper' phage, for example VCSM13 or M13K07, provides the necessary viral components to enable single stranded DNA replication and packaging of the phagemid DNA into phage particles. The 'helper' phage infects the bacterial host by first attaching to the host cell's pilus and then, after attachment, transporting the phage genome into the cytoplasm of the host cell. Inside the cell, the phage genome triggers production of single stranded phagemid DNA in the cytoplasm. This phagemid DNA is then packaged into phage particles. The phage particles containing ssDNA are released from the bacterial host cell into the extracellular environment.
Filamentous phages retard bacterial growth but, in contrast to the lambda phage and the T7 phage , are not generally lytic . Helper phages are usually engineered to package less efficiently (via a defective phage origin of replication) [ 5 ] than the phagemid, so that the resultant phage particles contain predominantly phagemid DNA. Infection by f1 filamentous phage requires the presence of a pilus, so only bacterial hosts containing the F-plasmid or its derivatives can be used to generate phage particles.
Prior to the development of cycle sequencing, phagemids were used to generate single stranded DNA templates for sequencing purposes. Today phagemids are still useful for generating templates for site-directed mutagenesis . Detailed characterisation of the filamentous phage life cycle and structural features led to the development of phage display technology, in which a range of peptides and proteins can be expressed as fusions to phage coat proteins and displayed on the viral surface. The displayed peptides and polypeptides are associated with the corresponding coding DNA within the phage particle, so this technique lends itself to the study of protein-protein interactions and other ligand/receptor combinations. | https://en.wikipedia.org/wiki/Phagemid
A phageome is a community of bacteriophages and their metagenomes localized in a particular environment, similar to a microbiome . [ 1 ] [ 2 ] The phageome is a subcategory of the virome , which is all of the viruses associated with a host or environment. [ 3 ] The term was first used in an article by Modi et al. in 2013 [ 4 ] and has continued to be used in scientific articles that relate to bacteriophages and their metagenomes. A bacteriophage , or phage for short, is a virus that can infect bacteria and archaea and can replicate inside of them. Phages make up the majority of most viromes and are currently understood to be the most abundant organisms. [ 5 ] Scientists often examine only the phageome rather than the whole virome while conducting research. Variation due to many factors, such as diet, age, and geography, has also been explored. The phageome has been studied in humans in connection with a wide range of disorders of the human body, including IBD, IBS, and colorectal cancer. [ 6 ]
Although bacteriophages cannot infect human cells, they are found in abundance in the human virome . [ 7 ] Phageome research in humans has largely focused on the gut; however, it is also being investigated in other areas such as the skin, [ 8 ] blood, [ 9 ] and mouth. [ 10 ] The composition of phages that make up a healthy human gut phageome is currently debated, since different methods of research can lead to different results. [ 11 ] At birth, the human phageome, and the overall virome in general, is almost non-existent. [ 12 ] The human phageome is thought to be established in newborns through prophage induction of bacteria passed on from the mother vaginally during birth. [ 12 ] However, phages can also be introduced through breastfeeding, as made evident by studies finding near-exact matches of crAssphage sequences between mother and child. [ 12 ] Variations in the human gut phageome continue across the lifespan. Siphoviridae and Myoviridae are the most abundant in infants and their numbers wane into childhood, whereas Crassvirales dominate in adults. [ 13 ] The phageome can also experience changes as a result of diet, which can introduce new phages present in our foods. [ 6 ] For example, in those with gluten-free diets, crAssphage were noted in higher abundance along with decreases in the families of Podoviridae . [ 13 ] Global geographical differences in phageome composition have been noted, with further variation found between individuals living in rural and urban locations. [ 13 ] For instance, residents in Hong Kong, China, were found to have fewer phages associated with targeting pathogenic bacteria in comparison to those in Yunnan province. [ 14 ] Furthermore, residing for longer periods of time in urban regions correlated with increases of Lactobacillus and Lactococcus phages. [ 14 ]
Changes in the phageome have been seen in various disorders affecting the human body. In the gut, unique changes in the phageome have been described in both inflammatory bowel disease and irritable bowel syndrome . [ 12 ] Even more specific changes exist in subtypes of the two disorders. The IBS subtypes IBS-D and IBS-C saw increases in different species belonging to Microviridae and Myoviridae . [ 12 ] In ulcerative colitis and Crohn's disease , which are subtypes of IBD, differences in levels of Caudovirales richness and species have been found. [ 15 ] Furthermore, phages that target Acinetobacter have been found in the blood of patients with Crohn's disease. [ 9 ] This is thought to occur because the compromised, inflamed gut barrier allows bacteriophage transfer. [ 9 ] In the mouth, periodontitis has been associated with Myoviridae residing under the gums, along with a currently unspecified bacteriophage in the Siphoviridae family. [ 10 ] Phageome changes have also been described in metabolic disorders including type-1 diabetes , type-2 diabetes and metabolic syndrome . In type-1 diabetes, overall shifts have been seen in Myoviridae and Podoviridae. [ 6 ] The genomes of bacteriophages residing in the gut of type-2 diabetes patients have been shown to contain numerous genes implicated in disease development. [ 6 ] Total phage representation in the virome is higher in individuals with cardiovascular disease than in healthy controls, totaling 63% and 18% respectively. [ 6 ] Lastly, researchers studying colorectal cancer have observed increased richness in a variety of phage genera, with the most notable differences seen in Inovirus and Tunalikevirus. [ 13 ] | https://en.wikipedia.org/wiki/Phageome
Phagocytosis (from Ancient Greek φαγεῖν (phagein) ' to eat ' and κύτος (kytos) ' cell ' ) is the process by which a cell uses its plasma membrane to engulf a large particle (≥ 0.5 μm), giving rise to an internal compartment called the phagosome . It is one type of endocytosis . A cell that performs phagocytosis is called a phagocyte .
In a multicellular organism's immune system , phagocytosis is a major mechanism used to remove pathogens and cell debris. The ingested material is then digested in the phagosome. Bacteria, dead tissue cells, and small mineral particles are all examples of objects that may be phagocytized. Some protozoa use phagocytosis as a means to obtain nutrients. The two main cells that do this in the immune system are macrophages and neutrophils .
Where phagocytosis is used as a means of feeding and provides the organism part or all of its nourishment, it is called phagotrophy and is distinguished from osmotrophy , which is nutrition taking place by absorption. [ citation needed ]
The history of phagocytosis represents the scientific establishment of immunology , as the process is the first immune response mechanism discovered and understood as such. [ 1 ] [ 2 ] The earliest definitive account of cell eating was given by the Swiss scientist Albert von Kölliker in 1849. [ 3 ] In his report in Zeitschrift für Wissenschaftliche Zoologie, Kölliker described the feeding process of the amoeba-like protist Actinophrys sol (a heliozoan ), mentioning details of how it engulfed and swallowed a small organism (the process now called endocytosis), which he called an infusorian (a generic name for microbes at the time). [ 4 ]
The first demonstration of phagocytosis as a property of leucocytes, the immune cells, was from the German zoologist Ernst Haeckel . [ 5 ] [ 6 ] Haeckel discovered that blood cells of the sea slug Tethys could ingest Indian ink (or indigo [ 7 ] ) particles. It was the first direct evidence of phagocytosis by immune cells. [ 5 ] [ 7 ] Haeckel reported his experiment in an 1862 monograph, Die Radiolarien (Rhizopoda Radiaria): Eine Monographie. [ 8 ]
Phagocytosis was noted by Canadian physician William Osler (1876), [ 9 ] and later studied and named by Élie Metchnikoff (1880, 1883). [ 10 ]
Phagocytosis is one of the main mechanisms of innate immune defense. It is one of the first processes responding to infection , and is also one of the initiating branches of an adaptive immune response. Although most cells are capable of phagocytosis, some cell types perform it as part of their main function. These are called 'professional phagocytes.' Phagocytosis is old in evolutionary terms, being present even in invertebrates . [ 11 ]
Neutrophils , macrophages , monocytes , dendritic cells , osteoclasts and eosinophils can be classified as professional phagocytes. [ 10 ] The first three have the greatest role in immune response to most infections. [ 11 ]
Neutrophils patrol the bloodstream and rapidly migrate into tissues in large numbers in case of infection. [ 11 ] There they have a direct microbicidal effect through phagocytosis. After ingestion, neutrophils are efficient in intracellular killing of pathogens. Neutrophils phagocytose mainly via the Fcγ receptors and complement receptors 1 and 3. The microbicidal effect of neutrophils is due to a large repertoire of molecules present in pre-formed granules. These granules contain proteases, such as collagenase , gelatinase or serine proteases , as well as myeloperoxidase , lactoferrin and antibiotic proteins. Degranulation of these into the phagosome, accompanied by high reactive oxygen species production (oxidative burst), is highly microbicidal. [ 12 ]
Monocytes, and the macrophages that mature from them, leave blood circulation to migrate through tissues. There they are resident cells and form a resting barrier. [ 11 ] Macrophages initiate phagocytosis by mannose receptors , scavenger receptors , Fcγ receptors and complement receptors 1, 3 and 4. Macrophages are long-lived and can continue phagocytosis by forming new lysosomes. [ 11 ] [ 13 ]
Dendritic cells also reside in tissues and ingest pathogens by phagocytosis. Their role is not killing or clearance of microbes, but rather breaking them down for antigen presentation to the cells of the adaptive immune system. [ 11 ]
Receptors for phagocytosis can be divided into two categories based on the molecules they recognise. The first category, opsonic receptors, is dependent on opsonins . [ 14 ] Among these are receptors that recognise the Fc part of bound IgG antibodies, deposited complement, or other opsonins of cell or plasma origin. Non-opsonic receptors include lectin-type receptors, the Dectin receptor, and scavenger receptors. Some phagocytic pathways require a second signal from pattern recognition receptors (PRRs) activated by attachment to pathogen-associated molecular patterns (PAMPs), which leads to NF-κB activation. [ 10 ]
Fcγ receptors recognise IgG-coated targets. The main recognised part is the Fc fragment . The receptor molecule contains an intracellular ITAM domain or associates with an ITAM-containing adaptor molecule. ITAM domains transduce the signal from the surface of the phagocyte to the nucleus. For example, the activating receptors of human macrophages are FcγRI , FcγRIIA , and FcγRIII . [ 13 ] Fcγ receptor mediated phagocytosis includes formation of protrusions of the cell called a 'phagocytic cup' and activates an oxidative burst in neutrophils. [ 12 ]
These receptors recognise targets coated in C3b , C4b and C3bi from plasma complement. The extracellular domain of the receptors contains a lectin-like complement-binding domain. Recognition by complement receptors is not enough to cause internalisation without additional signals. In macrophages, the CR1 , CR3 and CR4 are responsible for recognition of targets. Complement coated targets are internalised by 'sinking' into the phagocyte membrane, without any protrusions. [ 13 ]
Mannose and other pathogen-associated sugars, such as fucose , are recognised by the mannose receptor. Eight lectin-like domains form the extracellular part of the receptor. The ingestion mediated by the mannose receptor is distinct in molecular mechanisms from Fcγ receptor or complement receptor mediated phagocytosis. [ 13 ]
Engulfment of material is facilitated by the actin-myosin contractile system. The phagosome is the organelle formed by phagocytosis of material. It then moves toward the centrosome of the phagocyte and is fused with lysosomes , forming a phagolysosome and leading to degradation. Progressively, the phagolysosome is acidified, activating degradative enzymes. [ 10 ] [ 15 ]
Degradation can be oxygen-dependent or oxygen-independent.
Leukocytes generate hydrogen cyanide during phagocytosis, and can kill bacteria , fungi , and other pathogens by generating several other toxic chemicals. [ 17 ] [ 18 ] [ 19 ]
Some bacteria, for example Treponema pallidum , Escherichia coli and Staphylococcus aureus , are able to avoid phagocytosis by several mechanisms.
Following apoptosis , the dying cells need to be taken up into the surrounding tissues by macrophages in a process called efferocytosis . One of the features of an apoptotic cell is the presentation of a variety of intracellular molecules on the cell surface, such as calreticulin , phosphatidylserine (from the inner layer of the plasma membrane), annexin A1 , oxidised LDL and altered glycans . [ 20 ] These molecules are recognised by receptors on the cell surface of the macrophage, such as the phosphatidylserine receptor, or by soluble (free-floating) receptors such as thrombospondin 1 , GAS6 , and MFGE8 , which themselves then bind to other receptors on the macrophage such as CD36 and alpha-v beta-3 integrin . Defects in apoptotic cell clearance are usually associated with impaired phagocytosis by macrophages. Accumulation of apoptotic cell remnants often causes autoimmune disorders; thus pharmacological potentiation of phagocytosis has medical potential in the treatment of certain forms of autoimmune disorders. [ 21 ] [ 22 ] [ 23 ] [ 24 ]
Phagocytosis is used by many protists as a means of feeding, thus constituting phagotrophy.
As in phagocytic immune cells, the resulting phagosome may be merged with lysosomes ( food vacuoles ) containing digestive enzymes , forming a phagolysosome . The food particles will then be digested, and the released nutrients are diffused or transported into the cytosol for use in other metabolic processes. [ 26 ]
Mixotrophy can involve phagotrophic nutrition and phototrophic nutrition. [ 27 ] | https://en.wikipedia.org/wiki/Phagocytosis |
Phagophilia or phagophily is the behaviour of feeding on parasites . It is also an example of cleaning symbiosis . [ 1 ] [ 2 ]
Austrian arachnologist Max Beier reported on phagophilia in pseudoscorpions . Many pseudoscorpion species co-exist with packrat species, and two of them are known to feed on packrat ectoparasites , to mutual benefit. [ 3 ] [ 4 ]
| https://en.wikipedia.org/wiki/Phagophilia
Phagoptosis (cell death by phagocytosis) is a type of cell death caused by the cell being phagocytosed (i.e. eaten) by another cell, and therefore this form of cell death is prevented by blocking phagocytosis . [ 1 ] [ 2 ]
Phagocytosis of an otherwise-viable cell may occur because the cell is recognised as stressed, activated, senescent, damaged, pathogenic or non-self, or is misrecognised. Cells are phagocytosed as a result of: i) expressing eat-me signals on their surface, ii) losing don't-eat-me signals, and/or iii) binding of opsonins . It is clear that otherwise-viable cells can expose or bind such phagocytosis-promoting signals as a result of cell stress, activation or senescence. Phagoptosis is probably the most common form of cell death in the body, as it is responsible for erythrocyte turnover. There is also increasing evidence that it mediates physiological death of neutrophils, T cells , platelets and stem cells , and thereby regulates inflammation, immunity, clotting and neurogenesis. Phagoptosis is a major form of host defence against pathogens and cancer cells. However, recent evidence indicates that excessive phagoptosis may kill host cells in inflammatory conditions, contributing to haemophagocytic conditions and neuronal loss in the inflamed brain. [ 1 ]
Phagoptosis is normally caused by: the cell exposing so-called "eat-me" signals on its surface, and/or the cell no longer exposing "don't-eat-me" signals, and/or the cell being opsonised, i.e., binding soluble proteins that tag the cell for phagocytosis. For example, phosphatidylserine is an "eat-me" signal that, when exposed on the surface of a cell, triggers phagocytes (i.e. cells that eat other cells) to eat that cell. Phosphatidylserine is normally found on the inside of healthy cells, but can become exposed on the surface of dying, activated or stressed cells. Phagocytosis of such cells requires specific receptors on the phagocyte that recognise either phosphatidylserine directly or opsonins bound to the phosphatidylserine or other "eat-me" signals, such as calreticulin . "Don't-eat-me" signals include CD47 , which, when expressed on the surface of a cell, inhibits phagocytosis of that cell by activating SIRP-alpha receptors on the phagocyte. Opsonins are normally soluble proteins which, when bound to the surface of a cell, induce phagocytes to phagocytose that cell. Opsonins include Mfge8 , Gas6 , Protein S , antibodies and complement factors C1q and C3b . [ 2 ]
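The signal logic just described can be summarised qualitatively as a toy decision rule; this is a simplification for illustration only, not a quantitative model of how phagocytes integrate these signals:

```python
# Toy summary of the phagoptosis signal logic: engulfment is favoured by
# eat-me signals (e.g., exposed phosphatidylserine) or bound opsonins, and
# opposed by don't-eat-me signals (e.g., CD47). A qualitative sketch only.
def is_engulfed(eat_me_exposed, dont_eat_me_present, opsonin_bound):
    pro_engulfment = eat_me_exposed or opsonin_bound
    return pro_engulfment and not dont_eat_me_present

# A healthy cell: no eat-me signal, CD47 ("don't-eat-me") present.
print(is_engulfed(False, True, False))    # False: left alone
# A stressed cell exposing phosphatidylserine and lacking CD47.
print(is_engulfed(True, False, False))    # True: phagocytosed alive
```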
Phagoptosis has multiple functions including removal and disposal of: pathogenic cells, aged cells, damaged cells, stressed cells and activated cells. Pathogenic cells such as bacteria can be opsonised by antibodies or complement factors, enabling their phagocytosis and phagoptosis by macrophages and neutrophils. "Aged" erythrocytes and neutrophils, as well as "activated" platelets, neutrophils and T-cells, are thought to be phagocytosed alive by macrophages.
Development. Phagoptosis removes excess cells during development in the worm, C. elegans . [ 3 ] [ 4 ] During mammalian development multiple cells undergo programmed cell senescence and are then phagocytosed by macrophages. [ 5 ] Brain macrophages (microglia) can regulate the number of neural precursor cells in the developing brain by phagocytosing these otherwise viable precursors and thus limiting neurogenesis. [ 6 ]
Turnover of blood cells. Red blood cells (erythrocytes) live for roughly 4 months in the blood before being phagocytosed by macrophages. Old erythrocytes do not die on their own, but rather display changes in the cell surface that enable macrophages to recognise them as old or damaged, including exposure of phosphatidylserine, desialylation of glycoproteins, loss or changed conformation of the "don't-eat-me" signal CD47, and exposure of novel antigens that bind endogenous antibodies. [ 7 ] Neutrophils have a daily rhythm of entry and exit from the blood, driven by neutrophil "aging" in the circulation, causing decreased expression of CD62L and increased expression of CXCR4, which directs the "aged" neutrophils to the bone marrow, where they are phagocytosed by macrophages. [ 8 ] However, it is still unclear how or why neutrophils turn over at such an enormous rate. Antigen recognition causes phosphatidylserine exposure on activated T-cells, which is recognized by Tim-4 on macrophages, inducing phagoptosis of the activated T-cells and thus the contraction phase of the adaptive response. [ 9 ]
Host defence against pathogens. Phagocytosis of otherwise-viable pathogens, such as bacteria, can be mediated by neutrophils, monocytes, macrophages, microglia and dendritic cells, and is central to host defence against pathogens. [ 10 ] Dendritic cells can phagocytose viable neutrophils, and present antigens derived from bacteria or cancer cell debris previously phagocytosed by the neutrophils. [ 11 ] Thus phagoptosis can contribute to host defence in a variety of ways.
Host defence against cancer. It has been known for some time that animals defend themselves against cancer by antibody-mediated or antibody-independent phagocytosis of viable tumour cells by macrophages. Recognition of viable cancer cells for phagocytosis may be based on the expression of novel antigens, senescence markers, phosphatidylserine or calreticulin. More recently it has become clear that most human cancer cells overexpress CD47 on their surface to prevent themselves being phagocytosed, and that if this ‘don’t-eat-me’ signalling is blocked then a variety of cancers can be cleared from the body. [ 12 ] Thus it would appear that phagoptosis is an important defence against cancer, but that tumour cells can suppress this, and blocking this suppression is an attractive therapeutic option.
Pathological phagoptosis of blood cells. Hemophagocytosis is a clinical condition, found in many infectious and inflammatory disorders, where activated macrophages have engulfed apparently viable blood cells, resulting in reduced white or red cell count (cytopenia). IFN-γ (and possibly other cytokines) appears to drive hemophagocytosis during infection by directly stimulating phagoptosis of blood cells by macrophages. [ 13 ] Hemophagocytic lymphohistiocytosis (HLH) is characterized by excessive engulfment of hematopoietic stem cells (HSCs) by bone marrow macrophages, and this has been found to result from down regulation of CD47 expression on HSCs, enabling macrophages to eat them alive. [ 14 ]
Pathological phagoptosis in the brain. Microglial phagocytosis of stressed-but-viable neurons occurs under inflammatory conditions, and may contribute to neuronal loss in brain pathologies [2]. | https://en.wikipedia.org/wiki/Phagoptosis |
In cell biology , a phagosome is a vesicle formed around a particle engulfed by a phagocyte via phagocytosis . Professional phagocytes include macrophages , neutrophils , and dendritic cells (DCs). [ 1 ]
A phagosome is formed by the fusion of the cell membrane around a microorganism , a senescent cell or an apoptotic cell . Phagosomes have membrane-bound proteins to recruit and fuse with lysosomes to form mature phagolysosomes . The lysosomes contain hydrolytic enzymes and reactive oxygen species (ROS) which kill and digest the pathogens . Phagosomes can also form in non-professional phagocytes, but they can only engulf a smaller range of particles, and do not contain ROS. The useful materials (e.g. amino acids ) from the digested particles are moved into the cytosol , and waste is removed by exocytosis . Phagosome formation is crucial for tissue homeostasis and both innate and adaptive host defense against pathogens.
However, some bacteria can exploit phagocytosis as an invasion strategy. They either reproduce inside of the phagolysosome ( e.g. Coxiella spp.) [ 2 ] or escape into the cytoplasm before the phagosome fuses with the lysosome (e.g. Rickettsia spp.). [ 3 ] Many Mycobacteria, including Mycobacterium tuberculosis [ 4 ] [ 5 ] and Mycobacterium avium paratuberculosis , [ 6 ] can manipulate the host macrophage to prevent lysosomes from fusing with phagosomes and creating mature phagolysosomes. Such incomplete maturation of the phagosome maintains an environment favorable to the pathogens inside it. [ 7 ]
Phagosomes are large enough to degrade whole bacteria, or apoptotic and senescent cells, which are usually >0.5μm in diameter. [ 8 ] This means a phagosome is several orders of magnitude bigger than an endosome , which is measured in nanometres .
Phagosomes are formed when pathogens or opsonins bind to transmembrane receptors, which are randomly distributed on the phagocyte cell surface. Upon binding, "outside-in" signalling triggers actin polymerisation and pseudopodia formation, which surround the microorganism and fuse behind it. Protein kinase C , phosphoinositide 3-kinase , and phospholipase C (PLC) are all needed for signalling and controlling particle internalisation. [ 9 ] More cell surface receptors can bind to the particle in a zipper-like mechanism as the pathogen is surrounded, increasing the binding avidity . [ 10 ] Fc receptor (FcR), complement receptors (CR), mannose receptor and dectin-1 are phagocytic receptors, which means that they can induce phagocytosis if they are expressed in non-phagocytic cells such as fibroblasts . [ 11 ] Other proteins such as Toll-like receptors are involved in pathogen pattern recognition and are often recruited to phagosomes but do not specifically trigger phagocytosis in non-phagocytic cells, so they are not considered phagocytic receptors.
Opsonins are molecular tags such as antibodies and complements that attach to pathogens and up-regulate phagocytosis. Immunoglobulin G (IgG) is the major type of antibody present in the serum . It is part of the adaptive immune system , but it links to the innate response by recruiting macrophages to phagocytose pathogens. The antibody binds to microbes with the variable Fab domain , and the Fc domain binds to Fc receptors (FcR) to induce phagocytosis.
Complement-mediated internalisation has much less significant membrane protrusions, but the downstream signalling of both pathways converge to activate Rho GTPases . [ 12 ] They control actin polymerisation which is required for the phagosome to fuse with endosomes and lysosomes.
Other non-professional phagocytes have some degree of phagocytic activity, such as thyroid and bladder epithelial cells that can engulf erythrocytes and retinal epithelial cells that internalise retinal rods. [ 8 ] However non-professional phagocytes do not express specific phagocytic receptors such as FcR and have a much lower rate of internalisation.
Some invasive bacteria can also induce phagocytosis in non-phagocytic cells to mediate host uptake. For example, Shigella can secrete toxins that alter the host cytoskeleton and enter the basolateral side of enterocytes . [ 13 ]
As the membrane of the phagosome is formed by the fusion of the plasma membrane, the basic composition of the phospholipid bilayer is the same. Endosomes and lysosomes then fuse with the phagosome to contribute to the membrane, especially when the engulfed particle is very big, such as a parasite . [ 14 ] They also deliver various membrane proteins to the phagosome and modify the organelle structure.
Phagosomes can engulf artificial low-density latex beads and can then be purified along a sucrose concentration gradient, allowing their structure and composition to be studied. [ 15 ] By purifying phagosomes at different time points, the maturation process can also be characterised. Early phagosomes are characterised by Rab5, which transitions into Rab7 as the vesicle matures into a late phagosome.
The nascent phagosome is not inherently bactericidal. As it matures, it becomes more acidic, from pH 6.5 to pH 4, and gains characteristic protein markers and hydrolytic enzymes. The different enzymes have different pH optima, so that each works within a narrow stage of the maturation process. Enzyme activity can be fine-tuned by modifying the pH level, allowing for greater flexibility. The phagosome moves along microtubules of the cytoskeleton , fusing with endosomes and lysosomes sequentially in a dynamic "kiss-and-run" manner. [ 16 ] This intracellular transport depends on the size of the phagosomes. Larger organelles (with a diameter of about 3 μm) are transported very persistently from the cell periphery towards the perinuclear region, whereas smaller organelles (with a diameter of about 1 μm) are transported more bidirectionally back and forth between the cell center and the cell periphery. [ 17 ] Vacuolar proton pumps (v-ATPase) are delivered to the phagosome to acidify the organelle compartment, creating a more hostile environment for pathogens and facilitating protein degradation. The bacterial proteins are denatured at low pH and become more accessible to the proteases, which are unaffected by the acidic environment. The enzymes are later recycled from the phagolysosome before egestion so they are not wasted. The composition of the phospholipid membrane also changes as the phagosome matures. [ 15 ]
Fusion may take minutes to hours depending on the contents of the phagosome; FcR- or mannose receptor-mediated fusion lasts less than 30 minutes, but phagosomes containing latex beads may take several hours to fuse with lysosomes. [ 8 ] It is suggested that the composition of the phagosome membrane affects the rate of maturation. Mycobacterium tuberculosis has a very hydrophobic cell wall , which is hypothesised to prevent membrane recycling and recruitment of fusion factors, so the phagosome does not fuse with lysosomes and the bacterium avoids degradation. [ 18 ]
Smaller lumenal molecules are transferred by fusion faster than larger molecules, which suggests that a small aqueous channel forms between the phagosome and other vesicles during "kiss-and-run", through which only limited exchange is allowed. [ 8 ]
Shortly after internalisation, F-actin depolymerises from the newly formed phagosome so it becomes accessible to endosomes for fusion and delivery of proteins. [ 8 ] The maturation process is divided into early and late stages depending on characteristic protein markers, regulated by small Rab GTPases. Rab5 is present on early phagosomes, and controls the transition to late phagosomes marked by Rab7. [ 19 ]
Rab5 recruits PI-3 kinase and other tethering proteins such as Vps34 to the phagosome membrane, so endosomes can deliver proteins to the phagosome. Rab5 is partially involved in the transition to Rab7, via the CORVET complex and the HOPS complex in yeast. [ 19 ] The exact maturation pathway in mammals is not well understood, but it is suggested that HOPS can bind Rab7 and displace the guanosine nucleotide dissociation inhibitor (GDI). [ 20 ] Rab11 is involved in membrane recycling. [ 21 ]
The phagosome fuses with lysosomes to form a phagolysosome, which has various bactericidal properties. The phagolysosome contains reactive oxygen and nitrogen species (ROS and RNS) and hydrolytic enzymes. The compartment is also acidic due to proton pumps (v-ATPases) that transport H + across the membrane, used to denature the bacterial proteins.
The exact properties of phagolysosomes vary depending on the type of phagocyte. Those in dendritic cells have weaker bactericidal properties than those in macrophages and neutrophils. Also, macrophages are divided into pro-inflammatory "killer" M1 and "repair" M2. The phagolysosomes of M1 can metabolise arginine into highly reactive nitric oxide , while M2 uses arginine to produce ornithine to promote cell proliferation and tissue repair. [ 22 ]
Macrophages and neutrophils are professional phagocytes in charge of most of the pathogen degradation, but they have different bactericidal methods. Neutrophils have granules that fuse with the phagosome. The granules contain NADPH oxidase and myeloperoxidase , which produce toxic oxygen and chlorine derivatives to kill pathogens in an oxidative burst . Proteases and anti-microbial peptides are also released into the phagolysosome. Macrophages lack granules, and rely more on phagolysosome acidification, glycosidases , and proteases to digest microbes. [ 21 ] Phagosomes in dendritic cells are less acidic and have much weaker hydrolytic activity, due to a lower concentration of lysosomal proteases and even the presence of protease inhibitors.
Phagosome formation is tied to inflammation via common signalling molecules. PI-3 kinase and PLC are involved in both the internalisation mechanism and triggering inflammation. [ 9 ] The two proteins, along with Rho GTPases, are important components of the innate immune response, inducing cytokine production and activating the MAP kinase signalling cascade. Pro-inflammatory cytokines including IL-1β , IL-6 , TNFα , and IL-12 are all produced. [ 8 ]
The process is tightly regulated and the inflammatory response varies depending on the particle type within the phagosome. Pathogen-infected apoptotic cells will trigger inflammation, but damaged cells that are degraded as part of normal tissue turnover do not. The response also differs depending on which receptor mediates the opsonin-driven phagocytosis. FcR and mannose receptor-mediated reactions produce pro-inflammatory reactive oxygen species and arachidonic acid molecules, but CR-mediated reactions do not result in those products. [ 8 ]
Immature dendritic cells (DCs) can phagocytose, but mature DCs cannot due to changes in Rho GTPases involved in cytoskeleton remodelling. [ 21 ] The phagosomes of DCs are less hydrolytic and acidic than those of macrophages and neutrophils, as DCs are mainly involved in antigen presentation rather than pathogen degradation. They need to retain protein fragments of a suitable size for specific bacterial recognition, so the peptides are only partially degraded. [ 21 ] Peptides from the bacteria are trafficked to the Major Histocompatibility Complex (MHC). The peptide antigens are presented to lymphocytes , where they bind to T-cell receptors and activate T-cells , bridging the gap between innate and adaptive immunity. [ 9 ] This is specific to mammals , birds , and jawed fish, as insects do not have adaptive immunity. [ 23 ]
Ancient single-celled organisms such as amoebae use phagocytosis as a way to acquire nutrients, rather than as an immune strategy. They engulf smaller microbes and digest them within phagosomes at a rate of around one bacterium per minute, which is much faster than professional phagocytes. [ 24 ] For the soil amoeba Dictyostelium discoideum , the main food source is bacteria, including Legionella pneumophila , which causes Legionnaires' disease in humans. [ 25 ] Phagosome maturation in amoebae is very similar to that in macrophages, so they are used as a model organism to study the process. [ 16 ]
Phagosomes degrade senescent cells and apoptotic cells to maintain tissue homeostasis. Erythrocytes have one of the highest turnover rates in the body, and they are phagocytosed by macrophages in the liver and spleen . In the embryo , the process of removing dead cells is not well characterised, but it is not performed by macrophages or other cells derived from hematopoietic stem cells . [ 26 ] It is only in the adult that apoptotic cells are phagocytosed by professional phagocytes. Inflammation is only triggered by certain pathogen- or damage-associated molecular patterns (PAMPs or DAMPs); the removal of senescent cells is non-inflammatory. [ 14 ]
Autophagosomes differ from phagosomes in that they are mainly used to selectively degrade damaged cytosolic organelles such as mitochondria ( mitophagy ). However, when the cell is starved or stressed, autophagosomes can also non-selectively degrade organelles to provide the cell with amino acids and other nutrients. [ 27 ] Autophagy is not limited to professional phagocytes; it was first discovered in rat hepatocytes by the cell biologist Christian de Duve . [ 28 ] Autophagosomes have a double membrane: the inner one comes from the engulfed organelle, and the outer membrane is speculated to be formed from the endoplasmic reticulum or the ER-Golgi Intermediate Compartment (ERGIC). [ 29 ] The autophagosome also fuses with lysosomes to degrade its contents. When M. tuberculosis inhibits phagosome acidification, interferon gamma can induce autophagy and rescue the maturation process. [ 30 ]
Many bacteria have evolved to evade the bactericidal properties of phagosomes or even exploit phagocytosis as an invasion strategy. | https://en.wikipedia.org/wiki/Phagosome |
Phalanx Biotech Group was founded in 2002 as a result of collaboration between Taiwan 's Industrial Technology Research Institute ( ITRI ) and several private companies and research institutes. [ 1 ] It is a manufacturer of DNA microarrays and a provider of gene expression profiling and microRNA profiling services, based in Hsinchu , Taiwan ; San Diego , California ; Shanghai , China ; and Beijing , China . The company sells its DNA microarrays and service platform under the registered trademark name OneArray. [ 2 ]
Phalanx Biotech Group is a member of the FDA-led Microarray Quality Control Project. [ 3 ] [ 4 ]
Phalanx Biotech Group is a manufacturer and provider of DNA microarray products and services used for gene expression profiling and miRNA profiling.
Human, Mouse, Rat and Yeast whole genome OneArray DNA microarrays are manufactured and used for gene expression profiling products and services.
The miRNA profiling products and services include miRNA OneArray microarrays and related services for Human, Rodent, and many Model organism and Plant species.
Other than the OneArray services, Phalanx also offers Agilent microarray services, qPCR services, PCR array profiling services, and NGS services. Each one of these services can be accompanied by an extensive, customizable bioinformatics package.
The DNA microarrays are produced using a patented non-contact inkjet deposition [ 5 ] of intact oligonucleotides . This is performed using a patented inkjet dispensing apparatus. [ 6 ] [ 7 ] The oligonucleotides are deposited on a standard-size 25 mm × 75 mm glass slide. | https://en.wikipedia.org/wiki/Phalanx_Biotech_Group
Phanes are abstractions of highly complex organic molecules, introduced to simplify the naming of such molecules.
Systematic nomenclature in organic chemistry builds a name for the structure of an organic compound from the names of its component parts while also describing their relative positions within the structure. Naming information is summarised by IUPAC: [ 1 ] [ 2 ] [ 3 ]
"Phane nomenclature is a new method for building names for organic structures by assembling names that describe component parts of a complex structure. It is based on the idea that a relatively simple skeleton for a parent hydride can be modified by an operation called 'amplification', a process that replaces one or more special atoms ( superatoms ) of a simplified skeleton by multiatomic structures".
Whilst the name cyclophane describes only a limited number of structures in which benzene rings are interconnected by individual atoms or chains, 'phane' is a class name that also includes other ring systems, hence heterocyclic rings as well. The various cyclophanes therefore also belong to the general class of phanes, keeping in mind that the cyclic structures in phanes can be much more diverse.
| https://en.wikipedia.org/wiki/Phanes_(organic_chemistry)
Phang Siew Moi is a full professor at the Department of Biotechnology, Faculty of Applied Science, UCSI University . [ 1 ] She is a leading expert in algal biotechnology and utilization, particularly converting algae into biodiesel . [ 2 ] [ 3 ] [ 4 ]
Dr. Phang Siew Moi, FASc is a distinguished professor and Deputy Vice-Chancellor (Research and Postgraduate) at UCSI University. [ 1 ] She was formerly a full professor at the Institute of Biological Sciences, Faculty of Science, University of Malaya . She was also the founding director of Institute of Ocean & Earth Science; [ 5 ] currently, Dr. Phang is an Honorary Advisor and professor emerita . [ 6 ]
She has been featured in a special edition of the Stay Hungry, Stay Foolish documentary series on Astro AEC . [ 7 ]
Dr. Phang won the Newton Prize in 2017 for her work on developing an integrated microbial fuel cell prototype using tropical algae from wastewater . She was also included in Stanford University 's list of the World's Top 2% of Scientists in 2021. [ 8 ]
Phang Siew Moi has published papers in phycology , algae biotechnology , and seaweed biotechnology. [ 9 ] Examples include:
| https://en.wikipedia.org/wiki/Phang_Siew_Moi |
Phantom power , in the context of professional audio equipment , is DC electric power equally applied to both signal wires in balanced microphone cables, forming a phantom circuit , to operate microphones that contain active electronic circuitry. [ 1 ] It is best known as a convenient power source for condenser microphones , though many active direct boxes also use it. The technique is also used in other applications where power supply and signal communication take place over the same wires.
Phantom power supplies are often built into mixing consoles , microphone preamplifiers and similar equipment. In addition to powering the circuitry of a microphone, traditional condenser microphones also use phantom power for polarizing the microphone's transducer element.
Phantom powering has been used in copper wire-based telephone landlines since the introduction of the rotary dial telephone in 1919. One such application in the telephone system was to provide a DC signalling path around transformer-connected amplifiers, such as in analogue line transmission systems.
The first known commercially available phantom-powered microphone was the Schoeps model CMT 20, which came out in 1964, built to the specifications of French radio with 9–12 volt DC phantom power; the positive pole of this powering was grounded. Microphone preamplifiers of the Nagra IV-series tape recorders offered this type of powering as an option for many years and Schoeps continued to support "negative phantom" until the CMT series was discontinued in the mid-1970s, but it is obsolete now.
In 1966, Neumann GmbH presented a new type of transistorized microphone to the Norwegian Broadcasting Corporation , NRK. Norwegian Radio had requested phantom-powered operation. Since NRK already had 48-volt power available in their studios for their emergency lighting systems, this voltage was used for powering the new microphones (model KM 84), and is the origin of 48-volt phantom power. This arrangement was later standardized in DIN 45596.
The International Electrotechnical Commission Standards Committee's "Multimedia systems – Guide to the recommended characteristics of analogue interfaces to achieve interoperability" (IEC 61938:2018) specifies parameters for microphone phantom power delivery. [ 2 ] Three variants are defined by the document: P12, P24 and P48. Two further variants (P12L and SP48) are also mentioned for specialized applications. [ 3 ] [ 4 ] Most microphones now use the P48 standard (maximum available power is 240 mW). Although 12 and 48-volt systems are still in use, the standard recommends a 24-volt supply for new systems. [ 5 ]
Phantom powering consists of a phantom circuit where direct current is applied equally through the two signal lines of a balanced audio connector (in modern equipment, both pins 2 and 3 of an XLR connector ). The supply voltage is referenced to the ground pin of the connector (pin 1 of an XLR), which normally is connected to the cable shield or a ground wire in the cable or both. When phantom powering was introduced, one of its advantages was that the same type of balanced, shielded microphone cable that studios were already using for dynamic microphones could be used for condenser microphones. This is in contrast to microphones with vacuum-tube circuitry, most of which require special, multi-conductor cables. [ a ]
With phantom power, the supply voltage is effectively invisible to balanced microphones that do not use it, which includes most dynamic microphones. A balanced signal consists only of the differences in voltage between two signal lines; phantom powering places the same DC voltage on both signal lines of a balanced connection. This is in marked contrast to another, slightly earlier method of powering known as "parallel powering" or "T-powering" (from the German term Tonaderspeisung ), in which DC was overlaid directly onto the signal in differential mode. Connecting a conventional microphone to an input that had parallel powering enabled could very well damage the microphone.
The IEC 61938 Standard defines 48-volt, 24-volt, and 12-volt phantom powering. The signal conductors are positive, both fed through resistors of equal value (6.81 kΩ for 48 V, 1.2 kΩ for 24 V, and 680 Ω for 12 V), and the shield is ground . The 6.81 kΩ value is not critical, but the resistors must be matched to within 0.1% [ 6 ] or better to maintain good common-mode rejection in the circuit. The 24-volt version of phantom powering, proposed quite a few years after the 12 and 48 V versions, was also included in the DIN standard and is in the IEC standard, but it was never widely adopted by equipment manufacturers.
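The practical effect of feeding the supply through the two matched resistors can be illustrated with a short calculation: for DC, the two feed resistors act in parallel, so the microphone sees an effective source resistance of half the per-leg value, and the voltage actually reaching it falls as it draws more current. The sketch below (Python; the supply voltages and feed resistances are the standard figures quoted above, while the example current draws are illustrative assumptions) makes this concrete.

```python
# Illustrative calculation of the DC voltage remaining at a microphone's
# terminals under the three IEC 61938 phantom-power variants.
# Supply voltages and per-leg feed resistances follow the figures quoted
# above; the current draws are example values only.

VARIANTS = {
    "P48": (48.0, 6810.0),  # supply volts, per-leg feed resistance in ohms (6.81 kOhm)
    "P24": (24.0, 1200.0),
    "P12": (12.0, 680.0),
}

def voltage_at_mic(supply_v: float, feed_r_ohm: float, draw_ma: float) -> float:
    """Voltage across the microphone when it draws `draw_ma` milliamps.

    For DC, the two feed resistors (one per signal leg) are effectively in
    parallel, so the source resistance seen by the microphone is feed_r_ohm / 2.
    """
    effective_r = feed_r_ohm / 2.0
    return supply_v - effective_r * (draw_ma / 1000.0)

if __name__ == "__main__":
    for name, (volts, ohms) in VARIANTS.items():
        for draw in (1.0, 5.0, 10.0):  # example current draws in mA
            print(f"{name}: {draw:4.1f} mA drawn -> "
                  f"{voltage_at_mic(volts, ohms, draw):5.1f} V at the microphone")
```

For example, a microphone drawing 10 mA from a P48 supply is left with only about 14 V at its terminals, which is one reason current-hungry designs can be starved by marginal supplies, as discussed below.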
Nearly all modern mixing consoles have a switch for turning phantom power on or off; in most high-end equipment this can be done individually by channel, while on smaller mixers a single master switch may control power delivery to all channels. Phantom power can be blocked in any channel with a 1:1 isolation transformer or blocking capacitors. Phantom powering can cause equipment malfunction or even damage if used with cables or adapters that connect one side of the input to ground, or if certain equipment other than microphones is connected to it.
Instrument amplifiers rarely provide phantom power. To use equipment requiring it with these amplifiers, a separate power supply must be inserted into the line. These are readily available commercially, or alternatively are one of the easier projects for the amateur electronics constructor.
Some microphones offer a choice of internal battery powering or (external) phantom powering. In some such microphones, it is advisable to remove the internal batteries when phantom power is being used since batteries may corrode and leak chemicals. Other microphones are specifically designed to switch over to the internal batteries if an external supply fails.
Phantom powering is not always implemented correctly or adequately, even in professional-quality preamps, mixers, and recorders. In part, this is because first-generation (late-1960s through mid-1970s) 48-volt phantom-powered condenser microphones had simple circuitry and required only small amounts of operating current (typically less than 1 mA per microphone), so the phantom supply circuits typically built into recorders, mixers, and preamps of that time were designed on the assumption that this current would be adequate. The original DIN 45596 phantom-power specification called for a maximum of 2 mA. This practice has carried forward to the present; many 48-volt phantom power supply circuits, especially in low-cost and portable equipment, simply cannot supply more than 1 or 2 mA total without breaking down. Some circuits also have significant additional resistance in series with the standard pair of supply resistors for each microphone input; this may not affect low-current microphones much, but it can disable microphones that need more current.
Mid-1970s and later condenser microphones designed for 48-volt phantom powering often require much more current (e.g., 2–4 mA for Neumann transformerless microphones, 4–5 mA for the Schoeps CMC ("Colette") series and Josephson microphones, 5–6 mA for most Shure KSM-series microphones, 8 mA for CAD Equiteks and 10 mA for Earthworks). The IEC standard gives 10 mA as the maximum allowed current per microphone. If its required current is not available, a microphone may still put out a signal, but it cannot deliver its intended level of performance. The specific symptoms vary somewhat, but the most common result will be reduction of the maximum sound pressure level that the microphone can handle without overload ( distortion ). Some microphones will also show lower sensitivity (output level for a given sound-pressure level).
Most ground lift switches have the unwanted effect of disconnecting phantom power. There must always be a DC current path between pin 1 of the microphone and the negative side of the 48-volt supply if power is to reach the microphone's electronics. Lifting the ground, which is normally pin 1, breaks this path and disables the phantom power supply.
There is a common belief that connecting a dynamic or ribbon microphone to a phantom-powered input will damage it. There are three ways this damage can occur. If there is a fault in the cable, phantom power may damage some microphones by applying a voltage across the output of the microphone. [ 7 ] Equipment damage is also possible if a phantom-powered input is connected to an unbalanced dynamic microphone [ 8 ] or to an electronic musical instrument. [ 9 ] Finally, the transient generated when a microphone is hot-plugged into an input with active phantom power can damage the microphone and possibly the preamp circuit of the input, [ 10 ] because not all pins of the microphone connector make contact at the same time, and there is an instant when current can flow to charge the capacitance of the cable from one side of the phantom-powered input and not the other. This is particularly a problem with long microphone cables. It is considered good practice to disable phantom power for devices that do not require it. [ 11 ] [ 12 ]
Digital microphones complying with the AES 42 standard may be provided with phantom power at 10 volts, impressed on both audio leads and ground. This supply can furnish up to 250 mA to digital microphones. A keyed variation of the usual XLR connector , the XLD connector , may be used to prevent accidental interchange of analog and digital devices. [ 13 ]
T-power, also known as A-B powering [ 14 ] or T12, described in DIN 45595, is an alternative to phantom powering that is still widely used in the world of production film sound. Many mixers and recorders intended for that market have a T-power option. [ citation needed ] The method is considered obsolete as power supply noise is added to the output audio signal. [ 15 ] Many older Sennheiser and Schoeps microphones use this powering method, although newer recorders and mixers are phasing out this option. Adapter barrels, and dedicated power supplies, are made to accommodate T-powered microphones. In this scheme, 12 volts is applied through 180 ohm resistors between the microphone's "hot" terminal (XLR pin 2) and the microphone's "cold" terminal (XLR pin 3). This results in a 12-volt potential difference with significant current capability across pins 2 and 3, which would likely cause permanent damage if applied to a dynamic or ribbon microphone.
Plug-in-power (PiP) is the low-current 3–5 V supply provided at the microphone jack of some consumer equipment, such as portable recorders and computer sound cards . It is also defined in IEC 61938. [ 16 ] It is unlike phantom power since it is an unbalanced interface with a low voltage (around +5 volts) connected to the signal conductor with return through the sleeve; the DC power is in common with the audio signal from the microphone. A capacitor is used to block the DC from subsequent audio frequency circuits. It is often used for powering electret microphones , which will not function without power. It is suitable only for powering microphones specifically designed for use with this type of power supply. Damage may result if these microphones are connected to true (48 V) phantom power through a 3.5 mm to XLR adapter that connects the XLR shield to the 3.5 mm sleeve. [ 17 ] Plug-in-power is covered by Japanese standard CP-1203A:2007. [ 18 ]
These alternative powering schemes are sometimes improperly referred to as "phantom power" and should not be confused with true 48-volt phantom powering described above.
Some condenser microphones can be powered with a 1.5-volt cell contained in a small compartment in the microphone or in an external housing.
Phantom power is sometimes used by workers in avionics to describe the DC bias voltage used to power aviation microphones, which use a lower voltage than professional audio microphones. Phantom power used in this context is 8–16 volts DC in series with a 470 ohm (nominal) resistor as specified in RTCA Inc. standard DO-214. [ 19 ] These microphones evolved from the carbon microphones used in the early days of aviation and the telephone which relied on a DC bias voltage across the carbon microphone element.
Phantom power is also used in applications other than microphones: | https://en.wikipedia.org/wiki/Phantom_power |
Pharmaceutical Research and Manufacturers of America ( PhRMA , pronounced /ˈfɑrmə/ ), formerly known as the Pharmaceutical Manufacturers Association, [ 1 ] is an American trade group representing companies in the pharmaceutical industry . Founded in 1958, PhRMA lobbies on behalf of pharmaceutical companies. [ 2 ] [ 3 ] PhRMA is headquartered in Washington, D.C. [ 1 ]
The organization has lobbied fiercely against allowing Medicare to negotiate drug prices for Medicare recipients, [ 4 ] and filed lawsuits against the drug price provisions in the Inflation Reduction Act . [ 5 ] At the state level, the organization has lobbied to prevent price limits and greater price transparency for drugs. [ 6 ] The organization claims that higher prices incentivize research and development , even though pharmaceutical spending on marketing exceeds that spent on research, [ 7 ] including off-label promotion that has resulted in settlements in the billions of dollars . [ 8 ]
PhRMA has given substantial dark money donations to right-wing advocacy groups such as the American Action Network (which lobbied heavily against the Affordable Care Act ), Americans for Prosperity , and Americans for Tax Reform . [ 9 ]
The organization has also lobbied against lowering drug prices internationally. The most visible conflict has been over AIDS drugs in Africa . Despite the role that patents have played in maintaining higher drug costs for public health programs across Africa, the organization worked to minimize the effect of the Doha Declaration , which said that TRIPS should not prevent countries from dealing with public health crises and allowed for compulsory licenses . [ 10 ] [ 11 ] The organization also opposed a World Trade Organization TRIPS Agreement waiver during the COVID-19 pandemic , which would have reduced the price of COVID-19 vaccines for low-income countries. [ 12 ] [ 13 ]
Daniel O'Day, Chairman and Chief Executive Officer of Gilead Sciences , is chairman of the PhRMA board. Albert Bourla , DVM, PhD, Chairman and Chief Executive Officer of Pfizer , is board chair-elect, and Paul Hudson , Chief Executive Officer of Sanofi , is board treasurer. [ 14 ]
Since 2015, the president of the organization has been Stephen J. Ubl. Previous leadership includes: John J. Castellani , formerly head of the Business Roundtable , a U.S. advocacy and lobbying group, [ 15 ] Billy Tauzin , a former Republican congressman from Louisiana, and John J. Horan , former CEO and chairman of Merck & Co. [ 16 ] [ 17 ] [ 18 ] [ 19 ]
Current member companies include Alkermes , Amgen , Astellas Pharma , Bayer , Biogen , BioMarin Pharmaceutical , Boehringer Ingelheim , Bristol Myers Squibb , CSL Behring , Daiichi Sankyo , Eisai , Eli Lilly and Company , EMD Serono , Genentech , Genmab , Gilead Sciences , GlaxoSmithKline , Incyte , Ipsen , Johnson & Johnson , Lundbeck , Merck & Co. , Neurocrine Biosciences , Novartis , Novo Nordisk , Otsuka Pharmaceutical , Pfizer , Sage Therapeutics, Sanofi , Takeda Pharmaceutical Company , and UCB . [ 20 ] [ 21 ] [ 22 ] [ 23 ]
SMARxT Disposal is a joint program run by the U.S. Fish and Wildlife Service , the American Pharmacists Association , and PhRMA to encourage consumers to properly dispose of unused medicines to avoid harm to the environment. [ 24 ]
The Partnership for Prescription Assistance is a program by PhRMA and its member companies that connects patients in need with information on low-cost and free prescription medication. [ 24 ] In 2017, PhRMA raised concerns over price increases by Marathon Pharmaceuticals for an off-patent Duchenne muscular dystrophy treatment. [ 25 ]
The organization has also advocated abroad, in South Africa, regarding pharmaceutical drug intellectual property rules. [ 26 ]
In 2017, the organization had revenue of $455 million, $128 million of which was spent on lobbying activities. [ 27 ]
The organization has notably opposed market pricing strategies of Valeant Pharmaceuticals , deriding the firm as having a strategy "reflective of a hedge fund ". [ 28 ]
In January 2018, the organization introduced the "Let's Talk About Cost" website, which made the argument that much of the cost of medication goes to middlemen unassociated with pharmaceutical companies. [ 29 ] [ 27 ] | https://en.wikipedia.org/wiki/Pharmaceutical_Research_and_Manufacturers_of_America |
Pharmaceutical bioinformatics is a research field related to bioinformatics but focused on studying biological and chemical processes in the pharmaceutical area: how xenobiotics interact with the human body, and the drug discovery process.
Whereas traditional bioinformatics is a wide subject with a strong focus on molecular biology , pharmaceutical bioinformatics more specifically targets chemical–biological interactions and the exploratory study of chemical and biological interactors using, for example, cheminformatics and chemometrics methods. Methods include, apart from many general bioinformatics methods, ligand-based modeling such as Quantitative structure–activity relationship (QSAR) modeling and proteochemometrics, computer-aided molecular design, chembioinformatics databases, algorithms for chemical software, and biopharmaceutical chemistry, including analyses of biological activity and other issues related to drug discovery.
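As an illustration of the ligand-based modeling mentioned above, the following minimal sketch (Python, assuming the open-source RDKit and scikit-learn packages; the molecules and activity values are invented toy data rather than results from any real study) computes a few molecular descriptors and fits a simple QSAR-style regression against them.

```python
# Minimal QSAR-style sketch: compute a few physicochemical descriptors with
# RDKit and fit a linear model to (toy) activity values with scikit-learn.
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.linear_model import LinearRegression
import numpy as np

# Toy training set: SMILES strings with invented activity values (illustrative only).
training_data = [
    ("CCO", 1.2),                            # ethanol
    ("CC(=O)OC1=CC=CC=C1C(=O)O", 3.4),       # aspirin
    ("CN1C=NC2=C1C(=O)N(C)C(=O)N2C", 2.1),   # caffeine
    ("CC(C)CC1=CC=C(C=C1)C(C)C(=O)O", 4.0),  # ibuprofen
]

def descriptors(smiles: str) -> list[float]:
    """Return a small descriptor vector for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    return [
        Descriptors.MolWt(mol),        # molecular weight
        Descriptors.MolLogP(mol),      # calculated logP
        Descriptors.TPSA(mol),         # topological polar surface area
        Descriptors.NumHDonors(mol),   # hydrogen-bond donors
    ]

X = np.array([descriptors(smi) for smi, _ in training_data])
y = np.array([activity for _, activity in training_data])

model = LinearRegression().fit(X, y)          # fit descriptor -> activity model
print(model.predict([descriptors("CCN")]))    # predict for a new molecule (ethylamine)
```

A real QSAR study would use far larger data sets, descriptor selection, and cross-validation; the sketch only shows the descriptor-to-activity workflow.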
One of the major fields within pharmaceutical bioinformatics is the in silico prediction of the metabolism of drug candidates. This field is in turn divided into three tasks.
Several existing tools address these tasks; for example, SMARTCyp [ 1 ] and MetaPrint2D [ 2 ] predict the site of metabolism (SOM) of chemical compounds.
There are many software tools for pharmaceutical bioinformatics. An example of an open source tool is the Bioclipse workbench.
One conference specific to Pharmaceutical Bioinformatics is "International Conference on Pharmaceutical Bioinformatics" (ICPB) ( http://www.icpb.net ) | https://en.wikipedia.org/wiki/Pharmaceutical_bioinformatics |
The distribution of medications has special drug safety and security considerations. [ 1 ] Some drugs require cold chain management in their distribution. [ 2 ]
The industry uses track and trace technology , though the timings for implementation and the information required vary across different countries, with varying laws and standards. [ citation needed ]
Because governments regulate access to drugs, governments control drug distribution and the drug supply chain more than trade for other goods. [ 3 ] Distribution begins with the pharmaceutical industry manufacturing drugs. [ 3 ] From there, intermediaries in the public sector , private sector , and non-governmental organizations acquire drugs to provide them to other intermediaries. [ 3 ] Eventually, the drugs reach different classes of consumers who use them. [ 3 ]
Good distribution practice (GDP) is a quality assurance system that includes requirements for the purchase, receipt, storage, and export of drugs intended for human consumption. It regulates the division and movement of pharmaceutical products from the premises of the manufacturer of medicinal products, or another central point, to the end user, or to an intermediate point, by means of various transport methods and via various storage and/or health establishments.
In 2011, Argentina introduced a catalogue of drugs covered by its national drug traceability scheme, listing more than 3,000 drugs that require the placing of unique serial numbers and tamper-evident features on the secondary packaging. [ citation needed ] The drugs listed are recorded in real time in a central database managed by the National Administration of Drugs, Foods, Medical Devices of Argentina (ANMAT), Regulation 3683, which uses Global Location Numbers (GLNs) to identify the various actors in the supply chain. The purpose of this program is to actively limit the use of illegal drugs. [ 4 ]
The 2009 Brazilian Federal Law 11.903 and subsequent regulations of the National Agency for Sanitary Surveillance in Brazil (ANVISA) require that a 2D data matrix code be put on all secondary packaging. Under these provisions, manufacturers will be required to maintain a database of all transactions from manufacturing to dispensing, while distributors must report serialized transaction data to the manufacturer and keep a database of suppliers, medicine recipients, and packing companies. [ 5 ]
Required data elements are the national number, expiration date, batch/lot number, and serial number. [ 6 ]
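Serialized data of this kind is commonly carried in a 2D data matrix encoded with GS1 Application Identifiers. The sketch below (Python; the GTIN, expiry, batch, and serial values are invented examples, and national schemes such as ANVISA's differ in their exact data elements) shows one way the four fields listed above can be packed into and parsed from a human-readable element string.

```python
# Illustrative encoding/parsing of a serialized pack using GS1 Application
# Identifiers in human-readable form: (01) GTIN, (17) expiry YYMMDD,
# (10) batch/lot, (21) serial number.  All values below are invented examples.
import re

def encode_pack(gtin: str, expiry_yymmdd: str, batch: str, serial: str) -> str:
    """Build the human-readable GS1 element string for one pack."""
    return f"(01){gtin}(17){expiry_yymmdd}(10){batch}(21){serial}"

def decode_pack(element_string: str) -> dict[str, str]:
    """Split a human-readable GS1 element string back into its fields."""
    fields = dict(re.findall(r"\((\d{2})\)([^(]+)", element_string))
    return {
        "gtin": fields.get("01", ""),
        "expiry": fields.get("17", ""),
        "batch": fields.get("10", ""),
        "serial": fields.get("21", ""),
    }

label = encode_pack("07891234567895", "261231", "LOTE123", "SN0001")
print(label)               # (01)07891234567895(17)261231(10)LOTE123(21)SN0001
print(decode_pack(label))  # {'gtin': '07891234567895', ...}
```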
In 2008, China’s State Food and Drug Administration (CFDA) made serialization mandatory for over 275 therapeutic classes of individual saleable product units by December 2015. The CFDA does not follow an international standard. Manufacturers may only register their products and obtain their serial numbers by applying to the China Product Identification, Authentication and Tracking System (PIATS). They must also implement a quality control system with an electronic drug-monitoring system, a standardized documentation system, and bar codes to ensure pharmaceutical traceability. Companies importing drugs into China must designate a local pharmaceutical company or wholesaler as their electronic monitoring agent in the country.
In addition to legislative reforms, China has increased enforcement efforts at the provincial and local levels. In 2013, the Chinese government coordinated joint special enforcement campaigns targeting counterfeit drugs. [ 7 ] China regulations are currently on hold.
In Europe GDP is based on the Commission Directive (EU) 2017/1572 of 15 September 2017 supplementing Directive 2001/83/EC of the European Parliament and of the Council as regards the principles and guidelines of good manufacturing practice for medicinal products for human use.
In 2016, the European Medicines Agency adopted the Falsified Medicines Directive (FMD), which requires all pharmaceutical products sold in the EU to feature obligatory “safety features.” This directive is scheduled to launch in the first quarter of 2019. By February 9, 2019, all pharmaceutical companies will be required to connect their internal systems to the EU data repository, which contains the product master data and batch information. This will allow pharmacists and consumers to authenticate their medicines. [ citation needed ]
In the US, Good Manufacturing Practice (GMP) Regulations are based on the Code of Federal Regulations 21 CFR 210/211, and USP 1079.
The US Drug Supply Chain Security Act (DSCSA) was enacted by Congress on November 26, 2013 and outlines requirements for building electronic systems that identify and trace prescription drugs distributed in the US. [ 8 ] By November 27, 2023, full electronic track-and-trace capability will be required for all partners in the supply chain. [ 9 ]
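Conceptually, full electronic tracing means each serialized unit accumulates a verifiable record of custody transfers as it moves between trading partners. The minimal sketch below (Python; the partner names and identifiers are invented, and it is a conceptual illustration rather than an implementation of the DSCSA or of EPCIS event formats) records and replays such a history.

```python
# Conceptual chain-of-custody ledger keyed by serial number.  Each entry
# records a transfer of a serialized unit between trading partners.
# Invented names/identifiers; not an implementation of DSCSA or EPCIS.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Transfer:
    serial: str     # unit-level serial number
    sender: str     # trading partner shipping the unit
    receiver: str   # trading partner receiving the unit
    when: date

@dataclass
class TraceLedger:
    events: list[Transfer] = field(default_factory=list)

    def record(self, transfer: Transfer) -> None:
        self.events.append(transfer)

    def history(self, serial: str) -> list[Transfer]:
        """All recorded custody transfers for one serialized unit, in order."""
        return [e for e in self.events if e.serial == serial]

ledger = TraceLedger()
ledger.record(Transfer("SN0001", "Example Manufacturer", "Example Wholesaler", date(2023, 11, 1)))
ledger.record(Transfer("SN0001", "Example Wholesaler", "Example Pharmacy", date(2023, 11, 20)))
for event in ledger.history("SN0001"):
    print(f"{event.when}: {event.sender} -> {event.receiver}")
```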
An illegal drug trade operates to distribute illegal drugs. The trade of illegal drugs overlaps with trade in contraband of all sorts. [ 10 ] [ 11 ] Illegal drug distribution does not overlap in obvious ways with the legal trade of legal drugs. [ 12 ] | https://en.wikipedia.org/wiki/Pharmaceutical_distribution |
Pharmaceutical formulation , in pharmaceutics , is the process in which different chemical substances, including the active drug , are combined to produce a final medicinal product . The word formulation is often used in a way that includes dosage form .
Formulation studies involve developing a preparation of the drug which is both stable and acceptable to the patients. For orally administered drugs, this usually involves incorporating the drug into a tablet or a capsule . It is important to make the distinction that a tablet contains a variety of other potentially inert substances apart from the drug itself, and studies have to be carried out to ensure that the encapsulated drug is compatible [ 1 ] with these other substances in a way that does not cause harm, whether direct or indirect.
Preformulation involves the characterization of a drug's physical, chemical, and mechanical properties in order to choose what other ingredients ( excipients ) should be used in the preparation. In dealing with protein pre-formulation, the important aspect is to understand the solution behavior of a given protein under a variety of stress conditions such as freeze/thaw, temperature, shear stress among others to identify mechanisms of degradation and therefore its mitigation. [ 2 ]
Formulation studies then consider such factors as particle size , polymorphism , pH , and solubility , [ 3 ] [ 1 ] as all of these can influence bioavailability and hence the activity of a drug. The drug must be combined with inactive ingredients by a method that ensures that the quantity of drug present is consistent in each dosage unit e.g. each tablet. The dosage should have a uniform appearance, with an acceptable taste, tablet hardness, and capsule disintegration.
It is unlikely that formulation studies will be complete by the time clinical trials commence. This means that simple preparations are developed initially for use in phase I clinical trials . These typically consist of hand-filled capsules containing a small amount of the drug and a diluent . Proof of the long-term stability of these formulations is not required, as they will be used (tested) in a matter of days. Consideration has to be given to what is known as "drug loading" - the ratio of the active drug to the total contents of the dose. A low drug load may cause homogeneity problems. A high drug load may pose flow problems or require large capsules if the compound has a low bulk density .
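Drug loading is a simple ratio, and the trade-off described above can be made concrete with a short calculation. In the sketch below (Python), the example masses and the "low"/"high" thresholds are illustrative assumptions rather than pharmacopoeial limits.

```python
# Illustrative drug-loading calculation: ratio of active drug mass to the
# total mass of the dosage unit.  The masses and thresholds are examples only.

def drug_load(active_mg: float, total_mg: float) -> float:
    """Fraction of the dosage unit that is active drug."""
    return active_mg / total_mg

for active, total in [(5.0, 250.0), (200.0, 250.0)]:
    load = drug_load(active, total)
    if load < 0.05:        # assumed threshold for a "low" load
        note = "low load: blend homogeneity may be a concern"
    elif load > 0.70:      # assumed threshold for a "high" load
        note = "high load: powder flow or capsule size may be a concern"
    else:
        note = "intermediate load"
    print(f"{active:.0f} mg of {total:.0f} mg -> {load:.0%} ({note})")
```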
By the time phase III clinical trials are reached, the formulation of the drug should have been developed to be close to the preparation that will ultimately be used in the market. A knowledge of stability is essential by this stage, and conditions must have been developed to ensure that the drug is stable in the preparation. If the drug proves unstable, it will invalidate the results from clinical trials since it would be impossible to know what the administered dose actually was. Stability studies are carried out to test whether temperature , humidity , oxidation , or photolysis ( ultraviolet light or visible light ) have any effect, and the preparation is analysed to see if any degradation products have been formed.
Formulated drugs are stored in container closure systems for extended periods of time. These include blisters, bottles, vials, ampules, syringes, and cartridges. The containers can be made from a variety of materials including glass, plastic, and metal. The drug may be stored as a solid, liquid, or gas.
It's important to check whether there are any undesired interactions between the preparation and the container. For instance, if a plastic container is used, tests are carried out to see whether any of the ingredients become adsorbed on to the plastic, and whether any plasticizer , lubricants , pigments , or stabilizers leach out of the plastic into the preparation. Even the adhesives for the container label need to be tested, to ensure they do not leach through the plastic container into the preparation.
The drug form varies by the route of administration .
Examples include capsules, tablets, and pills.
Oral drugs are normally taken as tablets or capsules.
The drug ( active substance ) itself needs to be soluble in aqueous solution at a controlled rate. Such factors as particle size and crystal form can significantly affect dissolution . Fast dissolution is not always ideal. For example, slow dissolution rates can prolong the duration of action or avoid initial high plasma levels. Treatment of active ingredient by special ways such as spherical crystallization [ 4 ] can have some advantages for drug formulation.
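The link between particle size and dissolution is commonly summarised by the Noyes–Whitney relation, a standard result in pharmaceutics that the article itself does not cite; a sketch of the relation in LaTeX:

```latex
% Noyes–Whitney relation for the dissolution rate of a solid drug:
\frac{dC}{dt} = \frac{D\,A}{h}\,\bigl(C_s - C\bigr)
% D   : diffusion coefficient of the drug in the dissolution medium
% A   : surface area of the dissolving solid (larger when the same mass
%       is divided into smaller particles)
% h   : thickness of the diffusion boundary layer
% C_s : saturation solubility;  C : bulk concentration
```

Because A grows as particle size shrinks, milling or otherwise reducing particle size raises the dissolution rate, which is why particle size and crystal form appear among the key formulation variables.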
A tablet is usually a compressed preparation that contains:
The dissolution time can be modified for a rapid effect or for sustained release .
Special coatings can make the tablet resistant to the stomach acids such that it only disintegrates in the duodenum , jejunum and colon as a result of enzyme action or alkaline pH .
Pills can be coated with sugar , varnish , or wax to disguise the taste . Pharmaceutical ingredients such as APIs can also be coated with a ResonantAcoustic mixer for controlled release and taste-masking. [ 5 ]
A capsule is a gelatinous envelope enclosing the active substance. Capsules can be designed to remain intact for some hours after ingestion in order to delay absorption . They may also contain a mixture of slow- and fast-release particles to produce rapid and sustained absorption in the same dose .
There are a number of methods by which tablets and capsules can be modified in order to allow for sustained release of the active compound as it moves through the digestive tract . One of the most common methods is to embed the active ingredient in an insoluble porous matrix, such that the dissolving drug must make its way out of the matrix before it can be absorbed. In other sustained release formulations the matrix swells to form a gel through which the drug exits.
Another method by which sustained release is achieved is through an osmotic controlled-release oral delivery system , where the active compound is encased in a water-permeable membrane with a laser drilled hole at one end. As water passes through the membrane the drug is pushed out through the hole and into the digestive tract where it can be absorbed.
These are also called injectable formulations and are used with intravenous , subcutaneous , intramuscular , and intra-articular administration. The drug is stored in liquid or if unstable, lyophilized form.
Many parenteral formulations are unstable at higher temperatures and require storage at refrigerated or sometimes frozen conditions. The logistics process of delivering these drugs to the patient is called the cold chain . The cold chain can interfere with delivery of drugs, especially vaccines, to communities where electricity is unpredictable or nonexistent. NGOs like the Gates Foundation are actively working to find solutions. These may include lyophilized formulations which are easier to stabilize at room temperature.
Most protein formulations are parenteral due to the fragile nature of the molecule which would be destroyed by enteric administration. Proteins have tertiary and quaternary structures that can be degraded or cause aggregation at room temperature. This can impact the safety and efficacy of the medicine. [ 6 ]
Liquid drugs are stored in vials , IV bags, ampoules , cartridges, and prefilled syringes.
As with solid formulations, liquid formulations combine the drug product with a variety of compounds to ensure a stable active medication following storage. These include solubilizers, stabilizers, buffers , tonicity modifiers, bulking agents, viscosity enhancers/reducers, surfactants , chelating agents , and adjuvants and also lipid-based carriers. [ 7 ]
If concentrated by evaporation , the drug may be diluted before administration. For IV administration , the drug may be transferred from a vial to an IV bag and mixed with other materials.
Lyophilized drugs are stored in vials, cartridges, dual chamber syringes, and prefilled mixing systems.
Lyophilization , or freeze drying , is a process that removes water from a liquid drug creating a solid powder, or cake. The lyophilized product is stable for extended periods of time and could allow storage at higher temperatures. In protein formulations, stabilizers are added to replace the water and preserve the structure of the molecule. [ 8 ]
A lyophilized drug is reconstituted as a liquid before being administered. This is done by combining a liquid diluent with the freeze-dried powder, mixing, and then injecting. Reconstitution usually requires a reconstitution and delivery system to ensure that the drug is correctly mixed and administered.
Options for topical formulation include: [ 9 ] | https://en.wikipedia.org/wiki/Pharmaceutical_formulation |
The pharmaceutical industry is a medical industry that discovers, develops, produces, and markets pharmaceutical goods such as medications and medical devices . Medications are then administered to (or self-administered by) patients for curing or preventing disease or for alleviating symptoms of illness or injury. [ 1 ] [ 2 ]
Pharmaceutical companies may deal in generic drugs , branded drugs, or both, in different contexts. Generic drugs are sold without intellectual property protection, whereas branded drugs are protected by chemical patents . The industry's various subdivisions include distinct areas, such as the manufacture of biologics and total synthesis . The industry is subject to a variety of laws and regulations that govern the patenting , efficacy testing, safety evaluation , and marketing of these drugs. The global pharmaceutical market produced treatments worth a total of $1,228.45 billion in 2020. The sector showed a compound annual growth rate (CAGR) of 1.8% in 2021, including the effects of the COVID-19 pandemic . [ 3 ]
In historical terms, the pharmaceutical industry, as an intellectual concept , arose in the middle to late 1800s in nation-states with developed economies such as Germany, Switzerland, and the United States. Some businesses engaging in synthetic organic chemistry , such as several firms generating dyestuffs derived from coal tar on a large scale, were seeking out new applications for their artificial materials in terms of human health. This trend of increased capital investment occurred in tandem with the scholarly study of pathology as a field advancing significantly, and a variety of businesses set up cooperative relationships with academic laboratories evaluating human injury and disease. Examples of industrial companies with a pharmaceutical focus that have endured to this day after such distant beginnings include Bayer (based out of Germany) and Pfizer (based out of the U.S.). [ 4 ]
The pharmaceutical industry has faced extensive criticism for its marketing practices, including undue influence on physicians through pharmaceutical sales representatives , biased continuing medical education , and disease mongering to expand markets. Pharmaceutical lobbying has made it one of the most powerful influences on health policy , particularly in the United States . There are documented cases of pharmaceutical fraud , including off-label promotion and kickbacks , resulting in multi-billion dollar settlements. Drug pricing continues to be a major issue, with many unable to afford essential prescription drugs . Regulatory agencies like the FDA have been accused of being too lenient due to revolving doors with industry. During the COVID-19 pandemic , major pharmaceutical companies received public funding while retaining intellectual property rights, prompting calls for greater transparency and access.
The modern era of the pharmaceutical industry began with local apothecaries that expanded their traditional role of distributing botanical drugs such as morphine and quinine to wholesale manufacture in the mid-1800s. Intentional drug discovery from plants began with the extraction of morphine – an analgesic and sleep-inducing agent – from opium by the German apothecary assistant Friedrich Sertürner somewhere between 1803 and 1805. Sertürner later named this compound after the Greek god of dreams, Morpheus . Multinational corporations including Merck , Hoffman-La Roche , Burroughs-Wellcome (now part of GSK ), Abbott Laboratories , Eli Lilly , and Upjohn (now part of Pfizer ) began as local apothecary shops in the mid-1800s. By the late 1880s, German dye manufacturers had perfected the purification of individual organic compounds from tar and other mineral sources and had also established rudimentary methods in organic chemical synthesis . [ 4 ] The development of synthetic chemical methods allowed scientists to systematically vary the structure of chemical substances, and growth in the emerging science of pharmacology expanded their ability to evaluate the biological effects of these structural changes. [ citation needed ]
By the 1890s, the profound effect of adrenal extracts on many different tissue types had been discovered, setting off a search both for the mechanism of chemical signaling and efforts to exploit these observations for the development of new drugs. The blood pressure raising and vasoconstrictive effects of adrenal extracts were of particular interest to surgeons as hemostatic agents and as a treatment for shock, and several companies developed products based on adrenal extracts containing varying purities of the active substance. In 1897, John Abel at the Johns Hopkins University identified the active substance as epinephrine , which he isolated in an impure state as the sulfate salt. Industrial chemist Jōkichi Takamine later developed a method for obtaining epinephrine in a pure state and licensed the technology to Parke-Davis . Parke-Davis marketed epinephrine under the trade name Adrenalin . Injected epinephrine proved to be especially efficacious for the acute treatment of asthma attacks, and an inhaled version was sold in the United States until 2011 ( Primatene Mist ). [ 5 ] [ 6 ] By 1929 epinephrine had been formulated into an inhaler for use in the treatment of nasal congestion.
While highly effective, the requirement for injection limited the use of epinephrine [ clarification needed ] and orally active derivatives were sought. A structurally similar compound, ephedrine , was identified by Japanese chemists in the Ma Huang plant and marketed by Eli Lilly as an oral treatment for asthma. Following the work of Henry Dale and George Barger at Burroughs-Wellcome , academic chemist Gordon Alles synthesized amphetamine and tested it in asthma patients in 1929. The drug proved to have only modest anti-asthma effects but produced sensations of exhilaration and palpitations. Amphetamine was developed by Smith, Kline and French as a nasal decongestant under the trade name Benzedrine Inhaler. Amphetamine was eventually developed for the treatment of narcolepsy , post-encephalitic parkinsonism , and mood elevation in depression and other psychiatric indications. It received approval as a New and Nonofficial Remedy from the American Medical Association for these uses in 1937, [ 7 ] and remained in common use for depression until the development of tricyclic antidepressants in the 1960s. [ 6 ]
In 1903, Hermann Emil Fischer and Joseph von Mering disclosed their discovery that diethylbarbituric acid, formed from the reaction of diethylmalonic acid, phosphorus oxychloride and urea, induces sleep in dogs. The discovery was patented and licensed to Bayer pharmaceuticals , which marketed the compound under the trade name Veronal as a sleep aid beginning in 1904. Systematic investigations of the effect of structural changes on potency and duration of action led to the discovery of phenobarbital at Bayer in 1911 and the discovery of its potent anti-epileptic activity in 1912. Phenobarbital was among the most widely used drugs for the treatment of epilepsy through the 1970s, and as of 2014, remains on the World Health Organization's list of essential medications. [ 8 ] [ 9 ]
The 1950s and 1960s saw increased awareness of the addictive properties and abuse potential of barbiturates and amphetamines and led to increasing restrictions on their use and growing government oversight of prescribers. Today, amphetamine is largely restricted to use in the treatment of attention deficit disorder and phenobarbital in the treatment of epilepsy . [ 10 ] [ 11 ]
In 1958, Leo Sternbach discovered the first benzodiazepine , chlordiazepoxide (Librium). Dozens of other benzodiazepines have been developed and are in use, some of the more popular drugs being diazepam (Valium), alprazolam (Xanax), clonazepam (Klonopin), and lorazepam (Ativan). Due to their far superior safety and therapeutic properties, benzodiazepines have largely replaced the use of barbiturates in medicine, except in certain special cases. When it was later discovered that benzodiazepines, like barbiturates, significantly lose their effectiveness and can have serious side effects when taken long-term, Heather Ashton researched benzodiazepine dependence and developed a protocol to discontinue their use. [ citation needed ]
A series of experiments performed from the late 1800s to the early 1900s revealed that diabetes is caused by the absence of a substance normally produced by the pancreas. In 1889, Oskar Minkowski and Joseph von Mering found that diabetes could be induced in dogs by surgical removal of the pancreas. In 1921, Canadian professor Frederick Banting and his student Charles Best repeated this study and found that injections of pancreatic extract reversed the symptoms produced by pancreas removal. Soon, the extract was demonstrated to work in humans, but the development of insulin therapy as a routine medical procedure was delayed by difficulties in producing the material in sufficient quantity and with reproducible purity. The researchers sought assistance from industrial collaborators at Eli Lilly and Co. based on the company's experience with large-scale purification of biological materials. Chemist George B. Walden of Eli Lilly and Company found that careful adjustment of the pH of the extract allowed a relatively pure grade of insulin to be produced. Under pressure from Toronto University and a potential patent challenge by academic scientists who had independently developed a similar purification method, an agreement was reached for the non-exclusive production of insulin by multiple companies. Before the discovery and widespread availability of insulin therapy, the life expectancy of diabetics was only a few months. [ 12 ]
The development of drugs for the treatment of infectious diseases was a major focus of early research and development efforts; in 1900, pneumonia, tuberculosis, and diarrhea were the three leading causes of death in the United States and mortality in the first year of life exceeded 10%. [ 13 ] [ 14 ] [ failed verification ]
In 1911 arsphenamine , the first synthetic anti-infective drug, was developed by Paul Ehrlich and chemist Alfred Bertheim of the Institute of Experimental Therapy in Berlin. The drug was given the commercial name Salvarsan. [ 15 ] Ehrlich, noting both the general toxicity of arsenic and the selective absorption of certain dyes by bacteria, hypothesized that an arsenic-containing dye with similar selective absorption properties could be used to treat bacterial infections. Arsphenamine was prepared as part of a campaign to synthesize a series of such compounds and exhibited partially selective toxicity. Arsphenamine proved to be the first effective treatment for syphilis , a disease that until then had been incurable and led inexorably to severe skin ulceration, neurological damage, and death. [ 16 ]
Ehrlich's approach of systematically varying the chemical structure of synthetic compounds and measuring the effects of these changes on biological activity was pursued broadly by industrial scientists, including Bayer scientists Josef Klarer, Fritz Mietzsch, and Gerhard Domagk . This work, also based on the testing of compounds available from the German dye industry, led to the development of Prontosil , the first representative of the sulfonamide class of antibiotics . Compared to arsphenamine, the sulfonamides had a broader spectrum of activity and were far less toxic, rendering them useful for infections caused by pathogens such as streptococci . [ 17 ] In 1939, Domagk received the Nobel Prize in Medicine for this discovery. [ 18 ] [ 19 ] Nonetheless, the dramatic decrease in deaths from infectious diseases that occurred before World War II was primarily the result of improved public health measures such as clean water and less crowded housing, and the impact of anti-infective drugs and vaccines was significant mainly after World War II. [ 20 ] [ 21 ]
In 1928, Alexander Fleming discovered the antibacterial effects of penicillin , but its exploitation for the treatment of human disease awaited the development of methods for its large-scale production and purification. These were developed by a U.S. and British government-led consortium of pharmaceutical companies during World War II. [ 22 ]
There was early progress toward the development of vaccines throughout this period, primarily in the form of academic and government-funded basic research directed toward the identification of the pathogens responsible for common communicable diseases. In 1885, Louis Pasteur and Pierre Paul Émile Roux created the first rabies vaccine . The first diphtheria vaccines were produced in 1914 from a mixture of diphtheria toxin and antitoxin (produced from the serum of an inoculated animal), but the safety of the inoculation was marginal and it was not widely used. The United States recorded 206,000 cases of diphtheria in 1921, resulting in 15,520 deaths. In 1923, parallel efforts by Gaston Ramon at the Pasteur Institute and Alexander Glenny at the Wellcome Research Laboratories (later part of GlaxoSmithKline ) led to the discovery that a safer vaccine could be produced by treating diphtheria toxin with formaldehyde . [ 23 ] In 1944, Maurice Hilleman of Squibb Pharmaceuticals developed the first vaccine against Japanese Encephalitis . [ 24 ] Hilleman later moved to Merck , where he played a key role in the development of vaccines against measles , mumps , chickenpox , rubella , hepatitis A , hepatitis B , and meningitis .
Prior to the 20th century, drugs were generally produced by small scale manufacturers with little regulatory control over manufacturing or claims of safety and efficacy. To the extent that such laws did exist, enforcement was lax. In the United States, increased regulation of vaccines and other biological drugs was spurred by tetanus outbreaks and deaths caused by the distribution of contaminated smallpox vaccine and diphtheria antitoxin. [ 25 ] The Biologics Control Act of 1902 required that federal government grant premarket approval for every biological drug and for the process and facility producing such drugs. This Act was followed in 1906 by the Pure Food and Drugs Act , which forbade the interstate distribution of adulterated or misbranded foods and drugs. A drug was considered misbranded if it contained alcohol, morphine, opium, cocaine, or any of several other potentially dangerous or addictive drugs, and if its label failed to indicate the quantity or proportion of such drugs. The government's attempts to use the law to prosecute manufacturers for making unsupported claims of efficacy were undercut by a Supreme Court ruling restricting the federal government's enforcement powers to cases of incorrect specification of the drug's ingredients. [ 26 ]
In 1937 over 100 people died after ingesting " Elixir Sulfanilamide " manufactured by S.E. Massengill Company of Tennessee. The product was formulated in diethylene glycol , a highly toxic solvent that is now widely used as antifreeze. [ 27 ] Under the laws extant at that time, prosecution of the manufacturer was possible only under the technicality that the product had been called an "elixir", which implied a solution in ethanol. In response to this episode, the U.S. Congress passed the Federal Food, Drug, and Cosmetic Act of 1938 (FD&C Act), which for the first time required pre-market demonstration of safety before a drug could be sold, and explicitly prohibited false therapeutic claims. [ 28 ]
The aftermath of World War II saw an explosion in the discovery of new classes of antibacterial drugs [ 29 ] including the cephalosporins (developed by Eli Lilly based on the seminal work of Giuseppe Brotzu and Edward Abraham ), [ 30 ] [ 31 ] streptomycin (discovered during a Merck-funded research program in Selman Waksman's laboratory [ 32 ] ), the tetracyclines [ 33 ] (discovered at Lederle Laboratories, now a part of Pfizer ), erythromycin (discovered at Eli Lilly and Co.) [ 34 ] and their extension to an increasingly wide range of bacterial pathogens. Streptomycin, discovered during a Merck-funded research program in Selman Waksman's laboratory at Rutgers in 1943, became the first effective treatment for tuberculosis. At the time of its discovery, sanitoriums for the isolation of tuberculosis-infected people were a ubiquitous feature of cities in developed countries, with 50% dying within 5 years of admission. [ 32 ] [ 35 ]
A Federal Trade Commission report issued in 1958 attempted to quantify the effect of antibiotic development on American public health. The report found that over the period 1946–1955, there was a 42% drop in the incidence of diseases for which antibiotics were effective and only a 20% drop in those for which antibiotics were not effective. The report concluded that "it appears that the use of antibiotics, early diagnosis, and other factors have limited the epidemic spread and thus the number of these diseases which have occurred". The study further examined mortality rates for eight common diseases for which antibiotics offered effective therapy (syphilis, tuberculosis, dysentery, scarlet fever, whooping cough, meningococcal infections, and pneumonia), and found a 56% decline over the same period. [ 36 ] Notable among these was a 75% decline in deaths due to tuberculosis. [ 37 ]
During the years 1940–1955, the rate of decline in the U.S. death rate accelerated from 2% per year to 8% per year, then returned to the historical rate of 2% per year. The dramatic decline in the immediate post-war years has been attributed to the rapid development of new treatments and vaccines for infectious disease that occurred during these years. [ 39 ] [ 21 ]
Vaccine development continued to accelerate, with the most notable achievement of the period being Jonas Salk 's 1954 development of the polio vaccine under the funding of the non-profit National Foundation for Infantile Paralysis . The vaccine process was never patented but was instead given to pharmaceutical companies to manufacture as a low-cost generic . In 1960 Maurice Hilleman of Merck Sharp & Dohme identified the SV40 virus, which was later shown to cause tumors in many mammalian species. It was later determined that SV40 was present as a contaminant in polio vaccine lots that had been administered to 90% of the children in the United States. [ 40 ] [ 41 ] The contamination appears to have originated both in the original cell stock and in monkey tissue used for production. In 2004 the National Cancer Institute announced that it had concluded that SV40 is not associated with cancer in people. [ 42 ]
Other notable new vaccines of the period include those for measles (1962, John Franklin Enders of Children's Medical Center Boston, later refined by Maurice Hilleman at Merck), Rubella (1969, Hilleman, Merck) and mumps (1967, Hilleman, Merck) [ 43 ] The United States incidences of rubella, congenital rubella syndrome, measles, and mumps all fell by >95% in the immediate aftermath of widespread vaccination. [ 44 ] The first 20 years of licensed measles vaccination in the U.S. prevented an estimated 52 million cases of the disease, 17,400 cases of mental retardation , and 5,200 deaths. [ 45 ]
Hypertension is a risk factor for atherosclerosis, [ 46 ] heart failure , [ 47 ] coronary artery disease , [ 48 ] [ 49 ] stroke , [ 50 ] renal disease , [ 51 ] [ 52 ] and peripheral arterial disease , [ 53 ] [ 54 ] and is the most important risk factor for cardiovascular morbidity and mortality , in industrialized countries. [ 55 ] Prior to 1940 approximately 23% of all deaths among persons over age 50 were attributed to hypertension. Severe cases of hypertension were treated by surgery. [ 56 ]
Early developments in the field of treating hypertension included quaternary ammonium ion sympathetic nervous system blocking agents, but these compounds were never widely used due to their severe side effects, because the long-term health consequences of high blood pressure had not yet been established, and because they had to be administered by injection.
In 1952 researchers at CIBA (Gesellschaft für Chemische Industrie in Basel, predecessor to Novartis ) discovered the first orally available vasodilator, hydralazine . [ 57 ] A major shortcoming of hydralazine monotherapy was that it lost its effectiveness over time ( tachyphylaxis ). In the mid-1950s Karl H. Beyer, James M. Sprague, John E. Baer, and Frederick C. Novello of Merck and Co. discovered and developed chlorothiazide , which remains the most widely used antihypertensive drug today. [ 58 ] This development was associated with a substantial decline in the mortality rate among people with hypertension. [ 59 ] The inventors were recognized by a Public Health Lasker Award in 1975 for "the saving of untold thousands of lives and the alleviation of the suffering of millions of victims of hypertension". [ 60 ]
A 2009 Cochrane review concluded that thiazide antihypertensive drugs reduce the risk of death ( RR 0.89), stroke (RR 0.63), coronary heart disease (RR 0.84), and cardiovascular events (RR 0.70) in people with high blood pressure. [ 61 ] In the ensuing years other classes of the antihypertensive drug were developed and found wide acceptance in combination therapy, including loop diuretics (Lasix/ furosemide , Hoechst Pharmaceuticals , 1963), [ 62 ] beta blockers ( ICI Pharmaceuticals , 1964) [ 63 ] ACE inhibitors , and angiotensin receptor blockers . ACE inhibitors reduce the risk of new-onset kidney disease [RR 0.71] and death [RR 0.84] in diabetic patients, irrespective of whether they have hypertension. [ 64 ]
Prior to World War II, birth control was prohibited in many countries, and in the United States even the discussion of contraceptive methods sometimes led to prosecution under Comstock laws . The history of the development of oral contraceptives is thus closely tied to the birth control movement and the efforts of activists Margaret Sanger , Mary Dennett , and Emma Goldman . Based on fundamental research performed by Gregory Pincus and synthetic methods for progesterone developed by Carl Djerassi at Syntex and by Frank Colton at G.D. Searle & Co. , the first oral contraceptive, Enovid , was developed by G.D. Searle & Co. and approved by the FDA in 1960. The original formulation incorporated vastly excessive doses of hormones and caused severe side effects. Nonetheless, by 1962, 1.2 million American women were on the pill, and by 1965 the number had increased to 6.5 million. [ 65 ] [ 66 ] [ 67 ] [ 68 ] The availability of a convenient form of temporary contraceptive led to dramatic changes in social mores including expanding the range of lifestyle options available to women, reducing the reliance of women on men for contraceptive practice, encouraging the delay of marriage, and increasing pre-marital co-habitation. [ 69 ]
In the U.S., a push for revisions of the FD&C Act emerged from Congressional hearings led by Senator Estes Kefauver of Tennessee in 1959. The hearings covered a wide range of policy issues, including advertising abuses, questionable efficacy of drugs, and the need for greater regulation of the industry. While momentum for new legislation temporarily flagged under extended debate, a new tragedy emerged that underscored the need for more comprehensive regulation and provided the driving force for the passage of new laws.
On 12 September 1960, an American licensee, the William S. Merrell Company of Cincinnati, submitted a new drug application for Kevadon ( thalidomide ), a sedative that had been marketed in Europe since 1956. The FDA medical officer in charge of reviewing the compound, Frances Kelsey , believed that the data supporting the safety of thalidomide was incomplete. The firm continued to pressure Kelsey and the FDA to approve the application until November 1961, when the drug was pulled off the German market because of its association with grave congenital abnormalities. Several thousand newborns in Europe and elsewhere suffered the teratogenic effects of thalidomide. Without approval from the FDA, the firm distributed Kevadon to over 1,000 physicians in the United States under the guise of investigational use. Over 20,000 Americans received thalidomide in this "study," including 624 pregnant patients, and 17 newborns are known to have suffered the effects of the drug. [ citation needed ]
The thalidomide tragedy resurrected Kefauver's bill to enhance drug regulation that had stalled in Congress, and the Kefauver-Harris Amendment became law on 10 October 1962. Manufacturers henceforth had to prove to the FDA that their drugs were effective as well as safe before they could go on the US market. The FDA received authority to regulate the advertising of prescription drugs and to establish good manufacturing practices . The law also required that all drugs introduced between 1938 and 1962 be shown to be effective. A collaborative study by the FDA and the National Academy of Sciences showed that nearly 40 percent of these products were not effective. A similarly comprehensive study of over-the-counter products began ten years later. [ 70 ]
In 1971, Akira Endo , a Japanese biochemist working for the pharmaceutical company Sankyo , identified mevastatin (ML-236B), a molecule produced by the fungus Penicillium citrinum , as an inhibitor of HMG-CoA reductase , a critical enzyme used by the body to produce cholesterol . Animal trials showed very good inhibitory effects, as did clinical trials ; however, a long-term study in dogs found toxic effects at higher doses, and as a result mevastatin was believed to be too toxic for human use. Mevastatin was never marketed because of its adverse effects of tumors, muscle deterioration, and sometimes death in laboratory dogs.
P. Roy Vagelos , chief scientist and later CEO of Merck & Co , was interested and made several trips to Japan starting in 1975. By 1978, Merck had isolated lovastatin (mevinolin, MK803) from the fungus Aspergillus terreus , first marketed in 1987 as Mevacor. [ 71 ] [ 72 ] [ 73 ]
In April 1994, the results of a Merck-sponsored study, the Scandinavian Simvastatin Survival Study , were announced. Researchers tested simvastatin , later sold by Merck as Zocor, on 4,444 patients with high cholesterol and heart disease. After five years, the study concluded that patients saw a 35% reduction in their cholesterol, and their chances of dying of a heart attack were reduced by 42%. [ 74 ] In 1995, Zocor and Mevacor both made Merck over US$1 billion. Endo was awarded the 2006 Japan Prize , and the Lasker-DeBakey Clinical Medical Research Award in 2008 for his "pioneering research into a new class of molecules" for "lowering cholesterol". [ 75 ] [ 76 ]
Over the past several decades, biologics have been rising in importance relative to small-molecule treatments. The biotech subsector, animal health, and the Chinese pharmaceutical sector have also grown substantially. On the organisational side, big international pharmaceutical corporations have experienced a substantial decline in their share of the industry's value. The core generics sector (substitutes for off-patent brands) has also lost value owing to competition. [ 77 ]
Torreya estimated that the pharmaceutical industry had a market valuation of US$7.03 trillion as of February 2021, of which US$6.1 trillion was the value of publicly traded companies. The small-molecule modality accounted for 58.2% of the valuation, down from 84.6% in 2003, while biologics rose to 30.5% from 14.5%. Between 2003 and 2021 the valuation share of Chinese pharma grew from 1% to 12%, overtaking Switzerland, which is now ranked third with 7.7%. The United States still had by far the most highly valued pharmaceutical industry, with 40% of the global valuation. [ 78 ] 2023 was a year of layoffs for at least 10,000 people across 129 public biotech firms globally, albeit mostly at small firms; this was a significant increase in reductions versus 2022, due in part to worsening global financial conditions and a reduction in investment by "generalist investors". [ 79 ] Private firms also saw a significant reduction in venture capital investment in 2023, continuing a downward trend started in 2021, which also led to a reduction in initial public offerings being floated. [ 79 ]
A 2022 article on pharmaceutical mergers and acquisitions (M&A) put the importance of dealmaking succinctly: "In the business of drug development, deals can be just as important as scientific breakthroughs". [ 80 ] It highlighted that some of the most impactful treatments of the early 21st century were only made possible through M&A activity, specifically noting Keytruda and Humira . [ 80 ]
Drug discovery is the process by which potential drugs are discovered or designed. In the past, most drugs have been discovered either by isolating the active ingredient from traditional remedies or by serendipitous discovery. Modern biotechnology often focuses on understanding the metabolic pathways related to a disease state or pathogen , and manipulating these pathways using molecular biology or biochemistry . A great deal of early-stage drug discovery has traditionally been carried out by universities and research institutions.
Drug development refers to activities undertaken after a compound is identified as a potential drug in order to establish its suitability as a medication. Objectives of drug development are to determine appropriate formulation and dosing , as well as to establish safety . Research in these areas generally includes a combination of in vitro studies, in vivo studies, and clinical trials . The cost of late-stage development means it is usually carried out by the larger pharmaceutical companies. [ 81 ] The pharmaceutical and biotechnology industry spends more than 15% of its net sales on research and development, by far the highest share of any industry. [ 82 ]
Often, large multinational corporations exhibit vertical integration , participating in a broad range of drug discovery and development, manufacturing and quality control, marketing, sales, and distribution. Smaller organizations, on the other hand, often focus on a specific aspect such as discovering drug candidates or developing formulations. Often, collaborative agreements between research organizations and large pharmaceutical companies are formed to explore the potential of new drug substances. More recently, multi-nationals are increasingly relying on contract research organizations to manage drug development. [ 83 ]
Drug discovery and development are very expensive; of all compounds investigated for use in humans, only a small fraction are eventually approved by the government-appointed medical institutions or boards that must authorize new drugs before they can be marketed in their countries. In 2010 the FDA approved 18 NMEs (new molecular entities) and three biologics, 21 in total, down from 26 in 2009 and 24 in 2008. By contrast, there were only 18 approvals in total in 2007 and 22 in 2006. Since 2001, the Center for Drug Evaluation and Research has averaged 22.9 approvals a year. [ 84 ] Approval comes only after heavy investment in pre-clinical development and clinical trials , as well as a commitment to ongoing safety monitoring . Drugs which fail part-way through this process often incur large costs, while generating no revenue in return. If the cost of these failed drugs is taken into account, the cost of developing a successful new drug ( new chemical entity , or NCE) has been estimated at US$1.3 billion [ 85 ] (not including marketing expenses ). Professors Light and Lexchin reported in 2012, however, that the rate of approval for new drugs has been a relatively stable average of 15 to 25 per year for decades. [ 86 ]
Industry-wide research and investment reached a record $65.3 billion in 2009. [ 87 ] While the cost of research in the U.S. rose by about $34.2 billion between 1995 and 2010, revenues grew faster, rising by $200.4 billion over the same period. [ 86 ]
A study by the consulting firm Bain & Company reported that the cost for discovering, developing and launching a new drug (which factored in marketing and other business expenses, along with the prospective drugs that fail) rose over a five-year period to nearly $1.7 billion in 2003. [ 88 ] According to Forbes, by 2010 development costs were between $4 billion and $11 billion per drug. [ 89 ]
Some of these estimates also take into account the opportunity cost of investing capital many years before revenues are realized (see Time-value of money ). Because of the very long time needed for the discovery, development, and approval of pharmaceuticals, these costs can accumulate to nearly half the total expense. A direct consequence within the pharmaceutical industry value chain is that major pharmaceutical multinationals tend increasingly to outsource risks related to fundamental research, which somewhat reshapes the industry ecosystem, with biotechnology companies playing an increasingly important role and overall strategies being redefined accordingly. [ 90 ] Some approved drugs, such as those based on re-formulation of an existing active ingredient (also referred to as line extensions), are much less expensive to develop.
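To make the time-value-of-money point concrete, the following is a minimal sketch that capitalizes hypothetical out-of-pocket R&D outlays forward to the approval date; the spending profile, 11% cost of capital, and 12-year timeline are assumptions for illustration only, not figures from the studies cited above.

```python
# Illustrative only: capitalizing hypothetical R&D outlays to the approval date.
# The spending profile, 11% cost of capital, and 12-year timeline are assumed
# demonstration values, not figures from the cited cost studies.
def capitalized_cost(annual_outlays, cost_of_capital, years_to_approval):
    """Compound each year's outlay forward to the approval date."""
    total = 0.0
    for year, outlay in enumerate(annual_outlays):
        years_remaining = years_to_approval - year
        total += outlay * (1 + cost_of_capital) ** years_remaining
    return total

outlays = [50, 60, 80, 100, 120, 120, 100, 80, 60, 40, 30, 20]  # $M per year (hypothetical)
out_of_pocket = sum(outlays)
capitalized = capitalized_cost(outlays, cost_of_capital=0.11, years_to_approval=len(outlays))
print(f"Out-of-pocket: ${out_of_pocket:.0f}M, capitalized: ${capitalized:.0f}M")
# The gap between the two figures is the opportunity cost of tying up capital
# for years before any revenue is realized.
```

With these assumed numbers the capitalized figure is roughly twice the out-of-pocket spend, which is how opportunity cost can account for close to half of the total estimated expense.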
In the United States, new pharmaceutical products must be approved by the Food and Drug Administration (FDA) as being both safe and effective. This process generally involves the submission of an Investigational New Drug (IND) filing with sufficient pre-clinical data to support proceeding with human trials. Following IND approval, three phases of progressively larger human clinical trials may be conducted. Phase I generally studies toxicity using healthy volunteers. Phase II can include pharmacokinetics and dosing in patients, and Phase III is a very large study of efficacy in the intended patient population. Following the successful completion of Phase III testing, a New Drug Application is submitted to the FDA. The FDA reviews the data and if the product is seen as having a positive benefit-risk assessment, approval to market the product in the US is granted. [ 91 ]
A fourth phase of post-approval surveillance is also often required because even the largest clinical trials cannot effectively predict the prevalence of rare side effects. Postmarketing surveillance ensures that after marketing the safety of a drug is monitored closely. In certain instances, its indication may need to be limited to particular patient groups, and in others the substance is withdrawn from the market completely.
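As a rough structural sketch of the pathway described in the two paragraphs above, the stages can be modeled as an ordered sequence; the purpose labels simply paraphrase the text and are not regulatory definitions.

```python
# Descriptive sketch of the US approval pathway discussed above; the purposes
# paraphrase the surrounding text rather than any regulatory definition.
from collections import OrderedDict

approval_pathway = OrderedDict([
    ("IND filing",  "pre-clinical data supporting the start of human trials"),
    ("Phase I",     "toxicity, generally in healthy volunteers"),
    ("Phase II",    "pharmacokinetics and dosing in patients"),
    ("Phase III",   "large efficacy study in the intended patient population"),
    ("NDA review",  "FDA benefit-risk assessment and marketing approval"),
    ("Phase IV",    "post-approval surveillance for rare side effects"),
])

for step, purpose in approval_pathway.items():
    print(f"{step}: {purpose}")
```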
The FDA provides information about approved drugs at the Orange Book site. [ 92 ]
In the UK, the Medicines and Healthcare products Regulatory Agency (MHRA) approves and evaluates drugs for use. Normally an approval in the UK and other European countries comes later than one in the USA. Then it is the National Institute for Health and Care Excellence (NICE), for England and Wales, who decides if and how the National Health Service (NHS) will allow (in the sense of paying for) their use. The British National Formulary is the core guide for pharmacists and clinicians.
In many non-US western countries, a 'fourth hurdle' of cost effectiveness analysis has developed before new technologies can be provided. This focuses on the 'efficacy price tag' (in terms of, for example, the cost per QALY ) of the technologies in question. In England and Wales NICE decides whether and in what circumstances drugs and technologies will be made available by the NHS, whilst similar arrangements exist with the Scottish Medicines Consortium in Scotland, and the Pharmaceutical Benefits Advisory Committee in Australia. A product must pass the threshold for cost-effectiveness if it is to be approved. Treatments must represent 'value for money' and a net benefit to society.
There are special rules for certain rare diseases ("orphan diseases") in several major drug regulatory territories. For example, diseases involving fewer than 200,000 patients in the United States, or larger populations in certain circumstances, are subject to the Orphan Drug Act. [ 93 ] Because medical research and development of drugs to treat such diseases is financially disadvantageous, companies that do so are rewarded with tax reductions, fee waivers, and market exclusivity on that drug for a limited time (seven years), regardless of whether the drug is protected by patents.
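A minimal sketch of the prevalence threshold and exclusivity period mentioned above; real orphan-drug designation involves many more criteria than this simple test.

```python
# Hedged illustration of the Orphan Drug Act figures cited above: a US
# prevalence threshold of 200,000 patients and seven years of market
# exclusivity. Actual designation decisions involve further criteria.
ORPHAN_PREVALENCE_THRESHOLD = 200_000
ORPHAN_EXCLUSIVITY_YEARS = 7

def orphan_designation_possible(us_patient_count: int) -> bool:
    """Return True if the disease falls under the basic prevalence test."""
    return us_patient_count < ORPHAN_PREVALENCE_THRESHOLD

print(orphan_designation_possible(150_000))  # True
print(orphan_designation_possible(500_000))  # False (absent special circumstances)
```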
In 2011, global spending on prescription drugs topped $954 billion, even as growth slowed somewhat in Europe and North America. The United States accounts for more than a third of the global pharmaceutical market, with $340 billion in annual sales, followed by the EU and Japan. [ 95 ] Emerging markets such as China, Russia, South Korea and Mexico outpaced that market, growing 81 percent. [ 96 ] [ 97 ]
The top ten best-selling drugs of 2013 totaled $75.6 billion in sales, with the anti-inflammatory drug Humira the best-selling drug worldwide at $10.7 billion in sales. The second and third best selling were Enbrel and Remicade, respectively. [ 98 ] The top three best-selling drugs in the United States in 2013 were Abilify ($6.3 billion), Nexium ($6 billion) and Humira ($5.4 billion). [ 99 ] The best-selling drug ever, Lipitor , averaged $13 billion annually and netted $141 billion total over its lifetime before Pfizer's patent expired in November 2011.
IMS Health published an analysis of trends expected in the pharmaceutical industry in 2007, including increasing profits in most sectors despite the loss of some patents, and new 'blockbuster' drugs on the horizon. [ 100 ]
Depending on a number of considerations, a company may apply for and be granted a patent for the drug, or the process of producing the drug, granting exclusivity rights typically for about 20 years. [ 101 ] However, only after rigorous study and testing, which takes 10 to 15 years on average, will governmental authorities grant permission for the company to market and sell the drug. [ 102 ] Patent protection enables the owner of the patent to recover the costs of research and development through high profit margins for the branded drug. When the patent protection for the drug expires, a generic drug is usually developed and sold by a competing company. The development and approval of generics are less expensive, allowing them to be sold at a lower price. Often the owner of the branded drug will introduce a generic version before the patent expires in order to get a head start in the generic market. [ 103 ] Restructuring has therefore become routine, driven by the patent expiration of products launched during the industry's "golden era" in the 1990s and companies' failure to develop sufficient new blockbuster products to replace lost revenues. [ 104 ]
In the U.S., the number of prescriptions dispensed increased over the period 1995 to 2005 to 3.4 billion annually, a 61 percent increase. Retail sales of prescription drugs jumped 250 percent from $72 billion to $250 billion, while the average price of prescriptions more than doubled from $30 to $68. [ 105 ]
Advertising is common in healthcare journals as well as through more mainstream media routes. In some countries, notably the US, pharmaceutical companies are allowed to advertise directly to the general public. They generally employ salespeople (often called 'drug reps' or, an older term, 'detail men') to market directly and personally to physicians and other healthcare providers. In some countries, notably the US, pharmaceutical companies also employ lobbyists to influence politicians. Marketing of prescription drugs in the US is regulated by the federal Prescription Drug Marketing Act of 1987 . A pharmaceutical marketing plan sets out the budgets, channels, and ideas that will carry the company, and its products and services, forward in the current landscape.
The book Bad Pharma also discusses the influence of drug representatives, how ghostwriters are employed by the drug companies to write papers for academics to publish, how independent the academic journals really are, how the drug companies finance doctors' continuing education, and how patients' groups are often funded by industry. [ 106 ]
Since the 1980s, new methods of marketing prescription drugs to consumers have become important. Direct-to-consumer media advertising was legalised in the FDA Guidance for Industry on Consumer-Directed Broadcast Advertisements.
There have been many controversies surrounding pharmaceutical marketing and influence. There have been accusations and findings of influence on doctors and other health professionals through drug representatives, including the constant provision of marketing 'gifts' and biased information to health professionals; [ 107 ] highly prevalent advertising in journals and at conferences; funding of independent healthcare organizations and health promotion campaigns; lobbying (at one time it was the most lobbied industry in the US); [ 108 ] sponsorship of medical schools or nurse training; sponsorship of continuing educational events, with influence on the curriculum; [ 109 ] and the hiring of physicians as paid consultants on medical advisory boards. [ citation needed ]
Some advocacy groups, such as No Free Lunch and AllTrials , have criticized the effect of drug marketing to physicians because they say it biases physicians to prescribe the marketed drugs even when others might be cheaper or better for the patient. [ 110 ]
There have been related accusations of disease mongering [ 111 ] (over-medicalising) to expand the market for medications. An inaugural conference on that subject took place in Australia in 2006. [ 112 ] In 2009, the Government-funded National Prescribing Service launched the "Finding Evidence – Recognising Hype" program, aimed at educating GPs on methods for independent drug analysis. [ 113 ]
Meta-analyses have shown that psychiatric studies sponsored by pharmaceutical companies are several times more likely to report positive results, and if a drug company employee is involved the effect is even larger. [ 114 ] [ 115 ] [ 116 ] Influence has also extended to the training of doctors and nurses in medical schools, an influence that is being challenged.
It has been argued that the design of the Diagnostic and Statistical Manual of Mental Disorders and the expansion of the criteria represents an increasing medicalization of human nature, or "disease mongering", driven by drug company influence on psychiatry. [ 117 ] The potential for direct conflict of interest has been raised, partly because roughly half the authors who selected and defined the DSM-IV psychiatric disorders had or previously had financial relationships with the pharmaceutical industry. [ 118 ]
In the US, starting in 2013, under the Physician Financial Transparency Reports (part of the Sunshine Act), the Centers for Medicare & Medicaid Services has to collect information from applicable manufacturers and group purchasing organizations in order to report information about their financial relationships with physicians and hospitals. Data are made public on the Centers for Medicare & Medicaid Services website. The expectation is that the relationship between doctors and the Pharmaceutical industry will become fully transparent. [ 119 ]
In a report conducted by OpenSecrets , there were more than 1,100 lobbyists working in some capacity for the pharmaceutical business in 2017. In the first quarter of 2017, the health products and pharmaceutical industry spent $78 million on lobbying members of the United States Congress. [ 120 ]
The pricing of pharmaceuticals is becoming a major challenge for health systems. [ 121 ] A November 2020 study by the West Health Policy Center stated that more than 1.1 million senior citizens in the U.S. Medicare program are expected to die prematurely over the next decade because they will be unable to afford their prescription medications, requiring an additional $17.7 billion to be spent annually on avoidable medical costs due to health complications. [ 122 ]
Ben Goldacre has argued that regulators – such as the Medicines and Healthcare products Regulatory Agency (MHRA) in the UK, or the Food and Drug Administration (FDA) in the United States – advance the interests of the drug companies rather than the interests of the public, owing to the revolving-door exchange of employees between the regulators and the companies and the friendships that develop between regulator and company employees. [ 123 ] He argues that regulators do not require that new drugs offer an improvement over what is already available, or even that they be particularly effective. [ 123 ]
Others have argued that excessive regulation suppresses therapeutic innovation and that the current cost of regulator-required clinical trials prevents the full exploitation of new genetic and biological knowledge for the treatment of human disease. A 2012 report by the President's Council of Advisors on Science and Technology made several key recommendations to reduce regulatory burdens to new drug development, including 1) expanding the FDA's use of accelerated approval processes, 2) creating an expedited approval pathway for drugs intended for use in narrowly defined populations, and 3) undertaking pilot projects designed to evaluate the feasibility of a new, adaptive drug approval process. [ 124 ]
Pharmaceutical fraud involves deceptions that bring financial gain to a pharmaceutical company. It affects individuals and public and private insurers. There are several different schemes [ 125 ] used to defraud the health care system which are particular to the pharmaceutical industry. These include: Good Manufacturing Practice (GMP) violations, off-label marketing, best price fraud, CME fraud, Medicaid price reporting, and manufactured compound drugs. [ 126 ] In FY 2010, $2.5 billion was recovered through False Claims Act cases involving such schemes. Examples of fraud cases include the GlaxoSmithKline $3 billion settlement, the Pfizer $2.3 billion settlement, and the Merck & Co. $650 million settlement. Damages from fraud can be recovered by use of the False Claims Act , most commonly under the qui tam provisions, which reward an individual for being a " whistleblower ", or relator (law) . [ 127 ]
Every major company selling atypical antipsychotics— Bristol-Myers Squibb , Eli Lilly and Company , Pfizer , AstraZeneca and Johnson & Johnson —has either settled recent government cases, under the False Claims Act, for hundreds of millions of dollars or is currently under investigation for possible health care fraud. Following charges of illegal marketing, two of the settlements set records in 2009 for the largest criminal fines ever imposed on corporations. One involved Eli Lilly's antipsychotic Zyprexa , and the other involved Bextra , an anti-inflammatory medication used for arthritis. In the Bextra case, the government also charged Pfizer with illegally marketing Geodon , an antipsychotic; Pfizer settled that part of the claim for $301 million, without admitting any wrongdoing. [ 128 ]
In July 2012, GlaxoSmithKline pleaded guilty to criminal charges and agreed to a $3 billion settlement of the largest health-care fraud case in the U.S. and the largest payment by a drug company. [ 129 ] The settlement is related to the company's illegal promotion of prescription drugs, its failure to report safety data, [ 130 ] bribing doctors, and promoting medicines for uses for which they were not licensed. The drugs involved were Paxil , Wellbutrin , Advair , Lamictal , and Zofran for off-label, non-covered uses. Those and the drugs Imitrex , Lotronex , Flovent , and Valtrex were involved in the kickback scheme . [ 131 ] [ 132 ] [ 133 ]
The following is a list of the four largest settlements reached with pharmaceutical companies from 1991 to 2012, rank ordered by the size of the total settlement. Legal claims against the pharmaceutical industry have varied widely over the past two decades, including Medicare and Medicaid fraud , off-label promotion, and inadequate manufacturing practices. [ 134 ] [ 135 ]
In May 2015, the New England Journal of Medicine emphasized the importance of pharmaceutical industry-physician interactions for the development of novel treatments and argued that moral outrage over industry malfeasance had unjustifiably led many to overemphasize the problems created by financial conflicts of interest. The article noted that major healthcare organizations, such as National Center for Advancing Translational Sciences of the National Institutes of Health, the President's Council of Advisors on Science and Technology, the World Economic Forum, the Gates Foundation, the Wellcome Trust, and the Food and Drug Administration had encouraged greater interactions between physicians and industry in order to improve benefits to patients. [ 140 ] [ 141 ]
In November 2020 several pharmaceutical companies announced successful trials of COVID-19 vaccines, with efficacy
of 90 to 95% in preventing infection. Per company announcements and data reviewed by external analysts, these vaccines are priced at $3 to $37 per dose. [ 142 ] The Wall Street Journal ran an editorial calling for this achievement to be recognized with a Nobel Peace Prize. [ 143 ]
Doctors Without Borders warned that high prices and monopolies on medicines, tests, and vaccines would prolong the pandemic and cost lives. They urged governments to prevent profiteering, using compulsory licenses as needed, as had already been done by Canada, Chile, Ecuador, Germany, and Israel. [ 144 ]
On 20 February, 46 US lawmakers called for the US government not to grant monopoly rights when giving out taxpayer development money for any coronavirus vaccines and treatments, to avoid giving exclusive control of prices and availability to private manufacturers. [ 145 ]
In the United States, the government signed agreements in which research and development or the building of manufacturing plants for potential COVID-19 therapeutics was subsidized. Typically, the agreement involved the government taking ownership of a certain number of doses of the product without further payment. For example, under the auspices of Operation Warp Speed in the United States, the government subsidized research related to COVID-19 vaccines and therapeutics at Regeneron, [ 146 ] Johnson and Johnson, Moderna, AstraZeneca, Novavax, Pfizer, and GSK. Typical terms involved research subsidies of $400 million to $2 billion, and included government ownership of the first 100 million doses of any COVID-19 vaccine successfully developed. [ 147 ]
American pharmaceutical company Gilead sought and obtained orphan drug status for remdesivir from the US Food and Drug Administration (FDA) on 23 March 2020. This provision is intended to encourage the development of drugs affecting fewer than 200,000 Americans by granting strengthened and extended legal monopoly rights to the manufacturer, along with waivers on taxes and government fees. [ 148 ] [ 149 ] Remdesivir is a candidate for treating COVID-19; at the time the status was granted, fewer than 200,000 Americans had COVID-19, but numbers were climbing rapidly as the COVID-19 pandemic reached the US, and crossing the threshold soon was considered inevitable. [ 148 ] [ 149 ] Remdesivir was developed by Gilead with over $79 million in U.S. government funding. [ 149 ] In May 2020, Gilead announced that it would provide the first 940,000 doses of remdesivir to the federal government free of charge. [ 150 ] After facing strong public reactions, Gilead gave up the "orphan drug" status for remdesivir on 25 March. [ 151 ] Gilead retains 20-year remdesivir patents in more than 70 countries. [ 144 ] In May 2020, the company further announced that it was in discussions with several generics companies to provide rights to produce remdesivir for developing countries, and with the Medicines Patent Pool to provide broader generic access. [ 152 ]
Patents have been criticized in the developing world, as they are thought [ who? ] to reduce access to existing medicines. [ 153 ] Reconciling patents and universal access to medicine would require an efficient international policy of price discrimination . Moreover, under the TRIPS agreement of the World Trade Organization , countries must allow pharmaceutical products to be patented. In 2001, the WTO adopted the Doha Declaration , which indicates that the TRIPS agreement should be read with the goals of public health in mind, and allows some methods for circumventing pharmaceutical monopolies: via compulsory licensing or parallel imports , even before patent expiration. [ 154 ]
In March 2001, 40 multi-national pharmaceutical companies brought litigation against South Africa for its Medicines Act , which allowed the generic production of antiretroviral drugs (ARVs) for treating HIV, despite the fact that these drugs were on-patent. [ 155 ] HIV was and is an epidemic in South Africa, and ARVs at the time cost between US$10,000 and US$15,000 per patient per year. This was unaffordable for most South African citizens, and so the South African government committed to providing ARVs at prices closer to what people could afford. To do so, they would need to ignore the patents on drugs and produce generics within the country (using a compulsory license), or import them from abroad. After an international protest in favour of public health rights (including the collection of 250,000 signatures by Médecins Sans Frontières ), the governments of several developed countries (including The Netherlands, Germany, France, and later the US) backed the South African government, and the case was dropped in April of that year. [ 156 ]
In 2016, GlaxoSmithKline (the world's sixth largest pharmaceutical company) announced that it would be dropping its patents in poor countries so as to allow independent companies to make and sell versions of its drugs in those areas, thereby widening the public access to them. [ 157 ] GlaxoSmithKline published a list of 50 countries they would no longer hold patents in, affecting one billion people worldwide.
In 2011 four of the top 20 corporate charitable donations and eight of the top 30 corporate charitable donations came from pharmaceutical manufacturers. The bulk of corporate charitable donations (69% as of 2012) comes by way of non-cash charitable donations, the majority of which again were donations contributed by pharmaceutical companies. [ 158 ]
Charitable programs and drug discovery & development efforts by pharmaceutical companies include: | https://en.wikipedia.org/wiki/Pharmaceutical_industry |
Pharmaceutical microbiology is an applied branch of microbiology . It involves the study of microorganisms associated with the manufacture of pharmaceuticals e.g. minimizing the number of microorganisms in a process environment, excluding microorganisms and microbial byproducts like exotoxin and endotoxin from water and other starting materials, and ensuring the finished pharmaceutical product is sterile. [ 1 ] Other aspects of pharmaceutical microbiology include the research and development of anti-infective agents , the use of microorganisms to detect mutagenic and carcinogenic activity in prospective drugs , and the use of microorganisms in the manufacture of pharmaceutical products like insulin and human growth hormone .
Drug safety is a major focus of pharmaceutical microbiology. Pathogenic bacteria, fungi (yeasts and moulds) and toxins produced by microorganisms are all possible contaminants of medicines, although stringent, regulated processes are in place to ensure the risk is minimal.
Another major focus of pharmaceutical microbiology is to determine how a product will react in cases of contamination . For example: you have a bottle of cough medicine . Imagine you take the lid off, pour yourself a dose and forget to replace the lid. You come back to take your next dose and discover that you have indeed left the lid off for a few hours. What happens if a microorganism "fell in" whilst the lid was off?
There are tests that address this question. The product is "challenged" with a known amount of specific microorganisms, such as E. coli and C. albicans , and the antimicrobial activity is monitored. [ 2 ]
Pharmaceutical microbiology is additionally involved with the validation of disinfectants, either according to U.S. AOAC or European CEN standards, to evaluate the efficacy of disinfectants in suspension, on surfaces, and through field trials. Field trials help to establish the frequency of the application of detergents and disinfectants.
Testing of pharmaceutical products is carried out according to a pharmacopeia, of which there are several. For example, in America the United States Pharmacopeia is used; in Japan there is the Japanese Pharmacopeia ; in the United Kingdom there is the British Pharmacopoeia ; and in Europe the European Pharmacopeia . These contain the test methods to be followed, along with defined specifications for the number of microorganisms allowed in a given amount of product.
The specifications change depending on the product type and method in which it is introduced to the body. The pharmacopoeia also covers areas like sterility testing, endotoxin testing, the use of biological indicators, microbial limits testing and enumeration, and the testing of pharmaceutical grade water .
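A minimal sketch of how an enumeration result might be compared against a specification limit; the limits used here are placeholders for illustration and are not values from any particular pharmacopoeia.

```python
# Placeholder limits for illustration only; actual specifications depend on
# the product type and route of administration and come from the relevant
# pharmacopoeia (USP, Ph. Eur., JP, BP).
SPEC_LIMITS_CFU_PER_G = {
    "total aerobic microbial count": 1000,   # assumed example limit
    "total yeasts and moulds count": 100,    # assumed example limit
}

def within_specification(test: str, observed_cfu_per_g: float) -> bool:
    """Compare an observed count against the (assumed) specification limit."""
    return observed_cfu_per_g <= SPEC_LIMITS_CFU_PER_G[test]

print(within_specification("total aerobic microbial count", 250))   # True
print(within_specification("total yeasts and moulds count", 340))   # False
```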
Pharmaceutical microbiologists are required to assess cleanrooms and controlled environments for contamination (viable and particulate) and to introduce contamination control strategies. This includes an understanding of risk assessment. [ 3 ]
Risk management has been successfully employed in various industrial sectors, such as the US space industry (NASA), the nuclear power industry, and the automobile industry, and has benefited these industries in several areas. In the pharmaceutical sector, however, its application is still in its infancy: the use of risk assessment techniques in pharmaceutical production is only beginning, and the potential gains have yet to be realized.
Cleanrooms and zones are typically classified according to their use (the main activity within each room or zone) and confirmed by the cleanliness of the air by the measurement of particles. Cleanrooms are microbiologically assessed through environmental monitoring methods.
Viable monitoring is designed to detect levels of bacteria and fungi present in defined locations/areas during a particular stage in the activity of processing and filling a product. Viable monitoring is designed to detect mesophilic micro-organisms in the aerobic state. However, some manufacturers may have requirements to examine for other types of microorganisms (such as anaerobes if nitrogen lines are used as part of the manufacturing process). [ 4 ]
Surface methods include testing various surfaces for numbers of microorganisms, such as:
• Product Contact Surfaces
• Floors
• Walls
• Ceilings
Using techniques like:
• Contact Plates
• Touch Plates
• Swabs
• Surface Rinse Method
Air monitoring is undertaken using agar settle plates (placed in the locations of greatest risk) or active (volumetric) air-samplers (to provide a quantitative assessment of the number of microorganisms in the air per volume of air sampled). Active air-samplers generally fall into the following models:
• Slit to Agar
• Membrane Filtration
• Centrifugal Samplers
Monitoring methods all use either a general-purpose culture medium such as tryptone soya agar (TSA), incubated under a dual regime of 30–35 °C and 20–25 °C, or two different culture media incubated at two different temperatures, one of which is selective for fungi (e.g. Sabouraud dextrose agar, SDA). The choice of culture media, incubation times, and temperatures requires validation.
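To illustrate the quantitative side of active (volumetric) air sampling mentioned above, a minimal sketch of the arithmetic converting colony counts into colony-forming units per cubic metre; the sampler flow rate and colony counts are assumed example values.

```python
# Illustration of the cfu/m^3 arithmetic behind volumetric air sampling.
# Flow rate, sampling time, and colony counts are assumed example values.
def cfu_per_cubic_metre(colonies: int, flow_rate_l_per_min: float, minutes: float) -> float:
    """Colonies recovered divided by the volume of air sampled (in m^3)."""
    litres_sampled = flow_rate_l_per_min * minutes
    cubic_metres = litres_sampled / 1000.0
    return colonies / cubic_metres

# e.g. 12 colonies after sampling at 100 L/min for 10 minutes (1 m^3 of air)
print(cfu_per_cubic_metre(colonies=12, flow_rate_l_per_min=100, minutes=10))  # 12.0 cfu/m^3
```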
The main sources of education and professional guidance for pharmaceutical microbiology come from Dr Tim Sandle's Pharmaceutical Microbiology Resources , Dr Scott Sutton's Microbiology Network , and the UK and Ireland Pharmaceutical Microbiology Interest Group ( Pharmig ). | https://en.wikipedia.org/wiki/Pharmaceutical_microbiology |
Pharmaceutics is the discipline of pharmacy that deals with the process of turning a new chemical entity (NCE) or an existing drug into a medication to be used safely and effectively by patients. [ 1 ] The patients could be either humans or animals. Pharmaceutics helps relate the formulation of drugs to their delivery and disposition in the body. [ 2 ] Pharmaceutics deals with the formulation of a pure drug substance into a dosage form .
Pharmaceutics is also called the science of dosage form design. Many chemicals have pharmacological properties but need special measures to help them achieve therapeutically relevant amounts at their sites of action. [ 2 ]
Branches of pharmaceutics include:
Pharmaceutics deals with the formulation of a pure drug substance into a dosage form . Pure drug substances are usually white crystalline or amorphous powders. Before the advent of medicine as a science, it was common for pharmacists to dispense drugs as is . Most drugs today are administered as parts of a dosage form. The clinical performance of drugs depends on their form of presentation to the patient. [ 3 ]
Pharmaceutics is a specialization in the field of pharmacy. Typically, Pharm-D graduates can choose to continue studies in this field towards a PhD degree.
| https://en.wikipedia.org/wiki/Pharmaceutics |
Pharmacia was a pharmaceutical and biotechnological company in Sweden that merged with the American pharmaceutical company Upjohn in 1995.
The Pharmacia company was founded in 1911 in Stockholm , Sweden, by pharmacist Gustav Felix Grönfeldt at the Elgen Pharmacy. [ 1 ] [ 2 ] The company was named after the Greek word φαρμακεία, transliterated pharmakeia , which means 'sorcery'. In the company's early days, much of its profit was derived from the "miracle medicine" Phospho-Energon .
During World War II , Swedish chemist Björn Ingelman (who worked for Arne Tiselius at Uppsala university ) researched various uses for the polysaccharide dextran . Together with the medical researcher Anders Grönwall, he discovered that dextran could be used as a replacement for blood plasma in blood transfusions , for which there could be a large need in wartime. Pharmacia, which then was still a small company, was contacted in 1943 and its CEO Elis Göth was very interested. The product Macrodex, a dextran solution, was launched four years later. [ 3 ]
Dextran-based products were to play a significant role in the further expansion of Pharmacia. In 1951, the company moved to Uppsala , Sweden, to get closer to the scientists with whom they cooperated, and Ingelman became its head of research. In 1959, Pharmacia pioneered gel filtration with its Sephadex products. These were also based on dextran and discoveries in Tiselius' department, this time by Jerker Porath and Per Flodin. In 1967 Pharmacia Fine Chemicals was established in Uppsala. In 1986 Pharmacia Fine Chemicals acquired LKB-produkter AB and changed its name to Pharmacia Biotech. Pharmacia Biotech expanded their role in the "biotech revolution" through its acquisition of PL Laboratories from Pabst Brewery offering a line of recombinant DNA specialty research chemicals. Sold to private interests in the 1990s, Pharmacia was first merged with "Kabi Vitrum" to form Kabi Pharmacia with headquarters in Uppsala. In 1993, Kabi Pharmacia bought Farmitalia , an Italian company that had developed doxorubicin , a chemotherapeutic. [ 4 ]
In 1995 the company merged with the American pharmaceutical company Upjohn , becoming known as Pharmacia & Upjohn and moved its headquarters to London .
In 1998, the company was divided into two business areas. The pharmaceutical business became Pharmacia & Upjohn. The scientific instruments group, which sold chromatography resins, purification equipment, molecular biology reagents, and electrophoresis products, was purchased by Amersham in 1998 and named Amersham Pharmacia Biotech. It later changed its name to Amersham Biosciences and ran the radiochemical and reagents business along with the highly profitable chromatography business. The Pharmacia Logo Drop remained a highly recognized brand. Amersham Biosciences was sold to GE Healthcare in 2004 to become GE Healthcare Life Sciences. From 1 April 2020, GE Healthcare Life Sciences was renamed Cytiva , following the sale of GE Healthcare Life Sciences from General Electric to Danaher Corporation in a $21.4 billion acquisition. [ citation needed ]
The following is an illustration of the company's mergers, acquisitions, spin-offs and historical predecessors:
• LKB-produkter AB (acquired 1986)
• PL Laboratories
• Kabi Vitrum (acquired 1990)
• Farmitalia (acquired 1993) | https://en.wikipedia.org/wiki/Pharmacia |
Pharmacodynamics ( PD ) is the study of the biochemical and physiologic effects of drugs (especially pharmaceutical drugs ). The effects can include those manifested within animals (including humans), microorganisms , or combinations of organisms (for example, infection ).
Pharmacodynamics and pharmacokinetics are the main branches of pharmacology , which is itself a topic of biology concerned with the study of the interactions of both endogenous and exogenous chemical substances with living organisms.
In particular, pharmacodynamics is the study of how a drug affects an organism, whereas pharmacokinetics is the study of how the organism affects the drug. Both together influence dosing , benefit, and adverse effects . Pharmacodynamics is sometimes abbreviated as PD and pharmacokinetics as PK, especially in combined reference (for example, when speaking of PK/PD models ).
Pharmacodynamics places particular emphasis on dose–response relationships , that is, the relationships between drug concentration and effect. [ 1 ] One dominant example is drug-receptor interactions as modeled by
L + R ⇌ L·R, where L , R , and LR represent ligand (drug), receptor, and ligand–receptor complex concentrations, respectively. This equation represents a simplified model of reaction dynamics that can be studied mathematically through tools such as free energy maps.
Pharmacodynamics : Study of pharmacological actions on living systems, including the reactions with and binding to cell constituents, and the biochemical and physiological consequences of these actions. [ 2 ]
There are four principal protein targets with which drugs can interact:
NMBD = neuromuscular blocking drugs; NMDA = N-methyl-d-aspartate; EGF = epidermal growth factor. [ 3 ]
The majority of drugs either
There are 7 main drug actions: [ 4 ]
The desired activity of a drug is mainly due to successful targeting of one of the following:
General anesthetics were once thought to work by disordering the neural membranes, thereby altering the Na + influx. Antacids and chelating agents combine chemically in the body. Enzyme-substrate binding is a way to alter the production or metabolism of key endogenous chemicals, for example aspirin irreversibly inhibits the enzyme prostaglandin synthetase (cyclooxygenase) thereby preventing inflammatory response. Colchicine , a drug for gout, interferes with the function of the structural protein tubulin , while digitalis , a drug still used in heart failure, inhibits the activity of the carrier molecule, Na-K-ATPase pump . The widest class of drugs act as ligands that bind to receptors that determine cellular effects. Upon drug binding, receptors can elicit their normal action (agonist), blocked action (antagonist), or even action opposite to normal (inverse agonist).
In principle, a pharmacologist would aim for a target plasma concentration of the drug for a desired level of response. In reality, there are many factors affecting this goal. Pharmacokinetic factors determine peak concentrations, and concentrations cannot be maintained with absolute consistency because of metabolic breakdown and excretory clearance. Genetic factors may exist which would alter metabolism or drug action itself, and a patient's immediate status may also affect indicated dosage.
Undesirable effects of a drug include:
The therapeutic window is the range of a medication's dosage between the amount that gives an effect ( effective dose ) and the amount that gives more adverse effects than desired effects. For instance, a medication with a small therapeutic window must be administered with care and control, e.g. by frequently measuring the blood concentration of the drug, since it easily loses effect or gives adverse effects.
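One common way to quantify the width of the therapeutic window is the therapeutic index, the ratio of the median toxic dose to the median effective dose; the sketch below uses hypothetical TD50 and ED50 values purely for illustration.

```python
# Hypothetical TD50/ED50 values; the therapeutic index (TI) is one standard
# way of expressing how wide the therapeutic window is.
def therapeutic_index(td50: float, ed50: float) -> float:
    """Ratio of the median toxic dose to the median effective dose."""
    return td50 / ed50

print(therapeutic_index(td50=300, ed50=10))  # 30.0 -> relatively wide window
print(therapeutic_index(td50=15,  ed50=10))  # 1.5  -> narrow window, careful dosing needed
```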
The duration of action of a drug is the length of time that particular drug is effective. [ 5 ] Duration of action is a function of several parameters including plasma half-life , the time to equilibrate between plasma and target compartments, and the off rate of the drug from its biological target . [ 6 ]
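As a rough sketch of how plasma half-life feeds into duration of action: assuming simple first-order (exponential) decline and a hypothetical minimum effective concentration, the time the plasma concentration stays above that threshold can be estimated as below. This is a deliberate simplification; as noted above, equilibration with the target compartment and the drug's off-rate also matter.

```python
import math

# Simplified one-compartment, first-order decline: C(t) = C0 * 0.5**(t / t_half).
# C0, t_half, and the minimum effective concentration (MEC) are assumed values.
def time_above_mec(c0: float, mec: float, t_half_hours: float) -> float:
    """Hours until the concentration decays from c0 down to the MEC."""
    if c0 <= mec:
        return 0.0
    return t_half_hours * math.log2(c0 / mec)

print(time_above_mec(c0=8.0, mec=1.0, t_half_hours=4.0))  # 12.0 hours (3 half-lives)
```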
In the context of recreational psychoactive drug use, duration refers to the length of time over which the subjective effects of a psychoactive substance manifest themselves.
Duration can be broken down into 6 parts: (1) total duration (2) onset (3) come up (4) peak (5) offset and (6) after effects. Depending upon the substance consumed, each of these occurs in a separate and continuous fashion.
The total duration of a substance can be defined as the amount of time it takes for the effects of a substance to completely wear off into sobriety , starting from the moment the substance is first administered .
The onset phase can be defined as the period until the very first changes in perception (i.e. "first alerts") are able to be detected.
The "come up" phase can be defined as the period between the first noticeable changes in perception and the point of highest subjective intensity. This is colloquially known as "coming up."
The peak phase can be defined as period of time in which the intensity of the substance's effects are at its height.
The offset phase can be defined as the amount of time in between the conclusion of the peak and shifting into a sober state. This is colloquially referred to as "coming down."
The after effects can be defined as any residual effects which may remain after the experience has reached its conclusion. After effects depend on the substance and usage. This is colloquially known as a "hangover" for negative after effects of substances, such as alcohol , cocaine , and MDMA or an "afterglow" for describing a typically positive, pleasant effect, typically found in substances such as cannabis , LSD in low to high doses, and ketamine .
The binding of ligands (drugs) to receptors is governed by the law of mass action , which relates the large-scale status to the rate of numerous molecular processes. The rates of formation and dissociation can be used to determine the equilibrium concentration of bound receptors. The equilibrium dissociation constant is defined by:
K_d = [L][R] / [LR]
where L = ligand, R = receptor, and square brackets [] denote concentration. The fraction of bound receptors is
p_LR = [LR] / ([R] + [LR]) = [L] / ([L] + K_d)
where p_LR is the fraction of receptors bound by the ligand.
This expression is one way to consider the effect of a drug, in which the response is related to the fraction of bound receptors (see: Hill equation ). The fraction of bound receptors is known as occupancy. The relationship between occupancy and pharmacological response is usually non-linear. This explains the so-called receptor reserve phenomenon i.e. the concentration producing 50% occupancy is typically higher than the concentration producing 50% of maximum response. More precisely, receptor reserve refers to a phenomenon whereby stimulation of only a fraction of the whole receptor population apparently elicits the maximal effect achievable in a particular tissue.
The simplest interpretation of receptor reserve is that it is a model that states there are excess receptors on the cell surface than what is necessary for full effect. Taking a more sophisticated approach, receptor reserve is an integrative measure of the response-inducing capacity of an agonist (in some receptor models it is termed intrinsic efficacy or intrinsic activity ) and of the signal amplification capacity of the corresponding receptor (and its downstream signaling pathways). Thus, the existence (and magnitude) of receptor reserve depends on the agonist ( efficacy ), tissue (signal amplification ability) and measured effect (pathways activated to cause signal amplification). As receptor reserve is very sensitive to agonist's intrinsic efficacy, it is usually defined only for full (high-efficacy) agonists. [ 7 ] [ 8 ] [ 9 ]
Often the response is determined as a function of log[ L ] to consider many orders of magnitude of concentration. However, there is no biological or physical theory that relates effects to the log of concentration. It is just convenient for graphing purposes. It is useful to note that 50% of the receptors are bound when [ L ]= K d .
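The occupancy relationship above is easy to verify numerically; the sketch below also illustrates receptor reserve using a toy amplification step (a hypothetical transducer function in the spirit of operational models, not a description of any specific tissue), with Kd and the amplification constant chosen arbitrarily.

```python
# Receptor occupancy from the expression above, plus a toy signal-amplification
# step to illustrate receptor reserve. Kd and the amplification constant are
# hypothetical values chosen for demonstration.
def occupancy(ligand_conc: float, kd: float) -> float:
    """Fraction of receptors bound: [L] / ([L] + Kd)."""
    return ligand_conc / (ligand_conc + kd)

def response(ligand_conc: float, kd: float, amplification: float = 20.0) -> float:
    """Toy downstream response that saturates as amplified occupancy rises."""
    occ = occupancy(ligand_conc, kd)
    return amplification * occ / (1.0 + amplification * occ)

KD = 10.0  # arbitrary units
print(occupancy(KD, KD))  # 0.5 -> 50% occupancy when [L] = Kd
# With amplification, half-maximal response is reached well below Kd,
# i.e. the EC50 is lower than Kd -- the receptor reserve phenomenon.
for conc in (0.1, 0.5, 1.0, 10.0):
    print(conc, round(response(conc, KD), 3))
```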
The graph shown represents the conc-response for two hypothetical receptor agonists, plotted in a semi-log fashion. The curve toward the left represents a higher potency (potency arrow does not indicate direction of increase) since lower concentrations are needed for a given response. The effect increases as a function of concentration.
The concept of pharmacodynamics has been expanded to include Multicellular Pharmacodynamics (MCPD). MCPD is the study of the static and dynamic properties and relationships between a set of drugs and a dynamic and diverse multicellular four-dimensional organization. It is the study of the workings of a drug on a minimal multicellular system (mMCS), both in vivo and in silico . Networked Multicellular Pharmacodynamics (Net-MCPD) further extends the concept of MCPD to model regulatory genomic networks together with signal transduction pathways, as part of a complex of interacting components in the cell. [ 10 ]
Toxicodynamics (TD) and pharmacodynamics (PD) link the dosage of a therapeutic agent, toxicant, or toxin (xenobiotic) to the features, amount, and time course of its biological action. [ 11 ] The mechanism of action is a crucial factor in determining the effect and toxicity of the drug, taking into consideration the pharmacokinetic (PK) factors. [ 12 ] The sort and extent of altered cellular physiology will depend on the combination of the drug's presence (as established by pharmacokinetic (PK) studies) and/or its mechanism and duration of action (PD). Types of xenobiotic-target interaction can be described either as reversible, irreversible, noncompetitive, and allosteric interactions or as agonist, partial agonist, antagonist, and inverse agonist interactions. In vitro, ex vivo, or in vivo studies can be used to assess PD and TD from the molecular level to that of the entire organism.
The mechanisms of drug action and of adverse drug reactions are based either on physicochemical properties or on biochemical interactions. Adverse drug reactions can be classified as either idiosyncratic (type B) or intrinsic (type A). Idiosyncratic toxicity is not dosage dependent and defies the mass-action relationship. Immune-mediated processes are frequently cited as the source of type B reactions. [ 13 ] These cannot be accurately characterized in preclinical research or clinical trials due to their low incidence. Type A reactions are dosage (concentration) dependent; usually, this kind of side effect is an extension of the drug's intended pharmacological action.
Pharmacokinetics and pharmacodynamics are termed toxicokinetics and toxicodynamics in the field of ecotoxicology . Here, the focus is on toxic effects on a wide range of organisms. The corresponding models are called toxicokinetic-toxicodynamic models. [ 14 ] | https://en.wikipedia.org/wiki/Pharmacodynamics |
Pharmacoepidemiology is the study of the uses and effects of drugs in well-defined populations. [ 1 ] [ 2 ]
To accomplish this study, pharmacoepidemiology borrows from both pharmacology and epidemiology . Thus, pharmacoepidemiology is the bridge between pharmacology and epidemiology. Pharmacology is the study of the effects of drugs, and clinical pharmacology is the study of the effects of drugs in humans. Part of the task of clinical pharmacology is to provide a risk–benefit assessment of the effects of drugs in patients: [ citation needed ]
Other parameters relating to drug use may benefit epidemiological methodology. Pharmacoepidemiology then can also be defined as the transparent application of epidemiological methods through pharmacological treatment of conditions to better understand the conditions to be treated. [ citation needed ]
Epidemiology is the study of the distribution and determinants of diseases and other health states in populations. Epidemiological studies can be divided into two main types: [ citation needed ]
Pharmacoepidemiology benefits from the methodology developed in general epidemiology and may further develop it for applications unique to the needs of pharmacoepidemiology. There are also some areas that are altogether unique to pharmacoepidemiology, e.g., pharmacovigilance. Pharmacovigilance is the continual monitoring of unwanted effects and other safety-related aspects of drugs that are already on the market. In practice, pharmacovigilance refers almost exclusively to spontaneous reporting systems, which allow health care professionals and others to report adverse drug reactions to a central agency. The central agency combines reports from many sources to produce a more informative safety profile for drug products than could be produced from reports by fewer health care professionals. [ citation needed ]
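One standard way such aggregated spontaneous reports are screened for signals is disproportionality analysis, for example the proportional reporting ratio (PRR); the report counts below are invented purely for illustration.

```python
# Proportional reporting ratio (PRR) from a 2x2 table of spontaneous reports.
# a: reports of the event of interest for the drug of interest
# b: reports of all other events for the drug of interest
# c: reports of the event of interest for all other drugs
# d: reports of all other events for all other drugs
# The counts below are invented for illustration.
def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    return (a / (a + b)) / (c / (c + d))

prr = proportional_reporting_ratio(a=30, b=970, c=200, d=99800)
print(round(prr, 1))  # 15.0 -> the event is reported disproportionately often for this drug
```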
In Australia, a 10% sample of all people eligible for government-subsidised medicines by the Pharmaceutical Benefits Scheme (PBS) are made available for research purposes. Licences are held between Services Australia, who hold the data for the PBS, and academics at Monash University, University of New South Wales, University of South Australia and the University of Western Australia to use the 10% sample for research purposes. Research outputs from these data have to be approved by Services Australia prior to publication. These data create a useful picture of all dispensed medicines in Australia and allow for pharmacovigilance and to explore trends in medicines usage. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] | https://en.wikipedia.org/wiki/Pharmacoepidemiology |
Pharmacogenomics , often abbreviated "PGx," is the study of the role of the genome in drug response. Its name ( pharmaco- + genomics ) reflects its combining of pharmacology and genomics . Pharmacogenomics analyzes how the genetic makeup of a patient affects their response to drugs. [ 1 ] It deals with the influence of acquired and inherited genetic variation on drug response, by correlating DNA mutations (including point mutations , copy number variations , and structural variations ) with pharmacokinetic (drug absorption , distribution , metabolism , and elimination ), pharmacodynamic (effects mediated through a drug's biological targets ), and/or immunogenic endpoints. [ 2 ] [ 3 ] [ 4 ]
Pharmacogenomics aims to develop rational means to optimize drug therapy , with regard to the patients' genotype , to achieve maximum efficiency with minimal adverse effects . [ 5 ] It is hoped that by using pharmacogenomics, pharmaceutical drug treatments can deviate from what is dubbed as the "one-dose-fits-all" approach. Pharmacogenomics also attempts to eliminate trial-and-error in prescribing, allowing physicians to take into consideration their patient's genes, the functionality of these genes, and how this may affect the effectiveness of the patient's current or future treatments (and where applicable, provide an explanation for the failure of past treatments). [ 6 ] [ 7 ] Such approaches promise the advent of precision medicine and even personalized medicine , in which drugs and drug combinations are optimized for narrow subsets of patients or even for each individual's unique genetic makeup. [ 8 ] [ 9 ]
Whether used to explain a patient's response (or lack of it) to a treatment, or to act as a predictive tool, it hopes to achieve better treatment outcomes and greater efficacy, and reduce drug toxicities and adverse drug reactions (ADRs). For patients who do not respond to a treatment, alternative therapies can be prescribed that would best suit their requirements. In order to provide pharmacogenomic recommendations for a given drug, two possible types of input can be used: genotyping , or exome or whole genome sequencing . [ 10 ] Sequencing provides many more data points, including detection of mutations that prematurely terminate the synthesized protein (early stop codon ). [ 10 ]
The term pharmacogenomics is often used interchangeably with pharmacogenetics . Although both terms relate to drug response based on genetic influences, there are differences between the two. Pharmacogenetics is limited to monogenic phenotypes (i.e., single gene-drug interactions). Pharmacogenomics refers to polygenic drug response phenotypes and encompasses transcriptomics , proteomics , and metabolomics .
Pharmacokinetics involves the absorption, distribution, metabolism, and elimination of pharmaceutics. These processes are often facilitated by enzymes such as drug transporters or drug metabolizing enzymes (discussed in-depth below). Variation in DNA loci responsible for producing these enzymes can alter their expression or activity so that their functional status changes. An increase, decrease, or loss of function for transporters or metabolizing enzymes can ultimately alter the amount of medication in the body and at the site of action. This may result in deviation from the medication's therapeutic window and result in either toxicity or loss of effectiveness.
The majority of clinically actionable pharmacogenetic variation occurs in genes that code for drug-metabolizing enzymes, including those involved in both phase I and phase II metabolism. The cytochrome P450 enzyme family is responsible for metabolism of 70-80% of all medications used clinically. [ 11 ] CYP3A4 , CYP2C9 , CYP2C19 , and CYP2D6 are major CYP enzymes involved in drug metabolism and are all known to be highly polymorphic. [ 11 ] Additional drug-metabolizing enzymes that have been implicated in pharmacogenetic interactions include UGT1A1 (a UDP-glucuronosyltransferase ), DPYD , and TPMT . [ 12 ]
Many medications rely on transporters to cross cellular membranes in order to move between body fluid compartments such as the blood, gut lumen, bile, urine, brain, and cerebrospinal fluid. [ 13 ] The major transporters include the solute carrier , ATP-binding cassette , and organic anion transporters . [ 13 ] Transporters that have been shown to influence response to medications include OATP1B1 ( SLCO1B1 ) and breast cancer resistance protein (BCRP) ( ABCG2 ). [ 14 ]
Pharmacodynamics refers to the impact a medication has on the body, or its mechanism of action.
Drug targets are the specific sites where a medication carries out its pharmacological activity. The interaction between the drug and this site results in a modification of the target that may include inhibition or potentiation. [ 15 ] Most of the pharmacogenetic interactions that involve drug targets are within the field of oncology and include targeted therapeutics designed to address somatic mutations (see also Cancer Pharmacogenomics ). For example, EGFR inhibitors like gefitinib (Iressa) or erlotinib (Tarceva) are only indicated in patients carrying specific mutations to EGFR . [ 16 ] [ 17 ]
Germline mutations in drug targets can also influence response to medications, though this is an emerging subfield within pharmacogenomics. One well-established gene-drug interaction involving a germline mutation to a drug target is warfarin (Coumadin) and VKORC1 , which codes for vitamin K epoxide reductase (VKOR) . Warfarin binds to and inhibits VKOR, which is an important enzyme in the vitamin K cycle. [ 18 ] Inhibition of VKOR prevents reduction of vitamin K , which is a cofactor required in the formation of coagulation factors II , VII , IX and X , as well as the anticoagulant proteins C and S . [ 18 ] [ 19 ]
Medications can have off-target effects (typically unfavorable) that arise from an interaction between the medication and/or its metabolites and a site other than the intended target. [ 20 ] Genetic variation in the off-target sites can influence this interaction. The main example of this type of pharmacogenomic interaction is glucose-6-phosphate-dehydrogenase (G6PD) . G6PD is the enzyme involved in the first step of the pentose phosphate pathway which generates NADPH (from NADP). NADPH is required for the production of reduced glutathione in erythrocytes and it is essential for the function of catalase . [ 21 ] Glutathione and catalase protect cells from oxidative stress that would otherwise result in cell lysis . Certain variants in G6PD result in G6PD deficiency , in which cells are more susceptible to oxidative stress. When medications that have a significant oxidative effect are administered to individuals who are G6PD deficient, they are at an increased risk of erythrocyte lysis that presents as hemolytic anemia . [ 22 ]
The human leukocyte antigen (HLA) system, also referred to as the major histocompatibility complex (MHC), is a complex of genes important for the adaptive immune system . Mutations in the HLA complex have been associated with an increased risk of developing hypersensitivity reactions in response to certain medications. [ 23 ]
The Clinical Pharmacogenetics Implementation Consortium (CPIC) is "an international consortium of individual volunteers and a small dedicated staff who are interested in facilitating use of pharmacogenetic tests for patient care. CPIC’s goal is to address barriers to clinical implementation of pharmacogenetic tests by creating, curating, and posting freely available, peer-reviewed, evidence-based, updatable, and detailed gene/drug clinical practice guidelines. CPIC guidelines follow standardized formats, include systematic grading of evidence and clinical recommendations, use standardized terminology, are peer-reviewed, and are published in a journal (in partnership with Clinical Pharmacology and Therapeutics) with simultaneous posting to cpicpgx.org, where they are regularly updated." [ 12 ]
The CPIC guidelines are "designed to help clinicians understand HOW available genetic test results should be used to optimize drug therapy, rather than WHETHER tests should be ordered. A key assumption underlying the CPIC guidelines is that clinical high-throughput and pre-emptive (pre-prescription) genotyping will become more widespread, and that clinicians will be faced with having patients’ genotypes available even if they have not explicitly ordered a test with a specific drug in mind. CPIC's guidelines, processes and projects have been endorsed by several professional societies." [ 12 ]
In February 2020 the FDA published the Table of Pharmacogenetic Associations. [ 24 ] For the gene-drug pairs included in the table, "the FDA has evaluated and believes there is sufficient scientific evidence to suggest that subgroups of patients with certain genetic variants, or genetic variant-inferred phenotypes (such as affected subgroup in the table below), are likely to have altered drug metabolism, and in certain cases, differential therapeutic effects, including differences in risks of adverse events." [ 25 ]
"The information in this Table is intended primarily for prescribers, and patients should not adjust their medications without consulting their prescriber. This version of the table is limited to pharmacogenetic associations that are related to drug metabolizing enzyme gene variants, drug transporter gene variants, and gene variants that have been related to a predisposition for certain adverse events. The FDA recognizes that various other pharmacogenetic associations exist that are not listed here, and this table will be updated periodically with additional pharmacogenetic associations supported by sufficient scientific evidence." [ 25 ]
The FDA Table of Pharmacogenomic Biomarkers in Drug Labeling lists FDA-approved drugs with pharmacogenomic information found in the drug labeling. "Biomarkers in the table include but are not limited to germline or somatic gene variants (polymorphisms, mutations), functional deficiencies with a genetic etiology, gene expression differences, and chromosomal abnormalities; selected protein biomarkers that are used to select treatments for patients are also included." [ 26 ]
The Pharmacogenomics Knowledgebase (PharmGKB) is an " NIH -funded resource that provides information about how human genetic variation affects response to medications. PharmGKB collects, curates and disseminates knowledge about clinically actionable gene-drug associations and genotype-phenotype relationships." [ 27 ]
There are many commercial laboratories around the world that offer pharmacogenomic testing as laboratory-developed tests (LDTs) . The tests offered can vary significantly from one lab to another, including the genes and alleles tested for, phenotype assignment, and any clinical annotations provided. With the exception of a few direct-to-consumer tests, all pharmacogenetic testing requires an order from an authorized healthcare professional. In order for the results to be used in a clinical setting in the United States , the laboratory performing the test must be CLIA -certified. Other regulations may vary by country and state.
Direct-to-consumer (DTC) pharmacogenetic tests allow consumers to obtain pharmacogenetic testing without an order from a prescriber. DTC pharmacogenetic tests are generally reviewed by the FDA to determine the validity of test claims. [ 28 ] The FDA maintains a list of DTC genetic tests that have been approved.
There are multiple ways to represent a pharmacogenomic genotype . A commonly used nomenclature system is to report haplotypes using a star (*) allele (e.g., CYP2C19 *1/*2). Single-nucleotide polymorphisms (SNPs) may be described using their assigned reference SNP cluster ID (rsID) or based on the location of the base pair or amino acid impacted. [ 29 ]
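To make the star-allele notation concrete, the short sketch below translates a diplotype such as CYP2D6 *1/*41 into a predicted metabolizer phenotype using a simple activity-score approach. The allele activity values and phenotype cut-offs are illustrative assumptions for demonstration only, not an authoritative CPIC or laboratory table.

```python
# Illustrative sketch: translating a CYP2D6 star-allele diplotype into a
# predicted metabolizer phenotype via an activity-score approach.
# Activity values and cut-offs below are assumptions for demonstration only.

ALLELE_ACTIVITY = {"*1": 1.0, "*2": 1.0, "*41": 0.25, "*4": 0.0, "*5": 0.0}

def predicted_phenotype(diplotype: str) -> str:
    """Map e.g. '*1/*41' to a metabolizer phenotype using an activity score."""
    a1, a2 = diplotype.split("/")
    score = ALLELE_ACTIVITY[a1] + ALLELE_ACTIVITY[a2]
    if score == 0:
        return "Poor metabolizer (PM)"
    if score <= 1.25:
        return "Intermediate metabolizer (IM)"
    if score <= 2.25:
        return "Normal metabolizer (NM)"
    return "Ultrarapid metabolizer (UM)"

print(predicted_phenotype("*1/*41"))  # Intermediate metabolizer (IM)
print(predicted_phenotype("*4/*5"))   # Poor metabolizer (PM)
```

In practice, clinical laboratories apply published consensus tables rather than ad hoc scores like the one above.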
In 2017 CPIC published results of an expert survey to standardize terms related to clinical pharmacogenetic test results. [ 30 ] Consensus for terms to describe allele functional status, phenotype for drug metabolizing enzymes, phenotype for drug transporters, and phenotype for high-risk genotype status was reached.
The list below provides a few of the more commonly known applications of pharmacogenomics: [ 31 ]
Pharmacogenomics may be applied to several areas of medicine, including pain management , cardiology , oncology , and psychiatry . A place may also exist in forensic pathology , in which pharmacogenomics can be used to determine the cause of death in drug-related deaths where no findings emerge using autopsy . [ citation needed ]
In cancer treatment , pharmacogenomics tests are used to identify which patients are most likely to respond to certain cancer drugs . In behavioral health, pharmacogenomic tests provide tools for physicians and caregivers to better manage medication selection and side effect amelioration. Pharmacogenomic testing of this kind is also known as companion diagnostics, meaning tests that are bundled with specific drugs. Examples include the KRAS test with cetuximab and the EGFR test with gefitinib . Beyond efficacy, germline pharmacogenetics can help to identify patients likely to undergo severe toxicities when given cytotoxics that show impaired detoxification in relation to genetic polymorphism, such as canonical 5-FU. [ 32 ] In particular, genetic deregulations affecting genes coding for DPD , UGT1A1 , TPMT , CDA and CYP2D6 are now considered critical issues for patients treated with 5-FU/capecitabine, irinotecan, mercaptopurine/azathioprine, gemcitabine/capecitabine/AraC and tamoxifen, respectively. [ 33 ]
In cardiovascular disorders , the main concern is response to drugs including warfarin , clopidogrel , beta blockers , and statins . [ 10 ] In patients who carry loss-of-function CYP2C19 variants and take clopidogrel, cardiovascular risk is elevated, which has led regulators to update the medication's package insert. [ 34 ] In patients with type 2 diabetes , haptoglobin (Hp) genotyping shows an effect on cardiovascular disease, with Hp2-2 carriers at higher risk and supplemental vitamin E reducing risk by affecting HDL . [ 35 ]
In psychiatry, as of 2010, research has focused particularly on 5-HTTLPR and DRD2 . [ 36 ]
Initiatives to spur adoption by clinicians include the Ubiquitous Pharmacogenomics (U-PGx) program in Europe and the Clinical Pharmacogenetics Implementation Consortium (CPIC) in the United States. [ 37 ] In a 2017 survey of European clinicians, in the prior year two-thirds had not ordered a pharmacogenetic test. [ 38 ]
In 2010, Vanderbilt University Medical Center launched the Pharmacogenomic Resource for Enhanced Decisions in Care and Treatment (PREDICT); [ 39 ] in a 2015 survey, two-thirds of the clinicians had ordered a pharmacogenetic test. [ 40 ]
In 2019, the largest private health insurer in the United States, UnitedHealthcare , announced that it would pay for genetic testing to predict response to psychiatric drugs. [ 41 ]
In 2020, Canada's 4th largest health and dental insurer, Green Shield Canada , announced that it would pay for pharmacogenetic testing and its associated clinical decision support software to optimize and personalize mental health prescriptions. [ 42 ]
A potential role for pharmacogenomics is to reduce the occurrence of polypharmacy : it is theorized that with tailored drug treatments, patients will not need to take several medications to treat the same condition. This could potentially reduce the occurrence of adverse drug reactions , improve treatment outcomes, and save costs by avoiding the purchase of some medications. For example, possibly due to inappropriate prescribing, psychiatric patients tend to receive more medications than age-matched non-psychiatric patients. [ 43 ]
The need for pharmacogenomically tailored drug therapies may be most evident in a survey conducted by the Slone Epidemiology Center at Boston University from February 1998 to April 2007. The study found that an average of 82% of adults in the United States were taking at least one medication (prescription or nonprescription drug, vitamin/mineral, herbal/natural supplement), and 29% were taking five or more. The study suggested that those aged 65 years or older continue to be the biggest consumers of medications, with 17-19% in this age group taking at least ten medications in a given week. Polypharmacy has also been shown to have increased since 2000, from 23% to 29%. [ 44 ]
Case A – Antipsychotic adverse reaction [ 45 ]
Patient A has schizophrenia. Their treatment included a combination of ziprasidone, olanzapine, trazodone and benztropine . The patient experienced dizziness and sedation, so they were tapered off ziprasidone and olanzapine, and transitioned to quetiapine. Trazodone was discontinued. The patient then experienced excessive sweating, tachycardia and neck pain, gained considerable weight and had hallucinations. Five months later, quetiapine was tapered and discontinued, with ziprasidone re-introduced into their treatment, due to the excessive weight gain. Although the patient lost the excessive weight they had gained, they then developed muscle stiffness, cogwheeling , tremors and night sweats. When benztropine was added they experienced blurry vision. After an additional five months, the patient was switched from ziprasidone to aripiprazole. Over the course of 8 months, patient A gradually experienced more weight gain and sedation, and developed difficulty with their gait, stiffness, cogwheeling and dyskinetic ocular movements. A pharmacogenomic test later revealed that the patient had a CYP2D6 *1/*41 genotype, which has a predicted intermediate metabolizer (IM) phenotype, and a CYP2C19 *1/*2 genotype, also with a predicted IM phenotype.
Case B – Pain Management [ 46 ]
Patient B is a woman who gave birth by caesarean section. Her physician prescribed codeine for post-caesarean pain. She took the standard prescribed dose, but experienced nausea and dizziness while she was taking codeine. She also noticed that her breastfed infant was lethargic and feeding poorly. When the patient mentioned these symptoms to her physician, they recommended that she discontinue codeine use. Within a few days, both the patient's and her infant's symptoms were no longer present. It is assumed that if the patient had undergone a pharmacogenomic test, it would have revealed a duplication of the gene CYP2D6, placing her in the ultrarapid metabolizer (UM) category and explaining her reactions to codeine use.
Case C – FDA Warning on Codeine Overdose for Infants [ 47 ]
On February 20, 2013, the FDA released a statement addressing a serious concern regarding the connection between children who are known as CYP2D6 UM, and fatal reactions to codeine following tonsillectomy and/or adenoidectomy (surgery to remove the tonsils and/or adenoids). They released their strongest Boxed Warning to elucidate the dangers of CYP2D6 UMs consuming codeine. Codeine is converted to morphine by CYP2D6, and those who have UM phenotypes are in danger of producing large amounts of morphine due to the increased function of the gene. The morphine can elevate to life-threatening or fatal amounts, as became evident with the death of three children in August 2012.
Although there appears to be a general acceptance of the basic tenet of pharmacogenomics amongst physicians and healthcare professionals, [ 49 ] several challenges exist that slow the uptake, implementation, and standardization of pharmacogenomics. Some of the concerns raised by physicians include: [ 50 ] [ 49 ] [ 51 ]
Issues surrounding the availability of the test include: [ 48 ]
Although other factors contribute to the slow progression of pharmacogenomics (such as developing guidelines for clinical use), the above factors appear to be the most prevalent. Increasingly substantial evidence and industry-body guidelines for the clinical use of pharmacogenetics have made it a population-wide approach to precision medicine. Cost, reimbursement, education, and ease of use at the point of care remain significant barriers to wide-scale adoption.
There has been call to move away from race and ethnicity in medicine and instead use genetic ancestry as a way to categorize patients. [ 52 ] Some alleles that vary in frequency between specific populations have been shown to be associated with differential responses to specific drugs . As a result, some disease-specific guidelines only recommend pharmacogenetic testing for populations where high-risk alleles are more common [ 53 ] and, similarly, certain insurance companies will only pay for pharmacogenetic testing for beneficiaries of high-risk populations. [ 54 ]
In the early 2000s, handling genetic information as exceptional, including legal or regulatory protections, garnered strong support. It was argued that genomic information may need special policy and practice protections within the context of electronic health records (EHRs). [ 55 ] In 2008, the Genetic Information Nondiscrimination Act (GINA) was enacted to protect patients from health insurance companies discriminating against an individual based on genetic information. [ 56 ] [ 57 ]
More recently it has been argued that genetic exceptionalism is past its expiration date as we move into a blended genomic/big-data era of medicine, yet exceptionalism practices continue to permeate clinical healthcare today. [ 58 ] [ 59 ] Garrison et al. recently issued a call to action to update the terminology from genetic exceptionalism to genomic contextualism, recognizing a fundamental duality of genetic information. [ 60 ] This allows room in the argument for different types of genetic information to be handled differently while acknowledging that genomic information is similar to, and yet distinct from, other health-related information. [ 60 ] Genomic contextualism would allow for a case-by-case analysis of the technology and the context of its use (e.g., clinical practice, research, secondary findings).
Others argue that genetic information is indeed distinct from other health-related information but not to the extent of requiring legal/regulatory protections, similar to other sensitive health-related data such as HIV status. [ 61 ] Additionally, Evans et al. argue that the EHR has sufficient privacy standards to hold other sensitive information such as social security numbers and that the fundamental nature of an EHR is to house highly personal information. [ 58 ] Similarly, a systematic review reported that the public had concerns over the privacy of genetic information, with 60% agreeing that maintaining privacy was not possible; however, 96% agreed that a direct-to-consumer testing company had protected their privacy, and 74% said their information would be similarly or better protected in an EHR. With increasing technological capabilities in EHRs, it is possible to mask or hide genetic data from subsets of providers, and there is no consensus on how, when, or from whom genetic information should be masked. [ 55 ] [ 62 ] Rigorous protection and masking of genetic information is argued to impede further scientific progress and clinical translation into routine clinical practice. [ 63 ]
Pharmacogenomics was first recognized by Pythagoras around 510 BC when he made a connection between the dangers of fava bean ingestion with hemolytic anemia and oxidative stress . In the 1950s , this identification was validated and attributed to deficiency of G6PD and is called favism . [ 64 ] [ 65 ] Although the first official publication was not until 1961, [ 66 ] the unofficial beginnings of this science were around the 1950s. Reports of prolonged paralysis and fatal reactions linked to genetic variants in patients who lacked butyrylcholinesterase ('pseudocholinesterase') following succinylcholine injection during anesthesia were first reported in 1956. [ 2 ] [ 67 ] The term pharmacogenetics was first coined in 1959 by Friedrich Vogel of Heidelberg , Germany (although some papers suggest it was 1957 or 1958). [ 68 ] In the late 1960s, twin studies supported the inference of genetic involvement in drug metabolism, with identical twins sharing remarkable similarities in drug response compared to fraternal twins. [ 69 ] The term pharmacogenomics first began appearing around the 1990s. [ 64 ]
The first FDA approval of a pharmacogenetic test came in 2005 [ 9 ] (for alleles in CYP2D6 and CYP2C19 ).
Computational advances have enabled cheaper and faster sequencing. [ 70 ] Research has focused on combinatorial chemistry , [ 71 ] genomic mining, omic technologies, and high throughput screening .
As the cost per genetic test decreases, the development of personalized drug therapies will increase. [ 72 ] Technology now allows for genetic analysis of hundreds of target genes involved in medication metabolism and response in less than 24 hours for under $1,000. This is a major step towards bringing pharmacogenetic technology into everyday medical decisions. Likewise, companies such as deCODE genetics , MD Labs Pharmacogenetics, Navigenics and 23andMe offer genome scans. These companies use the same genotyping chips that are used in genome-wide association studies (GWAS) and provide customers with a write-up of individual risk for various traits and diseases, as well as testing for 500,000 known SNPs. Costs range from $995 to $2,500 and include updates with new data from studies as they become available. The more expensive packages even include a telephone session with a genetics counselor to discuss the results. [ 73 ]
Pharmacogenetics has become a controversial issue in the area of bioethics . Privacy and confidentiality are major concerns. [ 74 ] The evidence of benefit or risk from a genetic test may only be suggestive, which could cause dilemmas for providers. [ 74 ] : 145 Drug development may be affected, with rare genetic variants possibly receiving less research. [ 74 ] Access and patient autonomy are also open to discussion. [ 75 ] : 680
Journals: | https://en.wikipedia.org/wiki/Pharmacogenomics |
Drug discovery and development requires the integration of multiple scientific and technological disciplines. These include chemistry , biology , pharmacology , pharmaceutical technology and extensive use of information technology . The latter is increasingly recognised as Pharmacoinformatics . Pharmacoinformatics relates to the broader field of bioinformatics .
The main idea behind the field is to integrate different informatics branches (e.g. bioinformatics, chemoinformatics, immunoinformatics, etc.) into a single platform, resulting in a seamless process of drug discovery. The first reference to the term "Pharmacoinformatics" can be found in 1993. [ 1 ]
The first dedicated department for Pharmacoinformatics was established at the National Institute of Pharmaceutical Education and Research , S.A.S. Nagar, India in 2003. [ 2 ] This has been followed by programs at different universities worldwide, including one run by European universities named the European Pharmacoinformatics Initiative (Europin [ 3 ] ).
Pharmacoinformatics is also referred to as pharmacy informatics. According to the article "Pharmacy Informatics: What You Need to Know Now" by the University of Illinois at Chicago, Pharmacoinformatics may be defined as: “the scientific field that focuses on medication-related data and knowledge within the continuum of healthcare systems. [ 4 ] ” It is the application of computers to the storage, retrieval and analysis of drug and prescription information. Pharmacy informaticists work with pharmacy information management systems that help the pharmacist make safe decisions about patient drug therapies with respect to medical insurance records, drug interactions, and prescription and patient information.
Pharmacy informatics can be thought of as a sub-domain of the larger professional discipline of health informatics. Health informatics is the study of interactions between people, their work processes and engineered systems within health care with a focus on pharmaceutical care and improved patient safety. For example, the Health Information Management Systems Society (HIMSS) defines pharmacy informatics as, "the scientific field that focuses on medication-related data and knowledge within the continuum of healthcare systems - including its acquisition, storage, analysis, use and dissemination - in the delivery of optimal medication-related patient care and health outcomes" | https://en.wikipedia.org/wiki/Pharmacoinformatics |
Pharmacokinetics (from Ancient Greek pharmakon "drug" and kinetikos "moving, putting in motion"; see chemical kinetics ), sometimes abbreviated as PK , is a branch of pharmacology dedicated to describing how the body affects a specific substance after administration. [ 1 ] The substances of interest include any chemical xenobiotic such as pharmaceutical drugs , pesticides , food additives , cosmetics , etc. It attempts to analyze chemical metabolism and to discover the fate of a chemical from the moment that it is administered up to the point at which it is completely eliminated from the body . Pharmacokinetics is based on mathematical modeling that places great emphasis on the relationship between drug plasma concentration and the time elapsed since the drug's administration. Pharmacokinetics is the study of how an organism affects the drug, whereas pharmacodynamics (PD) is the study of how the drug affects the organism. Both together influence dosing , benefit, and adverse effects , as seen in PK/PD models .
Pharmacokinetics :
A number of phases occur once the drug enters into contact with the organism; these are described using the acronym ADME (or LADME if liberation is included as a separate step from absorption):
Some textbooks combine the first two phases as the drug is often administered in an active form, which means that there is no liberation phase. Others include a phase that combines distribution, metabolism and excretion into a disposition phase. Other authors include the drug's toxicological aspect in what is known as ADME-Tox or ADMET . The two phases of metabolism and excretion can be grouped together under the title elimination .
The study of these distinct phases involves the use and manipulation of basic concepts in order to understand the process dynamics. For this reason, in order to fully comprehend the kinetics of a drug it is necessary to have detailed knowledge of a number of factors such as: the properties of the substances that act as excipients , the characteristics of the appropriate biological membranes and the way that substances can cross them, or the characteristics of the enzyme reactions that inactivate the drug.
The following are the most commonly measured pharmacokinetic metrics: [ 5 ] The units of the dose in the table are expressed in moles (mol) and molar (M). To express the metrics of the table in units of mass, instead of Amount of substance , simply replace 'mol' with 'g' and 'M' with 'g/L'. Similarly, other units in the table may be expressed in units of an equivalent dimension by scaling. [ 6 ]
where $C_{\mathrm{av,ss}} = \dfrac{AUC_{\tau,\mathrm{ss}}}{\tau}$
In pharmacokinetics, steady state refers to the situation where the overall intake of a drug is fairly in dynamic equilibrium with its elimination. In practice, it is generally considered that once regular dosing of a drug is started, steady state is reached after 3 to 5 times its half-life. In steady state and in linear pharmacokinetics, AUC τ =AUC ∞ . [ 8 ]
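As a rough illustration of the "3 to 5 half-lives" rule of thumb, for a drug with linear (first-order) kinetics the fraction of steady state reached after n half-lives of regular dosing is 1 − 2⁻ⁿ; the short sketch below simply prints this fraction for a few values of n.

```python
# Fraction of steady state reached after n half-lives of regular dosing,
# assuming linear (first-order) pharmacokinetics: f(n) = 1 - 2**(-n).
for n_half_lives in range(1, 7):
    fraction = 1 - 2 ** (-n_half_lives)
    print(f"{n_half_lives} half-lives: {fraction:.1%} of steady state")
# After 3-5 half-lives the drug is roughly 88-97% of the way to steady state,
# which is why that range is commonly quoted.
```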
Models have been developed to simplify conceptualization of the many processes that take place in the interaction between an organism and a chemical substance. Pharmacokinetic modelling may be performed either by noncompartmental or compartmental methods. Multi-compartment models provide the best approximations to reality; however, the complexity involved in adding parameters with that modelling approach means that monocompartmental models and above all two compartmental models are the most-frequently used. The model outputs for a drug can be used in industry (for example, in calculating bioequivalence when designing generic drugs) or in the clinical application of pharmacokinetic concepts. Clinical pharmacokinetics provides many performance guidelines for effective and efficient use of drugs for human-health professionals and in veterinary medicine .
Models generally take the form of mathematical formulas that have a corresponding graphical representation . The use of these models allows an understanding of the characteristics of a molecule , as well as how a particular drug will behave given information regarding some of its basic characteristics such as its acid dissociation constant (pKa), bioavailability and solubility , absorption capacity and distribution in the organism. A variety of analysis techniques may be used to develop models, such as nonlinear regression or curve stripping.
Noncompartmental methods estimate PK parameters directly from a table of concentration-time measurements. Noncompartmental methods are versatile in that they do not assume any specific model and generally produce accurate results acceptable for bioequivalence studies. Total drug exposure is most often estimated by area under the curve (AUC) methods, with the trapezoidal rule ( numerical integration ) the most common method. Because the trapezoidal rule depends on the width of the time intervals, the area estimate is highly dependent on the blood/plasma sampling schedule: the closer together the time points are, the more closely the trapezoids reflect the actual shape of the concentration-time curve. The number of time points available in order to perform a successful NCA analysis should be enough to cover the absorption, distribution and elimination phases to accurately characterize the drug. Beyond AUC exposure measures, parameters such as Cmax (maximum concentration), Tmax (time to maximum concentration), CL and Vd can also be reported using NCA methods.
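The following minimal sketch illustrates the noncompartmental approach described above: it computes AUC by the linear trapezoidal rule, together with Cmax and Tmax, from a small table of concentration-time points. The sample data are invented for illustration.

```python
# Minimal noncompartmental analysis (NCA) sketch: AUC by the linear
# trapezoidal rule plus Cmax/Tmax, from a table of concentration-time points.
times = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0]   # h (made-up sampling schedule)
concs = [0.0, 4.2, 6.8, 5.9, 3.1, 1.2, 0.4]    # mg/L (made-up concentrations)

def auc_trapezoid(t, c):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return sum((t[i + 1] - t[i]) * (c[i] + c[i + 1]) / 2 for i in range(len(t) - 1))

auc = auc_trapezoid(times, concs)
cmax = max(concs)
tmax = times[concs.index(cmax)]
print(f"AUC(0-{times[-1]} h) = {auc:.2f} mg*h/L, Cmax = {cmax} mg/L at Tmax = {tmax} h")
```

A denser sampling schedule would make the trapezoids hug the true curve more closely, which is exactly the dependence on sampling times noted above.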
Compartment models methods estimate the concentration-time graph by modeling it as a system of differential equations. These models are based on a consideration of an organism as a number of related compartments . Both single compartment and multi-compartment models are in use. PK compartmental models are often similar to kinetic models used in other scientific disciplines such as chemical kinetics and thermodynamics . The advantage of compartmental over noncompartmental analysis is the ability to modify parameters and to extrapolate to novel situations. The disadvantage is the difficulty in developing and validating the proper model. Although compartment models have the potential to realistically model the situation within an organism, models inevitably make simplifying assumptions and will not be applicable in all situations. However complicated and precise a model may be, it still does not truly represent reality despite the effort involved in obtaining various distribution values for a drug. This is because the concept of distribution volume is a relative concept that is not a true reflection of reality. The choice of model therefore comes down to deciding which one offers the lowest margin of error for the drug involved.
The simplest PK compartmental model is the one-compartmental PK model. This models an organism as one homogenous compartment. This monocompartmental model presupposes that blood plasma concentrations of the drug are the only information needed to determine the drug's concentration in other fluids and tissues. For example, the concentration in other areas may be approximately related by known, constant factors to the blood plasma concentration.
In this one-compartment model, the most common model of elimination is first order kinetics , where the elimination of the drug is directly proportional to the drug's concentration in the organism. This is often called linear pharmacokinetics , as the change in concentration over time can be expressed as a linear differential equation $\frac{dC}{dt} = -k_{\text{el}} C$. Assuming a single IV bolus dose resulting in a concentration $C_{\text{initial}}$ at time $t = 0$, the equation can be solved to give $C = C_{\text{initial}} \, e^{-k_{\text{el}} t}$.
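A minimal numerical illustration of this one-compartment, first-order model is sketched below; the dose, volume of distribution and half-life are assumed values chosen only to show the exponential decline.

```python
import math

# One-compartment IV bolus with first-order elimination (illustrative values):
# C(t) = C_initial * exp(-k_el * t), with k_el = ln(2) / t_half.
dose_mg = 500.0
vd_l = 35.0                      # assumed volume of distribution (L)
t_half_h = 6.0                   # assumed elimination half-life (h)
k_el = math.log(2) / t_half_h
c_initial = dose_mg / vd_l       # initial plasma concentration (mg/L)

for t in (0, 2, 6, 12, 24):
    c = c_initial * math.exp(-k_el * t)
    print(f"t = {t:>2} h: C = {c:5.2f} mg/L")
```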
Not all body tissues have the same blood supply , so the distribution of the drug will be slower in these tissues than in others with a better blood supply. In addition, there are some tissues (such as the brain tissue) that present a real barrier to the distribution of drugs, that can be breached with greater or lesser ease depending on the drug's characteristics. If these relative conditions for the different tissue types are considered along with the rate of elimination, the organism can be considered to be acting like two compartments: one that we can call the central compartment that has a more rapid distribution, comprising organs and systems with a well-developed blood supply; and a peripheral compartment made up of organs with a lower blood flow. Other tissues, such as the brain, can occupy a variable position depending on a drug's ability to cross the barrier that separates the organ from the blood supply.
Two-compartment models vary depending on which compartment elimination occurs in. The most common situation is that elimination occurs in the central compartment as the liver and kidneys are organs with a good blood supply. However, in some situations it may be that elimination occurs in the peripheral compartment or even in both. This can mean that there are three possible variations in the two compartment model, which still do not cover all possibilities. [ 9 ]
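The sketch below illustrates the most common of these variants, a two-compartment model with elimination from the central compartment only, by integrating the corresponding pair of differential equations; all rate constants and the dose are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-compartment IV bolus sketch with elimination from the central
# compartment only; rate constants below are illustrative, not measured.
k10, k12, k21 = 0.20, 0.15, 0.10      # 1/h: elimination, central->peripheral, back
dose, v_central = 500.0, 20.0         # mg, L

def rates(t, y):
    a_c, a_p = y                      # drug amounts in central / peripheral compartments
    da_c = -k10 * a_c - k12 * a_c + k21 * a_p
    da_p = k12 * a_c - k21 * a_p
    return [da_c, da_p]

t_eval = np.linspace(0, 24, 7)
sol = solve_ivp(rates, (0, 24), [dose, 0.0], t_eval=t_eval)
for t, a_c in zip(sol.t, sol.y[0]):
    print(f"t = {t:4.1f} h: central conc = {a_c / v_central:5.2f} mg/L")
```

Moving the elimination term to the peripheral compartment (or to both) gives the other variations mentioned above.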
In the real world, each tissue will have its own distribution characteristics and none of them will be strictly linear. The two-compartment model may not be applicable in situations where some of the enzymes responsible for metabolizing the drug become saturated, or where an active elimination mechanism is present that is independent of the drug's plasma concentration. If we label the drug's volume of distribution within the organism $Vd_F$ and its volume of distribution in a tissue $Vd_T$, the former will be described by an equation that takes into account all the tissues that act in different ways, that is:
This represents the multi-compartment model with a number of curves that express complicated equations in order to obtain an overall curve. A number of computer programs have been developed to plot these equations. [ 9 ] The most complex PK models (called PBPK models) rely on the use of physiological information to ease development and validation.
The graph for the non-linear relationship between the various factors is represented by a curve ; the relationships between the factors can then be found by calculating the dimensions of different areas under the curve. The models used in non-linear pharmacokinetics are largely based on Michaelis–Menten kinetics . A reaction's factors of non-linearity include the following:
It can therefore be seen that non-linearity can occur because of reasons that affect the entire pharmacokinetic sequence: absorption, distribution, metabolism and elimination.
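As a simple illustration of such non-linearity, the sketch below integrates a Michaelis–Menten elimination model and compares it with a first-order model whose rate constant matches the low-concentration limit; Vmax, Km and the starting concentration are assumed values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch comparing nonlinear (Michaelis-Menten) elimination with first-order
# elimination; Vmax, Km and the initial concentration are illustrative only.
vmax, km = 10.0, 2.0          # mg/L/h, mg/L
k_el = vmax / km              # first-order constant matching the low-concentration limit

def michaelis_menten(t, c):
    return [-vmax * c[0] / (km + c[0])]

def first_order(t, c):
    return [-k_el * c[0]]

c0, t_span, t_eval = [20.0], (0, 10), np.linspace(0, 10, 6)
mm = solve_ivp(michaelis_menten, t_span, c0, t_eval=t_eval)
fo = solve_ivp(first_order, t_span, c0, t_eval=t_eval)
for t, c_mm, c_fo in zip(t_eval, mm.y[0], fo.y[0]):
    print(f"t = {t:4.1f} h: MM = {c_mm:6.2f} mg/L, first-order = {c_fo:6.2f} mg/L")
# At concentrations well above Km the Michaelis-Menten process saturates and
# behaves almost zero-order, so elimination is much slower than the matched
# first-order model would predict.
```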
At a practical level, a drug's bioavailability can be defined as the proportion of the drug that reaches the systemic circulation. From this perspective the intravenous administration of a drug provides the greatest possible bioavailability, and this method is considered to yield a bioavailability of 1 (or 100%). Bioavailability of other delivery methods is compared with that of intravenous injection (absolute bioavailability) or to a standard value related to other delivery methods in a particular study (relative bioavailability).
Once a drug's bioavailability has been established it is possible to calculate the changes that need to be made to its dosage in order to reach the required blood plasma levels. Bioavailability is, therefore, a mathematical factor for each individual drug that influences the administered dose. It is possible to calculate the amount of a drug in the blood plasma that has a real potential to bring about its effect using the formula:
$De = B \cdot Da$, where De is the effective dose , B the bioavailability, and Da the administered dose.
Therefore, if a drug has a bioavailability of 0.8 (or 80%) and it is administered in a dose of 100 mg, the equation gives $De = 0.8 \times 100\ \text{mg} = 80\ \text{mg}$.
That is, of the 100 mg administered, the equivalent of 80 mg reaches the systemic circulation and has the capacity to produce a pharmacological effect.
This concept depends on a series of factors inherent to each drug, such as: [ 12 ]
These concepts, which are discussed in detail in their respective titled articles, can be mathematically quantified and integrated to obtain an overall mathematical equation:
where Q is the drug's purity. [ 12 ]
where $V_a$ is the drug's rate of administration and $\tau$ is the rate at which the absorbed drug reaches the circulatory system.
Finally, using the Henderson-Hasselbalch equation , and knowing the drug's $pK_a$ (the pH at which there is an equilibrium between its ionized and non-ionized molecules), it is possible to calculate the non-ionized concentration of the drug and therefore the concentration that will be subject to absorption:
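The equation itself is not reproduced here, but the sketch below shows the calculation it describes: using the Henderson-Hasselbalch relationship to estimate the non-ionized (and hence more readily absorbed) fraction of a drug at different pH values. The pKa and pH values are illustrative.

```python
# Henderson-Hasselbalch sketch: fraction of drug in the non-ionized form,
# which is the fraction most readily absorbed across membranes.
def nonionized_fraction(pka: float, ph: float, acid: bool) -> float:
    """Weak acid: HA <-> A-; weak base: BH+ <-> B."""
    if acid:
        return 1.0 / (1.0 + 10 ** (ph - pka))
    return 1.0 / (1.0 + 10 ** (pka - ph))

# A weak acid with an assumed pKa of 3.5 in the stomach (pH ~1.5) vs plasma (pH 7.4):
print(f"stomach: {nonionized_fraction(3.5, 1.5, acid=True):.3f}")
print(f"plasma : {nonionized_fraction(3.5, 7.4, acid=True):.6f}")
```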
When two drugs have the same bioavailability, they are said to be biological equivalents or bioequivalents. This concept of bioequivalence is important because it is currently used as a yardstick in the authorization of generic drugs in many countries.
Bioanalytical methods are necessary to construct a concentration-time profile. Chemical techniques are employed to measure the concentration of drugs in biological matrix , most often plasma. Proper bioanalytical methods should be selective and sensitive. For example, microscale thermophoresis can be used to quantify how the biological matrix/liquid affects the affinity of a drug to its target. [ 13 ] [ 14 ]
Pharmacokinetics is often studied using mass spectrometry because of the complex nature of the matrix (often plasma or urine) and the need for high sensitivity to observe concentrations after a low dose and a long time period. The most common instrumentation used in this application is LC-MS with a triple quadrupole mass spectrometer . Tandem mass spectrometry is usually employed for added specificity. Standard curves and internal standards are used for quantitation of usually a single pharmaceutical in the samples. The samples represent different time points as a pharmaceutical is administered and then metabolized or cleared from the body. Blank samples taken before administration are important in determining background and ensuring data integrity with such complex sample matrices. Much attention is paid to the linearity of the standard curve; however it is common to use curve fitting with more complex functions such as quadratics since the response of most mass spectrometers is not linear across large concentration ranges. [ 15 ] [ 16 ] [ 17 ]
There is currently considerable interest in the use of very high sensitivity mass spectrometry for microdosing studies, which are seen as a promising alternative to animal experimentation . [ 18 ] Recent studies show that Secondary electrospray ionization (SESI-MS) can be used in drug monitoring, presenting the advantage of avoiding animal sacrifice. [ 19 ]
Population pharmacokinetics is the study of the sources and correlates of variability in drug concentrations among individuals who are the target patient population receiving clinically relevant doses of a drug of interest. [ 20 ] [ 21 ] [ 22 ] Certain patient demographic, pathophysiological, and therapeutical features, such as body weight, excretory and metabolic functions, and the presence of other therapies, can regularly alter dose-concentration relationships and can explain variability in exposures. For example, steady-state concentrations of drugs eliminated mostly by the kidney are usually greater in patients with kidney failure than they are in patients with normal kidney function receiving the same drug dosage. Population pharmacokinetics seeks to identify the measurable pathophysiologic factors and explain sources of variability that cause changes in the dose-concentration relationship and the extent of these changes so that, if such changes are associated with clinically relevant and significant shifts in exposures that impact the therapeutic index, dosage can be appropriately modified.
An advantage of population pharmacokinetic modelling is its ability to analyse sparse data sets (sometimes only one concentration measurement per patient is available).
Clinical pharmacokinetics (arising from the clinical use of population pharmacokinetics) is the direct application to a therapeutic situation of knowledge regarding a drug's pharmacokinetics and the characteristics of a population that a patient belongs to (or can be ascribed to).
An example is the relaunch of the use of ciclosporin as an immunosuppressor to facilitate organ transplants. The drug's therapeutic properties were initially demonstrated, but it was almost never used after it was found to cause nephrotoxicity in a number of patients. [ 23 ] However, it was then realized that it was possible to individualize a patient's dose of ciclosporin by analysing the patient's plasma concentrations (pharmacokinetic monitoring). This practice has allowed this drug to be used again and has facilitated a great number of organ transplants.
Clinical monitoring is usually carried out by determination of plasma concentrations as this data is usually the easiest to obtain and the most reliable. The main reasons for determining a drug's plasma concentration include: [ 24 ]
Ecotoxicology is the branch of science that deals with the nature, effects, and interactions of substances that are harmful to the environment, such as microplastics and other substances harmful to the biosphere. [ 25 ] [ 26 ] Ecotoxicology is studied in pharmacokinetics because substances that harm the environment, such as pesticides, can get into the bodies of living organisms. The health effects of these chemicals are thus the subject of research and safety trials by government or international agencies such as the EPA or WHO . [ 27 ] [ 28 ] How long these chemicals stay in the body , their lethal dose and other factors are the main focus of ecotoxicology. | https://en.wikipedia.org/wiki/Pharmacokinetics
Pharmacokinetics simulation is a simulation method used in determining the safety levels of a drug during its development .
Pharmacokinetics simulation gives insight into drug efficacy and safety before individuals are exposed to the new drug, which might help to improve the design of a clinical trial .
Pharmacokinetic simulations also help in therapy planning, for example to keep drug exposure within the therapeutic range under various physiological and pathophysiological conditions, e.g., chronic kidney disease .
Simcyp Simulator and GastroPlus (from Simulations Plus ) are simulators that take account of individual variability.
PharmaCalc v02 and PharmaCalcCL allow the simulation of individual plasma concentration-time curves based on (published) pharmacokinetic parameters such as half-life, volume of distribution, etc.
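As an indication of the kind of calculation such tools perform, the sketch below generates an individual plasma concentration-time curve for a single oral dose from half-life, volume of distribution and related parameters, using a one-compartment model with first-order absorption (the Bateman equation). It is not based on any of the named programs, and all parameter values are assumed.

```python
import math

# Single oral dose, one-compartment model with first-order absorption
# (Bateman equation); all parameter values below are assumed for illustration.
dose_mg, f_bio = 200.0, 0.8          # dose and bioavailability
vd_l, t_half_h, ka = 40.0, 8.0, 1.2  # volume of distribution, half-life, absorption rate (1/h)
ke = math.log(2) / t_half_h          # elimination rate constant

def concentration(t_h: float) -> float:
    return (f_bio * dose_mg * ka) / (vd_l * (ka - ke)) * (
        math.exp(-ke * t_h) - math.exp(-ka * t_h)
    )

for t in (0.5, 1, 2, 4, 8, 12, 24):
    print(f"t = {t:>4} h: C = {concentration(t):5.2f} mg/L")
```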
| https://en.wikipedia.org/wiki/Pharmacokinetics_simulation
Pharmacological cardiotoxicity is defined as cardiac damage that occurs under the action of a drug . This can occur both through damage of cardiac muscle as well as through alteration of the ion currents of cardiomyocytes . [ 1 ]
Two distinct drug classes in which cardiotoxicity can occur are anti-cancer and antiarrhythmic drugs . Anti-cancer drug classes that cause cardiotoxicity include anthracyclines , monoclonal antibodies , and antimetabolites . This form generally manifests as a progressive form of heart failure , but can also manifest as a harmful arrhythmia . [ 2 ] In contrast, in antiarrhythmic drugs , cardiotoxicity is due to a risk of arrhythmias resulting from treatment-induced ion current imbalance. [ 3 ]
Other types of drugs are also known for cardiotoxicity, such as clozapine being associated with myocarditis. [ 4 ]
The cardiotoxicity of anticancer drugs has been well documented, with an entire sub-speciality of cardio-oncology dedicated towards investigating and treating these serious side effects. Two well known anticancer drug families that cause cardiotoxicity are anthracyclines and monoclonal antibodies targeting HER2. Other types of anticancer drugs that can lead to cardiotoxicity include alkylating agents such as cyclophosphamide , tyrosine kinase inhibitors targeting BCR-ABL1 such as imatinib , and VEGF antibodies such as bevacizumab . [ 5 ] This section of the article will focus on anthracyclines and HER2 monoclonal antibodies due to the prominence of cardiotoxicity in these compounds.
The mechanism of anthracycline-induced cardiotoxicity is unknown and is under active research. However, multiple theories exist. One well supported mechanism is related to the production of superoxide anion radicals that in turn damage cardiac myocytes. [ 6 ] Recent research suggests that Top2b (topoisomerase-IIβ) helps mediate the production of oxygen radicals, representing a potential biomarker for this serious side effect. [ 7 ] Other proposed mechanisms include interference with cardiac ATP production, mitochondria-related stress, and lipid peroxidation. [ 6 ]
On the other hand, the mechanism of HER2 antibody cardiotoxicity is more well known. [ 8 ] HER2 is a protein expressed on the cell membranes of HER2 positive breast cancer cells. However, HER2 is also expressed on the surface of cardiac myocytes. It is hypothesized that HER2 expressed in these cardiac cells have a cardioprotective mechanism, and the targeting of these proteins in this context leads to the cardiotoxicity associated with HER2 monoclonal antibodies. [ 9 ]
The cardiotoxicity of anthracyclines can be classified into three categories: early, early-onset chronic, and late-onset chronic. Early cardiotoxicity is rare, but manifests as arrhythmias, myocarditis, and pericarditis. This type of toxicity occurs directly after treatment with an anthracycline. Early-onset chronic cardiotoxicity is defined as cardiotoxicity manifesting within one year of the completion of treatment, while late-onset chronic cardiotoxicity occurs after one year. [ 10 ] The cardiotoxicity of anthracyclines is dose dependent. At total exposure levels lower than 400 mg/m2, the incidence of heart failure is between 3% and 5%. At an exposure of 700 mg/m2, the heart failure rate is 48%. [ 11 ]
Cardiotoxicity involving HER2 monoclonal antibodies manifests as a decreased left ventricular ejection fraction and resulting heart failure. [ 12 ] The cardiotoxicity of HER2 monoclonal antibodies is dose independent. [ 13 ]
The immediate intervention for the development of cardiotoxicity is discontinuation of the drug. Preventative measures for anthracycline-induced cardiomyopathy include dexrazoxane , which is the only preventative drug approved by the FDA for prevention of anthracycline cardiomyopathy. [ 14 ] Overall, there are no specific treatments targeted towards the cardiotoxicity of anticancer drugs. Rather, treatment is of the resultant heart failure. This often takes the form of ACE inhibitors or beta blockers . [ 15 ]
Antiarrhythmics are a broad class of drugs used to treat heart rhythm irregularities. [ 16 ] Using the Vaughan-Williams (VW) system, antiarrhythmic drugs are classified into four main classes based on their mechanism of action. Class I antiarrhythmics block sodium channels. Class II antiarrhythmics are beta-adrenoceptor blockers . Class III antiarrhythmics act as potassium channel blockers , while Class IV antiarrhythmics are non-dihydropyridine calcium channel blockers . While the effects of these drugs may be antiarrhythmic, they can also be proarrhythmic in other contexts.
The pharmacological cardiotoxicity of antiarrhythmic compounds is related to their electrophysiological mechanism. In particular, because antiarrhythmic drugs act on the opening and closing of ion channels, the modification of the resulting electrical currents can lead to adverse cardiac events such as torsade de pointes or ventricular fibrillation . Due to the case-by-case basis on which these medications lead to cardiotoxicity and the development of specific adverse rhythms, it has become increasingly important to assess compounds in a preclinical environment (See Pharmacological cardiotoxicity#In Silico Cardiotoxicity Assessment ).
Antiarrhythmic cardiotoxicity may manifest as worsening of a pre-existing arrhythmia or the development of a new arrhythmia.
Female sex at birth has been associated with an increased risk of the development of new arrhythmia, and other risk factors include age, kidney disease, drug-drug interactions, and other underlying heart problems. [ 17 ]
Like with anticancer drugs, the most common intervention for the development of cardiotoxicity is discontinuation of the causative drug. Individual risk factors, such as risk of arrhythmia re-emergence, are considered when deciding final courses of action. Adjacent devices, such as pacemakers, or ablation therapy may also be considered as alternatives to medical treatment for the primary arrhythmia. [ 17 ]
The treatment of torsade de pointes is typically with intravenous magnesium sulfate, which helps stabilize cardiac membranes. [ 18 ] For ventricular fibrillation cases, either/or defibrillation , amiodarone , or epinephrine is used dependent on the ACLS algorithm. [ 19 ]
In recent years, in silico models have aided scientists and clinicians in treating several diseases. [ 20 ] Computational modeling in particular has allowed scientists to alter parameters that could not otherwise have been investigated. [ 20 ]
In the field of electrophysiology , pharmacological cardiotoxicity assessment can be carried out by leveraging specific computational models. Recently, it has become possible to analyze pharmacological effects on the atria and ventricles separately. [ 21 ] [ 22 ]
Since the atria and ventricles are very different from each other and play distinct roles both functionally and anatomically, suitable computational models have to be adopted to describe their different behavior. Over the years, several models have been developed to best characterize and replicate the cellular action potential behavior of the most relevant anatomical regions of the heart, such as the Courtemanche model for the atria or the O'Hara model for the ventricles. [ 21 ] [ 22 ]
In this way, it has been possible to create a virtual cellular population of cardiomyocytes by varying the conductances related to the main ionic currents that contribute to the action potential morphology, such that the population is reflective of a specific anatomical region of the heart. [ 23 ] [ 24 ]
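A minimal sketch of this population-building step is given below: each virtual cell is obtained by scaling a set of baseline maximal conductances by random factors. The current names, baseline values and sampling range are placeholders, not those of a published cardiomyocyte model.

```python
import random

# Sketch of building a virtual population of cardiomyocyte models by scaling
# the maximal conductances of the main ionic currents; names, baseline values
# and the sampling range are placeholders for illustration only.
random.seed(0)
BASELINE = {"g_Na": 1.0, "g_CaL": 1.0, "g_Kr": 1.0, "g_Ks": 1.0, "g_K1": 1.0, "g_to": 1.0}

def sample_individual(spread: float = 0.3) -> dict:
    """One virtual cell: each conductance scaled by a random factor."""
    return {name: base * random.uniform(1 - spread, 1 + spread)
            for name, base in BASELINE.items()}

population = [sample_individual() for _ in range(5)]
for i, cell in enumerate(population):
    print(i, {k: round(v, 2) for k, v in cell.items()})
```

Each sampled parameter set would then be run through the chosen cell model (e.g., atrial or ventricular) to produce one member of the population of action potentials.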
In order to create a stable population of cellular action potentials , several biomarkers have been developed to best characterize the instability of cellular action potentials. Examples of biomarkers reported include: [ 23 ]
$APD_{90} = t_{90} - t_{0}$
$APD_{50} = t_{50} - t_{0}$
$APD_{20} = t_{20} - t_{0}$
$\text{Triangulation} = APD_{90} - APD_{50}$
$APA = V_{\max} - V_{0}$
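The sketch below shows how these biomarkers can be computed from a sampled membrane-voltage trace; the synthetic, purely exponential trace is only a stand-in for a simulated cardiomyocyte action potential.

```python
import numpy as np

# Computing the action-potential biomarkers listed above from a sampled
# voltage trace; the crude exponential trace is a stand-in for a simulated AP.
dt = 1.0                                            # ms per sample
t = np.arange(0, 500, dt)
v_rest, v_peak = -85.0, 35.0
v = v_rest + (v_peak - v_rest) * np.exp(-t / 120.0) # crude repolarizing trace
v[0] = v_rest                                       # sample before the upstroke

def apd(v, dt, level):
    """Action potential duration at 'level'% repolarization (e.g. 90 for APD90)."""
    v0, v_max = v[0], v.max()
    apa = v_max - v0                                # action potential amplitude
    threshold = v_max - apa * level / 100.0
    upstroke = int(np.argmax(v))                    # index of the peak (t0 approximation)
    below = np.where(v[upstroke:] <= threshold)[0]
    return below[0] * dt if below.size else np.nan

apd90, apd50, apd20 = (apd(v, dt, p) for p in (90, 50, 20))
print(f"APD90={apd90:.0f} ms, APD50={apd50:.0f} ms, APD20={apd20:.0f} ms")
print(f"Triangulation = {apd90 - apd50:.0f} ms, APA = {v.max() - v[0]:.0f} mV")
```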
Once the cellular population is stable, all action potentials are then compared to physiological data related to the most relevant anatomical regions, in order to appropriately filter the action potentials and consider just the physiologically relevant ones. [ 26 ]
At the atrial level, clusterization occurs with data associated with: [ 26 ]
Following pharmacokinetic and pharmacodynamic principles, pharmacological action is integrated into the model. By means of specific electrical stimulation protocols, [ 27 ] the pharmacological effect of a new drug can be investigated in a completely safe and controlled computational environment, providing important preliminary considerations concerning the cardiotoxicity of new pharmacological compounds. [ 28 ]
According to the outcome of the simulations, several aspects can be investigated to identify the proarrhythmicity of a new pharmacological compound. [ 29 ] [ 30 ] The typical changes, known as repolarization abnormalities, that are considered pro-arrhythmic include: [ 30 ]
Simulations can be carried out at different effective therapeutic plasma levels of the drug to identify the level at which cardiotoxicity can no longer be neglected. The data collected can finally be used to create a scoring system aimed at defining the torsadogenic risk, namely the risk of inducing torsade de pointes, of new drugs. [ 31 ] [ 32 ]
A possible torsade de pointes risk score (TdPRS) to assess cardiotoxicity could be: [ 32 ] $$TdPRS = \frac{\sum_{c} \left( W_{c} \cdot nRA_{c} \right)}{N \cdot \sum_{c} W_{c}}$$
where $\sum_{c}$ denotes the sum over all concentrations considered, $[C]$ is the concentration taken into account, $W_{c} = \frac{EFTPC}{[C]}$, $N$ is the total number of models in the population, and $nRA_{c}$ represents the number of models showing repolarization abnormalities at concentration $[C]$. [ 32 ]
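A minimal sketch of this score is given below, assuming that for each simulated concentration the number of population models showing repolarization abnormalities is already known; all numbers, including the assumed EFTPC (effective free therapeutic plasma concentration) value, are invented for illustration.

```python
# Sketch of the TdP risk score defined above, given per-concentration counts
# of models with repolarization abnormalities. All values are invented.
EFTPC = 1.0                                   # assumed therapeutic plasma level (arbitrary units)
N = 200                                       # models in the virtual population
# concentration -> number of models with repolarization abnormalities (nRA_c)
abnormal_counts = {1.0: 0, 3.0: 4, 10.0: 35, 30.0: 120}

weights = {c: EFTPC / c for c in abnormal_counts}             # W_c = EFTPC / [C]
tdprs = sum(weights[c] * n_ra for c, n_ra in abnormal_counts.items()) / (
    N * sum(weights.values())
)
print(f"TdP risk score = {tdprs:.4f}")        # higher values suggest greater torsadogenic risk
```

Weighting each concentration by EFTPC/[C] means that abnormalities appearing near therapeutic exposure count far more than those seen only at large multiples of it.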
More detailed computational simulations can be carried out that move beyond isolated cellular models, taking into consideration the functional syncytium and enabling the cells to interact with one another through so-called electrotonic coupling. [ 33 ]
In the case of tissue simulations, or in wider cases such as whole-organ simulations, purely cellular models are no longer applicable and several corrections have to be made. Firstly, the governing equations can no longer be just ordinary differential equations ; a system of partial differential equations has to be accounted for. [ 34 ] A suitable choice may be the monodomain model: [ 35 ]
$$\nabla \cdot (D \nabla V) = C_{m} \frac{\partial V}{\partial t} + I_{ion}(V, u) \quad \text{in } \Omega$$
$$\mathbf{n} \cdot (D \nabla V) = 0 \quad \text{on } \partial\Omega$$
where $D$ is the effective conductivity tensor, $C_{m}$ is the capacitance of the cellular membrane, $I_{ion}$ is the transmembrane ionic current, $\Omega$ and $\partial\Omega$ are the domain of interest and its boundary, respectively, and $\mathbf{n}$ is the outward normal to $\partial\Omega$. [ 35 ] | https://en.wikipedia.org/wiki/Pharmacological_cardiotoxicity
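To indicate how such a model can be solved numerically, the sketch below integrates a one-dimensional monodomain equation with an explicit finite-difference scheme and a deliberately simplistic cubic ionic current (not a published cardiac cell model); all parameter values are illustrative only.

```python
import numpy as np

# 1D monodomain sketch: Cm dV/dt = D d2V/dx2 - I_ion(V), zero-flux boundaries,
# explicit finite differences, toy cubic (bistable) ionic current.
nx, dx, dt = 200, 0.01, 0.005          # nodes, cm, ms
D, Cm = 0.001, 1.0                     # effective conductivity (cm^2/ms), capacitance
v = np.zeros(nx)
v[:10] = 1.0                           # stimulate the left end (normalized voltage)

def i_ion(v):
    """Toy bistable ionic current; supports a propagating excitation front."""
    return v * (v - 0.1) * (v - 1.0)

for _ in range(4000):                  # 20 ms of simulated time
    lap = np.zeros(nx)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    lap[0] = 2 * (v[1] - v[0]) / dx**2          # zero-flux (n.(D grad V) = 0)
    lap[-1] = 2 * (v[-2] - v[-1]) / dx**2
    v += dt * (D * lap - i_ion(v)) / Cm

print(f"wavefront position ~ node {int(np.argmin(np.abs(v - 0.5)))} of {nx}")
```

Real tissue- or organ-level studies replace the toy current with a full ionic model and use far more sophisticated solvers, but the structure of the update (diffusion term plus ionic current) is the same.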
Pharmacology is the science of drugs and medications, [ 1 ] including a substance's origin, composition, pharmacokinetics , pharmacodynamics , therapeutic use, and toxicology . More specifically, it is the study of the interactions that occur between a living organism and chemicals that affect normal or abnormal biochemical function. [ 2 ] If substances have medicinal properties, they are considered pharmaceuticals .
The field encompasses drug composition and properties, functions, sources, synthesis and drug design , molecular and cellular mechanisms , organ/systems mechanisms, signal transduction/cellular communication, molecular diagnostics , interactions , chemical biology , therapy, and medical applications and antipathogenic capabilities. The two main areas of pharmacology are pharmacodynamics and pharmacokinetics . Pharmacodynamics studies the effects of a drug on biological systems, and pharmacokinetics studies the effects of biological systems on a drug. In broad terms, pharmacodynamics discusses the chemicals with biological receptors , and pharmacokinetics discusses the absorption , distribution, metabolism , and excretion (ADME) of chemicals from the biological systems.
Pharmacology is not synonymous with pharmacy and the two terms are frequently confused. Pharmacology, a biomedical science , deals with the research, discovery, and characterization of chemicals which show biological effects and the elucidation of cellular and organismal function in relation to these chemicals. In contrast, pharmacy, a health services profession, is concerned with the application of the principles learned from pharmacology in its clinical settings, whether in a dispensing or clinical care role. The primary contrast between the two fields is therefore pharmacy's focus on direct patient care and pharmacy practice versus pharmacology's focus on science-oriented research.
The word pharmacology is derived from the Greek word φάρμακον , pharmakon , meaning "drug" or " poison ", together with another Greek word -λογία , -logia , meaning "study of" or "knowledge of" [ 3 ] [ 4 ] (cf. the etymology of pharmacy ). Pharmakon is related to pharmakos , the ritualistic sacrifice or exile of a human scapegoat or victim in Ancient Greek religion .
The modern term pharmacon is used more broadly than the term drug because it includes endogenous substances, and biologically active substances which are not used as drugs. Typically it includes pharmacological agonists and antagonists , but also enzyme inhibitors (such as monoamine oxidase inhibitors). [ 5 ]
The origins of clinical pharmacology date back to the Middle Ages , with pharmacognosy and Avicenna 's The Canon of Medicine , Peter of Spain 's Commentary on Isaac , and John of St Amand 's Commentary on the Antedotary of Nicholas . [ 9 ] Early pharmacology focused on herbalism and natural substances, mainly plant extracts. Medicines were compiled in books called pharmacopoeias . Crude drugs have been used since prehistory as a preparation of substances from natural sources. However, the active ingredients of crude drugs are not purified, and the preparations are adulterated with other substances.
Traditional medicine varies between cultures and may be specific to a particular culture, such as in traditional Chinese , Mongolian , Tibetan and Korean medicine . However, much of this has since been regarded as pseudoscience . Pharmacological substances known as entheogens may have spiritual and religious use and historical context.
In the 17th century, the English physician Nicholas Culpeper translated and used pharmacological texts. Culpeper detailed plants and the conditions they could treat. In the 18th century, much of clinical pharmacology was established by the work of William Withering . [ 10 ] Pharmacology as a scientific discipline did not further advance until the mid-19th century amid the great biomedical resurgence of that period. [ 11 ] Before the second half of the nineteenth century, the remarkable potency and specificity of the actions of drugs such as morphine , quinine and digitalis were explained vaguely and with reference to extraordinary chemical powers and affinities to certain organs or tissues. [ 12 ] The first pharmacology department was set up by Rudolf Buchheim in 1847, at University of Tartu, in recognition of the need to understand how therapeutic drugs and poisons produced their effects. [ 11 ] Subsequently, the first pharmacology department in England was set up in 1905 at University College London .
Pharmacology developed in the 19th century as a biomedical science that applied the principles of scientific experimentation to therapeutic contexts. [ 13 ] The advancement of research techniques propelled pharmacological research and understanding. The development of the organ bath preparation, where tissue samples are connected to recording devices, such as a myograph , and physiological responses are recorded after drug application, allowed analysis of drugs' effects on tissues. The development of the ligand binding assay in 1945 allowed quantification of the binding affinity of drugs at chemical targets. [ 14 ] Modern pharmacologists use techniques from genetics , molecular biology , biochemistry , and other advanced tools to transform information about molecular mechanisms and targets into therapies directed against disease, defects or pathogens, and create methods for preventive care, diagnostics, and ultimately personalized medicine .
The discipline of pharmacology can be divided into many subdisciplines, each with a specific focus.
Pharmacology can also focus on specific systems comprising the body. Divisions related to bodily systems study the effects of drugs in different systems of the body. These include neuropharmacology , in the central and peripheral nervous systems ; immunopharmacology in the immune system. Other divisions include cardiovascular , renal and endocrine pharmacology. Psychopharmacology is the study of the use of drugs that affect the psyche , mind and behavior (e.g. antidepressants) in treating mental disorders (e.g. depression). [ 15 ] [ 16 ] It incorporates approaches and techniques from neuropharmacology, animal behavior and behavioral neuroscience, and is interested in the behavioral and neurobiological mechanisms of action of psychoactive drugs. [ citation needed ] The related field of neuropsychopharmacology focuses on the effects of drugs at the overlap between the nervous system and the psyche.
Pharmacometabolomics , also known as pharmacometabonomics, is a field which stems from metabolomics , the quantification and analysis of metabolites produced by the body. [ 17 ] [ 18 ] It refers to the direct measurement of metabolites in an individual's bodily fluids, in order to predict or evaluate the metabolism of pharmaceutical compounds, and to better understand the pharmacokinetic profile of a drug. [ 17 ] [ 18 ] Pharmacometabolomics can be applied to measure metabolite levels following the administration of a drug, in order to monitor the effects of the drug on metabolic pathways. Pharmacomicrobiomics studies the effect of microbiome variations on drug disposition, action, and toxicity. [ 19 ] Pharmacomicrobiomics is concerned with the interaction between drugs and the gut microbiome . Pharmacogenomics is the application of genomic technologies to drug discovery and further characterization of drugs related to an organism's entire genome. [ citation needed ] For pharmacology regarding individual genes, pharmacogenetics studies how genetic variation gives rise to differing responses to drugs. [ citation needed ] Pharmacoepigenetics studies the underlying epigenetic marking patterns that lead to variation in an individual's response to medical treatment. [ 20 ]
Pharmacology can be applied within clinical sciences. Clinical pharmacology is the application of pharmacological methods and principles in the study of drugs in humans. [ 21 ] An example of this is posology, which is the study of dosage of medicines. [ 22 ]
Pharmacology is closely related to toxicology . Both pharmacology and toxicology are scientific disciplines that focus on understanding the properties and actions of chemicals. [ 23 ] However, pharmacology emphasizes the therapeutic effects of chemicals, usually drugs or compounds that could become drugs, whereas toxicology is the study of chemicals' adverse effects and of risk assessment. [ 23 ]
Pharmacological knowledge is used to advise pharmacotherapy in medicine and pharmacy .
Drug discovery is the field of study concerned with creating new drugs. It encompasses the subfields of drug design and development . [ citation needed ] Drug discovery starts with drug design, which is the inventive process of finding new drugs. [ 24 ] In the most basic sense, this involves the design of molecules that are complementary in shape and charge to a given biomolecular target. [ 25 ] After a lead compound has been identified through drug discovery, drug development involves bringing the drug to the market. [ citation needed ] Drug discovery is related to pharmacoeconomics , which is the sub-discipline of health economics that considers the value of drugs. [ 26 ] [ 27 ] Pharmacoeconomics evaluates the cost and benefits of drugs in order to guide optimal healthcare resource allocation. [ 28 ] The techniques used for the discovery , formulation , manufacturing and quality control of drugs are studied by pharmaceutical engineering , a branch of engineering . [ 29 ] Safety pharmacology specialises in detecting and investigating potential undesirable effects of drugs. [ 30 ]
Development of medication is a vital concern to medicine , but also has strong economic and political implications. To protect the consumer and prevent abuse, many governments regulate the manufacture, sale, and administration of medication. In the United States , the main body that regulates pharmaceuticals is the Food and Drug Administration ; they enforce standards set by the United States Pharmacopoeia . In the European Union , the main body that regulates pharmaceuticals is the European Medicines Agency (EMA) , and they enforce standards set by the European Pharmacopoeia .
The metabolic stability and the reactivity of a library of candidate drug compounds have to be assessed for drug metabolism and toxicological studies. Many methods have been proposed for quantitative predictions in drug metabolism; one example of a recent computational method is SPORCalc. [ 31 ] A slight alteration to the chemical structure of a medicinal compound could alter its medicinal properties, depending on how the alteration relates to the structure of the substrate or receptor site on which it acts: this is called the structure-activity relationship (SAR). When a useful activity has been identified, chemists will make many similar compounds called analogues, to try to maximize the desired medicinal effect(s). This can take anywhere from a few years to a decade or more, and is very expensive. [ 32 ] One must also determine how safe the medicine is to consume, its stability in the human body and the best form for delivery to the desired organ system, such as tablet or aerosol. After extensive testing, which can take up to six years, the new medicine is ready for marketing and selling. [ 32 ]
Because of these long timescales, and because out of every 5000 potential new medicines typically only one will ever reach the open market, this is an expensive way of doing things, often costing over 1 billion dollars. To recoup this outlay pharmaceutical companies may do a number of things: [ 32 ]
The inverse benefit law describes the relationship between a drug's therapeutic benefits and its marketing.
When designing drugs, the placebo effect must be considered to assess the drug's true therapeutic value.
Drug development uses techniques from medicinal chemistry to chemically design drugs. This overlaps with the biological approach of finding targets and physiological effects.
Pharmacology can be studied in relation to wider contexts than the physiology of individuals. For example, pharmacoepidemiology concerns the variations of the effects of drugs in or between populations; it is the bridge between clinical pharmacology and epidemiology . [ 33 ] [ 34 ] Pharmacoenvironmentology or environmental pharmacology is the study of the effects of used pharmaceuticals and personal care products (PPCPs) on the environment after their elimination from the body. [ 35 ] Because human health and ecology are intimately related, environmental pharmacology studies the effects of drugs and of pharmaceutical and personal care products on the environment . [ 36 ]
Drugs may also have ethnocultural importance, so ethnopharmacology studies the ethnic and cultural aspects of pharmacology. [ 37 ]
Photopharmacology is an emerging approach in medicine in which drugs are activated and deactivated with light . The energy of light is used to change the shape and chemical properties of the drug, resulting in different biological activity. [ 38 ] This is done to ultimately achieve reversible control over when and where drugs are active, in order to prevent side effects and the pollution of drugs into the environment. [ 39 ] [ 40 ]
The study of chemicals requires intimate knowledge of the biological system affected. With the knowledge of cell biology and biochemistry increasing, the field of pharmacology has also changed substantially. It has become possible, through molecular analysis of receptors , to design chemicals that act on specific cellular signaling or metabolic pathways by affecting sites directly on cell-surface receptors (which modulate and mediate cellular signaling pathways controlling cellular function).
Chemicals can have pharmacologically relevant properties and effects. Pharmacokinetics describes the effect of the body on the chemical (e.g. half-life and volume of distribution ), and pharmacodynamics describes the chemical's effect on the body (desired or toxic ).
Pharmacology is typically studied with respect to particular systems, for example endogenous neurotransmitter systems . The major systems studied in pharmacology can be categorised by their ligands and include acetylcholine , adrenaline , glutamate , GABA , dopamine , histamine , serotonin , cannabinoid and opioid .
Molecular targets in pharmacology include receptors , enzymes and membrane transport proteins . Enzymes can be targeted with enzyme inhibitors . Receptors are typically categorised based on structure and function. Major receptor types studied in pharmacology include G protein coupled receptors , ligand gated ion channels and receptor tyrosine kinases .
Network pharmacology is a subfield of pharmacology that combines principles from pharmacology, systems biology, and network analysis to study the complex interactions between drugs and targets (e.g., receptors or enzymes) in biological systems. The topology of a biochemical reaction network determines the shape of the drug dose-response curve [ 41 ] as well as the type of drug-drug interactions, [ 42 ] and thus can help in designing efficient and safe therapeutic strategies. Network pharmacology utilizes computational tools and network analysis algorithms to identify drug targets, predict drug-drug interactions, elucidate signaling pathways, and explore the polypharmacology of drugs.
Pharmacodynamics is defined as how a drug acts on the body. Pharmacodynamic theory often investigates the binding affinity of ligands to their receptors. Ligands can be agonists , partial agonists or antagonists at specific receptors in the body. Agonists bind to receptors and produce a biological response, a partial agonist produces a biological response lower than that of a full agonist, and antagonists have affinity for a receptor but do not produce a biological response.
The ability of a ligand to produce a biological response is termed efficacy . In a dose-response profile, efficacy is indicated as a percentage on the y-axis, where 100% is the maximal efficacy (all receptors are occupied).
Binding affinity is the ability of a ligand to form a ligand-receptor complex, either through weak attractive forces (reversible) or a covalent bond (irreversible); efficacy therefore depends on binding affinity.
The potency of a drug is the measure of its effectiveness. The EC 50 is the concentration of a drug that produces 50% of its maximal effect; the lower this concentration, the higher the potency of the drug, so EC 50 values can be used to compare the potencies of drugs.
Medication is said to have a narrow or wide therapeutic index , certain safety factor or therapeutic window . This describes the ratio of desired effect to toxic effect. A compound with a narrow therapeutic index (close to one) exerts its desired effect at a dose close to its toxic dose. A compound with a wide therapeutic index (greater than five) exerts its desired effect at a dose substantially below its toxic dose. Those with a narrow margin are more difficult to dose and administer, and may require therapeutic drug monitoring (examples are warfarin , some antiepileptics , aminoglycoside antibiotics ). Most anti- cancer drugs have a narrow therapeutic margin: toxic side-effects are almost always encountered at doses used to kill tumors .
The effect of drugs can be described with Loewe additivity which is one of several common reference models. [ 42 ]
Other models include the Hill equation , Cheng-Prusoff equation and Schild regression .
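To make the relationship between concentration, EC 50 and response concrete, the sketch below evaluates the Hill equation for two hypothetical agonists. It is a minimal illustration, not a model of any particular drug: the EC 50 values, Hill coefficient and Emax are assumed purely for demonstration.

```python
import numpy as np

def hill_response(conc, ec50, hill_n=1.0, emax=100.0):
    """Fractional response (% of Emax) predicted by the Hill equation."""
    return emax * conc**hill_n / (ec50**hill_n + conc**hill_n)

# Two hypothetical agonists: drug A is more potent (lower EC50) than drug B.
concentrations = np.logspace(-9, -4, 6)          # 1 nM to 100 uM
for name, ec50 in [("drug A", 5e-8), ("drug B", 2e-6)]:
    responses = hill_response(concentrations, ec50)
    print(name, [f"{r:.1f}%" for r in responses])

# At its own EC50, each drug produces exactly half of Emax, whatever its potency.
print(hill_response(5e-8, ec50=5e-8))            # -> 50.0
```

A steeper Hill coefficient (n greater than 1) produces a more switch-like dose-response curve, which is one reason two drugs with the same EC 50 can still behave quite differently across a dosing range.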
Pharmacokinetics is the study of the bodily absorption, distribution, metabolism, and excretion of drugs. [ 43 ]
When describing the pharmacokinetic properties of the chemical that is the active ingredient or active pharmaceutical ingredient (API), pharmacologists are often interested in L-ADME: liberation, absorption, distribution, metabolism, and excretion.
Drug metabolism is assessed in pharmacokinetics and is important in drug research and prescribing.
Pharmacokinetics is the study of the movement of a drug in the body, usually described as 'what the body does to the drug'. The physico-chemical properties of a drug affect the rate and extent of absorption, the extent of distribution, metabolism and elimination. The drug needs to have an appropriate molecular weight, polarity, etc. in order to be absorbed. The fraction of a drug that reaches the systemic circulation is termed bioavailability; it is commonly estimated as the ratio of drug exposure (the area under the plasma concentration-time curve) after oral administration to that after intravenous administration, where the first-pass effect is avoided and therefore none of the dose is lost before reaching the circulation. A drug must be sufficiently lipophilic (lipid soluble) to pass through biological membranes, because biological membranes are made up of a lipid bilayer (phospholipids, etc.). Once the drug reaches the blood circulation it is distributed throughout the body, becoming more concentrated in highly perfused organs.
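Because bioavailability is defined from exposure after oral versus intravenous dosing, it can be estimated numerically from concentration-time data. The sketch below applies the trapezoidal rule to made-up profiles; the time points and concentrations are purely illustrative.

```python
def auc_trapezoid(times_h, conc_mg_per_l):
    """Area under the plasma concentration-time curve (trapezoidal rule)."""
    auc = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        auc += 0.5 * (conc_mg_per_l[i] + conc_mg_per_l[i - 1]) * dt
    return auc

# Hypothetical concentration-time profiles for the same dose given orally and IV.
t = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0]           # hours
c_oral = [0.0, 0.8, 1.4, 1.6, 1.1, 0.5, 0.2]       # mg/L
c_iv = [4.0, 3.3, 2.8, 2.0, 1.1, 0.4, 0.15]        # mg/L

f_oral = auc_trapezoid(t, c_oral) / auc_trapezoid(t, c_iv)
print(f"estimated oral bioavailability F ~ {f_oral:.2f}")   # ~0.74 for these made-up profiles
```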
In the United States , the Food and Drug Administration (FDA) is responsible for creating guidelines for the approval and use of drugs. The FDA requires that all approved drugs fulfill two requirements:
Gaining FDA approval usually takes several years. Testing done on animals must be extensive and must include several species to help in the evaluation of both the effectiveness and toxicity of the drug. The dosage of any drug approved for use is intended to fall within a range in which the drug produces a therapeutic effect or desired outcome. [ 44 ]
In the U.S., the distribution and marketing of prescription drugs are further governed by federal legislation such as the Prescription Drug Marketing Act of 1987 , and Medicare Part D provides a prescription drug benefit plan. The Medicines and Healthcare products Regulatory Agency (MHRA) has a similar regulatory role in the UK.
The International Union of Basic and Clinical Pharmacology , Federation of European Pharmacological Societies and European Association for Clinical Pharmacology and Therapeutics are organisations representing standardisation and regulation of clinical and scientific pharmacology.
Systems for medical classification of drugs with pharmaceutical codes have been developed. These include the National Drug Code (NDC), administered by Food and Drug Administration ; [ 45 ] Drug Identification Number (DIN), administered by Health Canada under the Food and Drugs Act ; Hong Kong Drug Registration , administered by the Pharmaceutical Service of the Department of Health (Hong Kong) and National Pharmaceutical Product Index in South Africa. Hierarchical systems have also been developed, including the Anatomical Therapeutic Chemical Classification System (AT, or ATC/DDD), administered by World Health Organization ; Generic Product Identifier (GPI), a hierarchical classification number published by MediSpan and SNOMED , C axis. Ingredients of drugs have been categorised by Unique Ingredient Identifiers .
The study of pharmacology overlaps with biomedical sciences and is the study of the effects of drugs on living organisms. Pharmacological research can lead to new drug discoveries, and promote a better understanding of human physiology . Students of pharmacology must have a detailed working knowledge of aspects in physiology, pathology, and chemistry. They may also require knowledge of plants as sources of pharmacologically active compounds. [ 37 ] Modern pharmacology is interdisciplinary and involves biophysical and computational sciences, and analytical chemistry. A pharmacist needs to be well-equipped with knowledge on pharmacology for application in pharmaceutical research or pharmacy practice in hospitals or commercial organisations selling to customers. Pharmacologists, however, usually work in a laboratory undertaking research or development of new products. Pharmacological research is important in academic research (medical and non-medical), private industrial positions, science writing, scientific patents and law, consultation, biotech and pharmaceutical employment, the alcohol industry, food industry, forensics/law enforcement, public health, and environmental/ecological sciences. Pharmacology is often taught to pharmacy and medicine students as part of a Medical School curriculum. | https://en.wikipedia.org/wiki/Pharmacology |
The pharmacology of ethanol involves both pharmacodynamics (how it affects the body) and pharmacokinetics (how the body processes it). In the body, ethanol primarily affects the central nervous system, acting as a depressant and causing sedation, relaxation, and decreased anxiety. The complete list of mechanisms remains an area of research, but ethanol has been shown to affect ligand-gated ion channels, particularly the GABA A receptor .
After oral ingestion, ethanol is absorbed via the stomach and intestines into the bloodstream. Ethanol is highly water-soluble and diffuses passively throughout the entire body, including the brain. Soon after ingestion, it begins to be metabolized, 90% or more by the liver. One standard drink is sufficient to almost completely saturate the liver's capacity to metabolize alcohol. [ citation needed ] The main metabolite is acetaldehyde, a toxic carcinogen. Acetaldehyde is then further metabolized into ionic acetate by the enzyme aldehyde dehydrogenase (ALDH). Acetate is not carcinogenic and has low toxicity, [ 9 ] but has been implicated in causing hangovers . [ 10 ] [ 11 ] Acetate is further broken down into carbon dioxide and water and eventually eliminated from the body through urine and breath. 5 to 10% of ethanol is excreted unchanged in the breath, urine, and sweat.
Beginning with the Gin Craze , excessive drinking and drunkenness developed into a major problem for public health. [ 12 ] [ 13 ] In 1874, Francis E. Anstie 's experiments showed that the amounts of alcohol eliminated unchanged in breath, urine, sweat, and feces were negligible compared to the amount ingested, suggesting it was oxidized within the body. [ 14 ] In 1902, Atwater and Benedict estimated that alcohol yielded 7.1 kcal of energy per gram consumed and 98% was metabolized. [ 15 ] In 1922, Widmark published his method for analyzing the alcohol content of fingertip samples of blood. [ 16 ] Through the 1930s, Widmark conducted numerous studies and formulated the basic principles of ethanol pharmacokinetics for forensic purposes, [ 17 ] including the eponymous Widmark equation. In 1980, Watson et al. proposed updated equations based on total body water instead of body weight. [ 18 ] The TBW equations have been found to be significantly more accurate due to rising levels of obesity worldwide. [ 19 ]
The principal mechanism of action for ethanol has proven elusive and remains not fully understood. [ 20 ] [ 21 ] Identifying molecular targets for ethanol is unusually difficult, in large part due to its unique biochemical properties. [ 21 ] Specifically, ethanol is a very low molecular weight compound and is of exceptionally low potency in its actions, causing effects only at very high ( millimolar mM ) concentrations. [ 21 ] [ 22 ] For these reasons, it is not possible to employ traditional biochemical techniques to directly assess the binding of ethanol to receptors or ion channels . [ 21 ] [ 22 ] Instead, researchers have had to rely on functional studies to elucidate the actions of ethanol. [ 21 ] Even at present, no binding sites have been unambiguously identified and established for ethanol. Studies have published strong evidence for certain functions of ethanol in specific systems, but other laboratories have found that these findings do not replicate with different neuronal types and heterologously expressed receptors. [ 23 ] Thus, there remains lingering doubt about the mechanisms of ethanol listed here, even for the GABA A receptor, the most-studied mechanism. [ 24 ]
In the past, alcohol was believed to be a non-specific pharmacological agent affecting many neurotransmitter systems in the brain, [ 25 ] but progress has been made over the last few decades. [ 26 ] [ 21 ] It appears that it affects ion channels, in particular ligand-gated ion channels , to mediate its effects in the CNS. [ 20 ] [ 26 ] [ 27 ] [ 21 ] In some systems, these effects are facilitatory, and in others inhibitory. Moreover, although it has been established that ethanol modulates ion channels to mediate its effects, [ 27 ] ion channels are complex proteins, and their interactions and functions are complicated by diverse subunit compositions and regulation by conserved cellular signals (e.g. signaling lipids). [ 20 ] [ 21 ]
Alcohol is also converted into phosphatidylethanol (PEth, an unnatural lipid metabolite) by phospholipase D2 . This metabolite competes with PIP 2 agonist sites on lipid-gated ion channels . [ 28 ] [ 29 ] The result of these direct effects is a wave of further indirect effects involving a variety of other neurotransmitter and neuropeptide systems. [ 25 ] This presents a novel indirect mechanism and suggests that a metabolite, not the ethanol itself, could cause the behavioural or symptomatic effects of alcohol intoxication. Many of the primary targets of ethanol are known to bind PIP 2 including GABA A receptors, [ 30 ] but the role of PEth needs to be investigated further.
Ethanol has been reported to possess the following actions in functional assays at varying concentrations: [ 22 ]
Many of these actions have been found to occur only at very high concentrations that may not be pharmacologically significant at recreational doses of ethanol, and it is unclear how or to what extent each of the individual actions is involved in the effects of ethanol. [ 21 ] Some of the actions of ethanol on ligand-gated ion channels, specifically the nicotinic acetylcholine receptors and the glycine receptor, are dose-dependent , with potentiation or inhibition occurring dependent on ethanol concentration. [ 22 ] This seems to be because the effects of ethanol on these channels are a summation of positive and negative allosteric modulatory actions. [ 22 ]
Ethanol has been found to enhance GABA A receptor-mediated currents in functional assays. [ 20 ] [ 21 ] Ethanol has long shown a similarity in its effects to positive allosteric modulators of the GABA A receptor like benzodiazepines , barbiturates , and various general anesthetics . [ 20 ] [ 21 ] Some of these effects include anxiolytic , anticonvulsant , sedative , and hypnotic effects, cognitive impairment, and motor incoordination. [ 46 ] In accordance, it was theorized and widely believed that the primary mechanism of action of ethanol is GABA A receptor positive allosteric modulation . [ 20 ] [ 21 ] However, other ion channels are involved in its effects as well. [ 26 ] [ 21 ] Although ethanol exhibits positive allosteric binding properties to GABA A receptors, its effects are limited to pentamers containing the δ-subunit rather than the γ-subunit. [ 21 ] Ethanol potentiates extrasynaptic δ subunit -containing GABA A receptors at behaviorally relevant (as low as 3 mM) concentrations, [ 20 ] [ 21 ] [ 47 ] but γ subunit receptors are enhanced only at far higher concentrations (> 100 mM) that are in excess of recreational concentrations (up to 50 mM). [ 20 ] [ 21 ] [ 48 ]
GABA A receptors containing the δ-subunit have been shown to be located exterior to the synapse and are involved with tonic inhibition rather than its γ-subunit counterpart, which is involved in phasic inhibition. [ 46 ] The δ-subunit has been shown to be able to form the allosteric binding site which makes GABA A receptors containing the δ-subunit more sensitive to ethanol concentrations, even to moderate social ethanol consumption levels (30mM). [ 49 ] While it has been shown by Santhakumar et al. that GABA A receptors containing the δ-subunit are sensitive to ethanol modulation, depending on subunit combinations receptors could be more or less sensitive to ethanol. [ 50 ] It has been shown that GABA A receptors that contain both δ and β3-subunits display increased sensitivity to ethanol. [ 21 ] One such receptor that exhibits ethanol insensitivity is α3-β6-δ GABA A . [ 50 ] It has also been shown that subunit combination is not the only thing that contributes to ethanol sensitivity. Location of GABA A receptors within the synapse may also contribute to ethanol sensitivity. [ 46 ]
Ro15-4513 , a close analogue of the benzodiazepine antagonist flumazenil (Ro15-1788), has been found to bind to the same site as ethanol and to competitively displace it in a saturable manner. [ 21 ] [ 47 ] In addition, Ro15-4513 blocked the enhancement of δ subunit-containing GABA A receptor currents by ethanol in vitro . [ 21 ] In accordance, the drug has been found to reverse many of the behavioral effects of low-to-moderate doses of ethanol in rodents, including its effects on anxiety, memory, motor behavior, and self-administration. [ 21 ] [ 47 ] Taken together, these findings suggest a binding site for ethanol on subpopulations of the GABA A receptor with specific subunit compositions via which it interacts with and potentiates the receptor. [ 20 ] [ 21 ] [ 47 ] [ 51 ]
Research indicates ethanol is involved in the inhibition of L-type calcium channels. One study showed that ethanol binding to L-type calcium channels follows first-order kinetics with a Hill coefficient around 1. This indicates ethanol binds independently to the channel, expressing noncooperative binding . [ 41 ] Early studies showed a link between calcium and the release of vasopressin by the secondary messenger system . [ 52 ] Vasopressin levels are reduced after the ingestion of alcohol. [ 53 ] The lower levels of vasopressin from the consumption of alcohol have been linked to ethanol acting as an antagonist to voltage-gated calcium channels (VGCCs). Studies conducted by Treistman et al. in the aplysia confirm inhibition of VGCC by ethanol. Voltage clamp recordings have been done on the aplysia neuron. VGCCs were isolated and calcium current was recorded using the patch clamp technique with ethanol as the treatment. Recordings were replicated at varying concentrations (0, 10, 25, 50, and 100 mM) at a voltage clamp of +30 mV. Results showed that calcium current decreased as the concentration of ethanol increased. [ 54 ] Single-channel recordings from isolated nerve terminals of rats have likewise shown that ethanol does in fact block VGCCs. [ 55 ]
Studies done by Katsura et al. in 2006 on mouse cerebral cortical neurons, show the effects of prolonged ethanol exposure. Neurons were exposed to sustained ethanol concentrations of 50 mM for 3 days in vitro . Western blot and protein analysis were conducted to determine the relative amounts of VGCC subunit expression. α1C, α1D, and α2/δ1 subunits showed an increase of expression after sustained ethanol exposure. However, the β4 subunit showed a decrease. Furthermore, α1A, α1B, and α1F subunits did not alter in their relative expression. Thus, sustained ethanol exposure may participate in the development of ethanol dependence in neurons. [ 56 ]
Other experiments done by Malysz et al. have looked into ethanol effects on voltage-gated calcium channels on detrusor smooth muscle cells in guinea pigs. Perforated patch clamp technique was used having intracellular fluid inside the pipette and extracellular fluid in the bath with added 0.3% vol/vol (about 50-mM) ethanol. Ethanol decreased the Ca 2+ current in DSM cells and induced muscle relaxation. Ethanol inhibits VGCCs and is involved in alcohol-induced relaxation of the urinary bladder. [ 57 ]
The reinforcing effects of alcohol consumption are mediated by acetaldehyde generated by catalase and other oxidizing enzymes such as cytochrome P-4502E1 in the brain. [ 60 ] Although acetaldehyde has been associated with some of the adverse and toxic effects of ethanol, it appears to play a central role in the activation of the mesolimbic dopamine system . [ 45 ]
Ethanol's rewarding and reinforcing (i.e., addictive) properties are mediated through its effects on dopamine neurons in the mesolimbic reward pathway , which connects the ventral tegmental area to the nucleus accumbens (NAcc). [ 61 ] [ 62 ] One of ethanol's primary effects is the allosteric inhibition of NMDA receptors and facilitation of GABA A receptors (e.g., enhanced GABA A receptor-mediated chloride flux through allosteric regulation of the receptor). [ 31 ] At high doses, ethanol inhibits most ligand-gated ion channels and voltage-gated ion channels in neurons as well. [ 31 ]
With acute alcohol consumption, dopamine is released in the synapses of the mesolimbic pathway, in turn heightening activation of postsynaptic D 1 receptors . [ 61 ] [ 62 ] The activation of these receptors triggers postsynaptic internal signaling events through protein kinase A , which ultimately phosphorylate cAMP response element binding protein (CREB), inducing CREB-mediated changes in gene expression . [ 61 ] [ 62 ]
With chronic alcohol intake, consumption of ethanol similarly induces CREB phosphorylation through the D 1 receptor pathway, but it also alters NMDA receptor function through phosphorylation mechanisms; [ 61 ] [ 62 ] an adaptive downregulation of the D 1 receptor pathway and CREB function occurs as well. [ 61 ] [ 62 ] Chronic consumption is also associated with an effect on CREB phosphorylation and function via postsynaptic NMDA receptor signaling cascades through a MAPK/ERK pathway and CAMK -mediated pathway. [ 62 ] These modifications to CREB function in the mesolimbic pathway induce expression (i.e., increase gene expression) of ΔFosB in the NAcc , [ 62 ] where ΔFosB is the "master control protein" that, when overexpressed in the NAcc, is necessary and sufficient for the development and maintenance of an addictive state (i.e., its overexpression in the nucleus accumbens produces and then directly modulates compulsive alcohol consumption). [ 62 ] [ 63 ] [ 64 ] [ 65 ]
Recreational concentrations of ethanol are typically in the range of 1 to 50 mM. [ 48 ] [ 20 ] Very low concentrations of 1 to 2 mM ethanol produce zero or undetectable effects except in alcohol-naive individuals. [ 48 ] Slightly higher levels of 5 to 10 mM, which are associated with light social drinking, produce measurable effects including changes in visual acuity, decreased anxiety, and modest behavioral disinhibition. [ 48 ] Further higher levels of 15 to 20 mM result in a degree of sedation and motor incoordination that is contraindicated with the operation of motor vehicles. [ 48 ] In jurisdictions in the U.S., maximum blood alcohol levels for legal driving are about 17 to 22 mM. [ 67 ] [ 68 ] In the upper range of recreational ethanol concentrations of 20 to 50 mM, depression of the central nervous system is more marked, with effects including complete drunkenness, profound sedation, amnesia, emesis, hypnosis, and eventually unconsciousness. [ 48 ] [ 67 ] Levels of ethanol above 50 mM are not typically experienced by normal individuals and hence are not usually physiologically relevant; however, such levels – ranging from 50 to 100 mM – may be experienced by alcoholics with high tolerance to ethanol. [ 48 ] Concentrations above this range, specifically in the range of 100 to 200 mM, would cause death in all people except alcoholics. [ 48 ]
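Because the text above quotes concentrations in millimolar while blood alcohol limits are usually reported in g/dL or g/L, a unit conversion is useful. The snippet below uses the molar mass of ethanol (about 46.07 g/mol); it is a simple arithmetic illustration only.

```python
ETHANOL_MOLAR_MASS_G_PER_MOL = 46.07

def bac_g_per_dl_to_mm(bac_g_per_dl):
    """Convert blood alcohol content from g/dL to millimolar (mM)."""
    g_per_l = bac_g_per_dl * 10.0                      # 1 g/dL = 10 g/L
    return g_per_l / ETHANOL_MOLAR_MASS_G_PER_MOL * 1000.0

# A common legal driving limit of 0.08 g/dL comes out near 17 mM, consistent
# with the 17 to 22 mM range quoted above for U.S. jurisdictions.
for bac in (0.02, 0.05, 0.08, 0.10):
    print(f"{bac:.2f} g/dL ~ {bac_g_per_dl_to_mm(bac):.1f} mM")
```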
As drinking increases, people become sleepy or fall into a stupor . After a very high level of consumption [ vague ] , the respiratory system becomes depressed and the person will stop breathing. Comatose patients may aspirate their vomit (resulting in vomitus in the lungs, which may cause "drowning" and later pneumonia if survived). CNS depression and impaired motor coordination along with poor judgment increase the likelihood of accidental injury occurring. It is estimated that about one-third of alcohol-related deaths are due to accidents and another 14% are from intentional injury. [ 69 ]
In addition to respiratory failure and accidents caused by its effects on the central nervous system, alcohol causes significant metabolic derangements. Hypoglycaemia occurs due to ethanol's inhibition of gluconeogenesis , especially in children, and may cause lactic acidosis , ketoacidosis , and acute kidney injury . Metabolic acidosis is compounded by respiratory failure. Patients may also present with hypothermia.
The pharmacokinetics of ethanol are well characterized by the ADME acronym (absorption, distribution, metabolism, excretion). Besides the dose ingested, factors such as the person's total body water , speed of drinking, the drink's nutritional content, and the contents of the stomach all influence the profile of blood alcohol content (BAC) over time. Breath alcohol content (BrAC) and BAC have similar profile shapes, so most forensic pharmacokinetic calculations can be done with either. Relatively few studies directly compare BrAC and BAC within subjects and characterize the difference in pharmacokinetic parameters. Comparing arterial and venous BAC, arterial BAC is higher during the absorption phase and lower in the postabsorptive declining phase. [ 13 ]
All organisms produce alcohol in small amounts by several pathways, primarily through fatty acid synthesis , [ 70 ] glycerolipid metabolism, [ 71 ] and bile acid biosynthesis pathways. [ 72 ] Fermentation is a biochemical process during which yeast and certain bacteria convert sugars to ethanol, carbon dioxide, as well as other metabolic byproducts. [ 73 ] [ 74 ] The average human digestive system produces approximately 3 g of ethanol per day through fermentation of its contents. [ 75 ] Such production generally does not have any forensic significance because the ethanol is broken down before significant intoxication ensues. These trace amounts of alcohol range from 0.1 to 0.3 μg/mL in the blood of healthy humans, with some measurements as high as 1.6 μg/mL (0.002 g/L). [ 76 ]
Auto-brewery syndrome is a condition characterized by significant fermentation of ingested carbohydrates within the body. In rare cases, intoxicating quantities of ethanol may be produced, especially after eating meals. Claims of endogenous fermentation have been attempted as a defense against drunk driving charges, some of which have been successful, but the condition is under-researched. [ 77 ]
Ethanol is most commonly ingested by mouth, [ 2 ] but other routes of administration are possible, such as inhalation , enema , or by intravenous injection . [ 4 ] [ 78 ] With oral administration , the ethanol is absorbed into the portal venous blood through the mucosa of the gastrointestinal tract, such as in the oral cavity, stomach, duodenum, and jejunum. [ 13 ] The oral bioavailability of ethanol is quite high, with estimates ranging from 80% at a minimum [ 2 ] [ 3 ] to 94%-96%. [ 79 ] The ethanol molecule is small and uncharged, and easily crosses biological membranes by passive diffusion. [ 80 ] The absorption rate of ethanol is typically modeled as a first-order kinetic process depending on the concentration gradient and specific membrane. The rate of absorption is fastest in the duodenum and jejunum, owing to the larger absorption surface area provided by the villi and microvilli of the small intestines. Gastric emptying is therefore an important consideration when estimating the overall rate of absorption in most scenarios; [ 13 ] the presence of a meal in the stomach delays gastric emptying, [ 4 ] [ 78 ] and absorption of ethanol into the blood is consequently slower. [ 81 ] Due to irregular gastric emptying patterns, the rate of absorption of ethanol is unpredictable, varying significantly even between drinking occasions. [ 13 ] In experiments, aqueous ethanol solutions have been given intravenously or rectally to avoid this variation. [ 13 ] The delay in ethanol absorption caused by food is similar regardless of whether food is consumed just before, at the same time, or just after ingestion of ethanol. [ 4 ] The type of food, whether fat , carbohydrates , or protein , also is of little importance. [ 78 ] Not only does food slow the absorption of ethanol, but it also reduces the bioavailability of ethanol, resulting in lower circulating concentrations. [ 4 ]
Regarding inhalation, early experiments with animals showed that it was possible to produce significant BAC levels comparable to those obtained by injection, by forcing the animal to breathe alcohol vapor. [ 82 ] In humans, concentrations of ethanol in air above 10 mg/L caused initial coughing and smarting of the eyes and nose, which went away after adaptation. 20 mg/L was just barely tolerable. Concentrations above 30 mg/L caused continuous coughing and tears, and concentrations above 40 mg/L were described as intolerable, suffocating, and impossible to bear for even short periods. Breathing air with concentration of 15 mg/L ethanol for 3 hours resulted in BACs from 0.2 to 4.5 g/L, depending on breathing rate. [ 83 ] It is not a particularly efficient or enjoyable method of becoming intoxicated. [ 4 ]
Ethanol is not absorbed significantly through intact skin. The steady state flux is 0.08 μmol/cm 2 /hr . [ 84 ] Applying a 70% ethanol solution to a skin area of 1000 cm 2 for 1 hr would result in approximately 0.1 g of ethanol being absorbed. [ 85 ] The substantially increased levels of ethanol in the blood reported for some experiments are likely due to inadvertent inhalation. [ 4 ] A study that did not prevent respiratory uptake found that applying 200 mL of hand disinfectant containing 95% w/w ethanol (150 g ethanol total) over the course of 80 minutes in a 3-minutes-on 5-minutes-off pattern resulted in the median BAC among volunteers peaking 30 minutes after the last application at 17.5 mg/L (0.00175%). This BAC roughly corresponds to drinking one gram of pure ethanol. [ 86 ] Ethanol is rapidly absorbed through cut or damaged skin, with reports of ethanol intoxication and fatal poisoning. [ 87 ]
The timing of peak blood concentration varies depending on the type of alcoholic drink: [ 88 ]
Also, carbonated alcoholic drinks seem to have a shorter onset compared to flat drinks of the same volume. One theory is that carbon dioxide in the bubbles somehow speeds the flow of alcohol into the intestines. [ 89 ]
Absorption is reduced by a large meal. Stress speeds up absorption. [ 81 ]
After absorption, the alcohol goes through the portal vein to the liver, then through the hepatic veins to the heart, then the pulmonary arteries to the lungs, then the pulmonary veins to the heart again, and then enters systemic circulation . [ 13 ] [ 90 ] Once in systemic circulation, ethanol distributes throughout the body, diffusing passively and crossing all biological membranes including the blood-brain barrier . [ 2 ] [ 78 ] At equilibrium, ethanol is present in all body fluids and tissues in proportion to their water content. Ethanol does not bind to plasma proteins or other biomolecules. [ 13 ] [ 2 ] [ 3 ] The rate of distribution depends on blood supply, [ 4 ] specifically the cross-sectional area of the local capillary bed and the blood flow per gram of tissue. [ 13 ] As such, ethanol rapidly affects the brain, liver, and kidneys , which have high blood flow. [ 2 ] Other tissues with lower circulation, such as skeletal muscles and bone , require more time for ethanol to distribute into. [ 4 ] [ 13 ] In rats, it takes around 10–15 minutes for tissue and venous blood to reach equilibrium. [ 91 ] Peak circulating levels of ethanol are usually reached within a range of 30 to 90 minutes of ingestion, with an average of 45 to 60 minutes. [ 4 ] [ 2 ] People who have fasted overnight have been found to reach peak ethanol concentrations more rapidly, within 30 minutes of ingestion. [ 4 ]
The volume of distribution $V_d$ contributes about 15% of the uncertainty to Widmark's equation [ 92 ] and has been the subject of much research. Widmark originally used units of mass (g/kg) for EBAC, thus he calculated the apparent mass of distribution $M_d$, or mass of blood in kilograms. He fitted an equation $M_{d}=\rho_{m}W$ to the body weight $W$ in kg, finding an average rho-factor of 0.68 for men and 0.55 for women. This $\rho_m$ has units of dose per body weight (g/kg) divided by concentration (g/kg) and is therefore dimensionless. However, modern calculations use weight/volume concentrations (g/L) for EBAC, so Widmark's rho-factors must be adjusted for the density of blood, 1.055 g/mL. This $\rho_{v}=V_{d}/W$ has units of dose per body weight (g/kg) divided by concentration (g/L blood); calculation gives values of 0.64 L/kg for men and 0.52 L/kg for women, lower than the original. [ 93 ] Newer studies have updated these values to population-average $\rho_v$ of 0.71 L/kg for men and 0.58 L/kg for women. But individual $V_d$ values may vary significantly: the 95% range for $\rho_v$ is 0.58-0.83 L/kg for males and 0.43-0.73 L/kg for females. [ 94 ] A more accurate method for calculating $V_d$ is to use total body water (TBW); experiments have confirmed that alcohol distributes almost exactly in proportion to TBW within the Widmark model. [ 95 ] TBW may be calculated using body composition analysis or estimated using anthropometric formulas based on age, height, and weight. $V_d$ is then given by $TBW_{\text{kg}}/F_{\text{water}}$, where $F_{\text{water}}$ is the water content of blood, approximately 0.825 w/v for men and 0.838 w/v for women. [ 96 ]
These calculations assume Widmark's zero-order model for the effects of metabolization, and assume that TBW is almost exactly the volume of distribution of ethanol. Using a more complex model that accounts for non-linear metabolism, Norberg found that V d was only 84-87% of TBW. [ 97 ] This finding was not reproduced in a newer study which found volumes of distribution similar to those in the literature. [ 79 ]
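As a rough numerical illustration of the quantities above, the sketch below combines a population-average rho-factor with a zero-order elimination term in the Widmark spirit. It is a minimal sketch, not a forensic calculation: it assumes instantaneous absorption, and the default parameters simply reuse the population-average figures quoted in the surrounding text (rho_v of about 0.71 L/kg for men and an elimination rate beta of about 0.15 g/L/h).

```python
def widmark_bac_g_per_l(dose_g, weight_kg, hours_since_start,
                        rho_v_l_per_kg=0.71, beta_g_per_l_per_h=0.15):
    """Zero-order Widmark-style estimate of blood alcohol concentration (g/L).

    Assumes instantaneous absorption; rho_v and beta default to the
    population-average values quoted in the surrounding text.
    """
    v_d = rho_v_l_per_kg * weight_kg                    # volume of distribution, L
    bac = dose_g / v_d - beta_g_per_l_per_h * hours_since_start
    return max(bac, 0.0)

# Example: ~32 g of ethanol (about four British units) in an 80 kg man, 2 h after starting.
print(f"{widmark_bac_g_per_l(32.0, 80.0, 2.0):.2f} g/L")   # ~0.26 g/L
```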
Several metabolic pathways exist:
The reaction from ethanol to carbon dioxide and water proceeds in at least 11 steps in humans. C 2 H 6 O (ethanol) is converted to C 2 H 4 O ( acetaldehyde ), then to C 2 H 4 O 2 ( acetic acid ), then to acetyl-CoA . Once acetyl-CoA is formed, it is free to enter directly into the citric acid cycle (TCA) and is converted to 2 CO 2 molecules in 8 reactions. The equations:
The Gibbs free energy is simply calculated from the free energies of formation of the products and reactants. [ 99 ] [ 100 ] If catabolism of alcohol goes all the way to completion, then there is a very exothermic event yielding some 1325 kJ/mol of energy. If the reaction stops part way through the metabolic pathways, which happens because acetic acid is excreted in the urine after drinking, then not nearly as much energy can be derived from alcohol: only 215.1 kJ/mol. The theoretical limits on energy yield are therefore −215.1 kJ/mol to −1325.6 kJ/mol. The first step, which generates NADH, is endothermic, requiring 47.2 kJ/mol of alcohol, or about 3 molecules of adenosine triphosphate (ATP) per molecule of ethanol. [ original research? ]
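The complete-oxidation figure can be reproduced approximately from standard free energies of formation. The values below are common textbook figures (liquid ethanol, gaseous CO 2 , liquid water) and should be read as approximate.

```python
# Standard Gibbs free energies of formation, kJ/mol (approximate textbook values).
dG_f = {
    "ethanol(l)": -174.8,
    "O2(g)": 0.0,
    "CO2(g)": -394.4,
    "H2O(l)": -237.1,
}

# Complete oxidation: C2H5OH + 3 O2 -> 2 CO2 + 3 H2O
dG_products = 2 * dG_f["CO2(g)"] + 3 * dG_f["H2O(l)"]
dG_reactants = dG_f["ethanol(l)"] + 3 * dG_f["O2(g)"]
dG_reaction = dG_products - dG_reactants
print(f"Delta G ~ {dG_reaction:.0f} kJ/mol of ethanol")   # about -1325 kJ/mol
```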
Variations in genes influence alcohol metabolism and drinking behavior. [ 101 ] Certain amino acid sequences in the enzymes used to oxidize ethanol are conserved (unchanged) going back to the last common ancestor over 3.5 bya. [ 102 ] Evidence suggests that humans evolved the ability to metabolize dietary ethanol between 7 and 21 million years ago, in a common ancestor shared with chimpanzees and gorillas but not orangutans . [ 103 ] Gene variation in these enzymes can lead to variation in catalytic efficiency between individuals. Some individuals have less effective metabolizing enzymes of ethanol, and can experience more marked symptoms from ethanol consumption than others. [ 104 ] However, those having acquired alcohol tolerance have a greater quantity of these enzymes, and metabolize ethanol more rapidly. Specifically, ethanol has been observed to be cleared more quickly by regular drinkers than non-drinkers. [ 104 ]
Falsely high BAC readings may be seen in patients with kidney or liver disease or failure. Such persons also have impaired acetaldehyde dehydrogenase, which causes acetaldehyde levels to peak higher, producing more severe hangovers and other effects such as flushing and tachycardia. Conversely, members of certain ethnicities that traditionally did not use alcoholic beverages have lower levels of alcohol dehydrogenases and thus "sober up" very slowly but reach lower aldehyde concentrations and have milder hangovers. The rate of detoxification of alcohol can also be slowed by certain drugs which interfere with the action of alcohol dehydrogenases, notably aspirin , furfural (which may be found in fusel alcohol ), fumes of certain solvents , many heavy metals , and some pyrazole compounds. Also suspected of having this effect are cimetidine , ranitidine , and acetaminophen (paracetamol). [ citation needed ]
An "abnormal" liver with conditions such as hepatitis , cirrhosis , gall bladder disease, and cancer is likely to result in a slower rate of metabolism. People under 25 and women may process alcohol more slowly. [ 105 ]
Food such as fructose can increase the rate of alcohol metabolism. The effect can vary significantly from person to person, but a 100 g dose of fructose has been shown to increase alcohol metabolism by an average of 80%. In people with proteinuria and hematuria, fructose can cause falsely high BAC readings, due to kidney-liver metabolism. [ 106 ]
During a typical drinking session, approximately 90% of the metabolism of ethanol occurs in the liver. [ 4 ] [ 6 ] Alcohol dehydrogenase and aldehyde dehydrogenase are present at their highest concentrations (in liver mitochondria). [ 98 ] [ 107 ] But these enzymes are widely expressed throughout the body, such as in the stomach and small intestine . [ 2 ] Some alcohol undergoes a first pass of metabolism in these areas, before it ever enters the bloodstream. [ 90 ]
Under alcoholic conditions, the citric acid cycle is stalled by the oversupply of NADH derived from ethanol oxidation. The resulting backup of acetate shifts the reaction equilibrium for acetaldehyde dehydrogenase back towards acetaldehyde. Acetaldehyde subsequently accumulates and begins to form covalent bonds with cellular macromolecules, forming toxic adducts that, eventually, lead to death of the cell.
This same excess of NADH from ethanol oxidation causes the liver to move away from fatty acid oxidation, which produces NADH, towards fatty acid synthesis, which consumes NADH. This consequent lipogenesis is believed to account largely for the pathogenesis of alcoholic fatty liver disease .
In human embryos and fetuses, ethanol is not metabolized via ADH as ADH enzymes are not yet expressed to any significant quantity in human fetal liver (the induction of ADH only starts after birth, and requires years to reach adult levels). [ 108 ] Accordingly, the fetal liver cannot metabolize ethanol or other low molecular weight xenobiotics. In fetuses, ethanol is instead metabolized at much slower rates by different enzymes from the cytochrome P-450 superfamily (CYP), in particular by CYP2E1. The low fetal rate of ethanol clearance is responsible for the important observation that the fetal compartment retains high levels of ethanol long after ethanol has been cleared from the maternal circulation by the adult ADH activity in the maternal liver. [ 109 ] CYP2E1 expression and activity have been detected in various human fetal tissues after the onset of organogenesis (ca 50 days of gestation). [ 110 ] Exposure to ethanol is known to promote further induction of this enzyme in fetal and adult tissues. CYP2E1 is a major contributor to the so-called Microsomal Ethanol Oxidizing System (MEOS) [ 111 ] and its activity in fetal tissues is thought to contribute significantly to the toxicity of maternal ethanol consumption. [ 108 ] [ 112 ] In presence of ethanol and oxygen, CYP2E1 is known [ by whom? ] to release superoxide radicals and induce the oxidation of polyunsaturated fatty acids to toxic aldehyde products like 4-hydroxynonenal (HNE). [ citation needed ]
The concentration of alcohol in breast milk produced during lactation is closely correlated to the individual's blood alcohol content. [ 113 ]
Alcohol is removed from the bloodstream by a combination of metabolism, excretion, and evaporation. 90-98% of ingested ethanol is metabolized into carbon dioxide and water. [ 4 ] Around 5 to 10% of ethanol that is ingested is excreted unchanged in urine , breath , and sweat . [ 2 ] Transdermal alcohol that diffuses through the skin as insensible perspiration or is exuded as sweat (sensible perspiration) can be detected using wearable sensor technology [ 114 ] such as SCRAM ankle bracelet [ 115 ] or the more discreet ION Wearable. [ 116 ] Ethanol or its metabolites may be detectable in urine for up to 96 hours (3–5 days) after ingestion. [ 2 ]
Unlike most physiologically active materials, in typical recreational use, ethanol is removed from the bloodstream at an approximately constant rate (linear decay or zero-order kinetics ), rather than at a rate proportional to the current concentration ( exponential decay with a characteristic elimination half-life ). [ 6 ] [ 5 ] This is because typical doses of alcohol saturate the enzymes' capacity. In Widmark's model, the elimination rate from the blood, β , contributes 60% of the uncertainty. [ 92 ] Similarly to ρ , its value depends on the units used for blood. [ 93 ] β varies 58% by occasion and 42% between subjects; it is thus difficult to determine β precisely, and more practical to use a mean and a range of values. Typical elimination rates range from 10 to 34 mg/dL per hour, [ 6 ] [ 4 ] with Jones recommending the range 0.10 - 0.25 g/L/h for forensic purposes, for all subjects. [ 117 ] Earlier studies found mean elimination rates of 15 mg/dL per hour for men and 18 mg/dL per hour for women, [ 6 ] [ 4 ] but Jones found 0.148 g/L/h and 0.156 g/L/h respectively. Although the difference between sexes is statistically significant, it is small compared to the overall uncertainty, so Jones recommends using the value 0.15 for the mean for all subjects. [ 117 ] This mean rate is very roughly 8 grams of pure ethanol per hour (one British unit ). [ 118 ] Explanations for the gender difference are quite varied and include liver size, secondary effects of the volume of distribution, and sex-specific hormones. [ 119 ] A 2023 study using a more complex two-compartment model with M-M elimination kinetics, with data from 60 men and 12 women, found statistically small effects of gender on maximal elimination rate and excluded them from the final model. [ 79 ]
At concentrations below 0.15-0.20 g/L, alcohol is eliminated more slowly and the elimination rate more closely follows first-order kinetics. The overall behavior of the elimination rate is described well by Michaelis–Menten kinetics . This change in behavior was not noticed by Widmark because he could not analyze low BAC levels. [ 93 ] The rate of elimination of ethanol is also increased at very high concentrations, such as in overdose, again more closely following first-order kinetics , with an elimination half-life of about 4 or 4.5 hours (a clearance rate of approximately 6 L/hour/70 kg). This is thought to be due to increased activity of CYP2E1. [ 3 ] [ 2 ]
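The change from roughly constant (zero-order) elimination at drinking concentrations to first-order behaviour at low concentrations follows directly from Michaelis–Menten kinetics. The sketch below integrates dC/dt = −Vmax·C/(Km + C) with illustrative parameters chosen only to be consistent with the ~0.15 g/L/h figure quoted above; it is not a fitted model.

```python
# Michaelis-Menten elimination: dC/dt = -Vmax * C / (Km + C).
# Vmax and Km are illustrative values, not fitted constants.
VMAX_G_PER_L_PER_H = 0.18
KM_G_PER_L = 0.08
STEPS_PER_HOUR = 100                       # simple Euler integration, dt = 0.01 h

def simulate_bac(c0_g_per_l, hours=10):
    dt = 1.0 / STEPS_PER_HOUR
    c, trace = c0_g_per_l, [c0_g_per_l]
    for _ in range(hours * STEPS_PER_HOUR):
        c = max(c - dt * VMAX_G_PER_L_PER_H * c / (KM_G_PER_L + c), 0.0)
        trace.append(c)
    return trace

trace = simulate_bac(1.0)                  # start at 1.0 g/L
for hour in (0, 2, 4, 6, 8):
    print(f"t = {hour} h   C = {trace[hour * STEPS_PER_HOUR]:.2f} g/L")
# The decline is nearly linear (zero-order) while C is well above Km, and tails
# off exponentially (first-order) once C falls below roughly 0.2 g/L.
```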
Eating food in proximity to drinking increases elimination rate significantly, mainly due to increased metabolism. [ 79 ]
In fasting volunteers, blood levels of ethanol increase proportionally with the dose of ethanol administered. [ 78 ] Peak blood alcohol concentrations may be estimated by dividing the amount of ethanol ingested by the body weight of the individual and correcting for water dilution. [ 4 ] For time-dependent calculations, Swedish professor Erik Widmark developed a model of alcohol pharmacokinetics in the 1920s. [ 120 ] The model corresponds to a single-compartment model with instantaneous absorption and zero-order kinetics for elimination. The model is most accurate when used to estimate BAC a few hours after drinking a single dose of alcohol in a fasted state, and can be within 20% CV of the true value. [ 121 ] [ 122 ] It is less accurate for BAC levels below 0.2 g/L (alcohol is not eliminated as quickly as predicted) and consumption with food (overestimating the peak BAC and time to return to zero). [ 123 ] [ 93 ] | https://en.wikipedia.org/wiki/Pharmacology_of_ethanol |
Pharmacometabolomics , also known as pharmacometabonomics , is a field which stems from metabolomics , the quantification and analysis of metabolites produced by the body. [ 1 ] [ 2 ] It refers to the direct measurement of metabolites in an individual's bodily fluids, in order to predict or evaluate the metabolism of pharmaceutical compounds, and to better understand the pharmacokinetic profile of a drug. [ 1 ] [ 2 ] Alternatively, pharmacometabolomics can be applied to measure metabolite levels following the administration of a pharmaceutical compound, in order to monitor the effects of the compound on certain metabolic pathways (pharmacodynamics). This provides detailed mapping of drug effects on metabolism and of the pathways implicated in the mechanisms underlying variation in response to treatment. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] In addition, the metabolic profile of an individual at baseline (metabotype) provides information about how individuals respond to treatment and highlights heterogeneity within a disease state. [ 8 ] All three approaches require the quantification of metabolites found in bodily fluids and tissue , such as blood or urine, and can be used in the assessment of pharmaceutical treatment options for numerous disease states.
Pharmacometabolomics is thought to provide information that complements that gained from other omics , namely genomics , transcriptomics , and proteomics . Examining the characteristics of an individual through these successive levels of detail provides an increasingly accurate prediction of a person's ability to respond to a pharmaceutical compound. The genome , made up of 25,000 genes , can indicate possible errors in drug metabolism ; the transcriptome , made up of 85,000 transcripts, can provide information about which genes important in metabolism are being actively transcribed; and the proteome , with more than 10,000,000 members, depicts which proteins are active in the body to carry out these functions. Pharmacometabolomics complements the other omics with direct measurement of the products of all of these reactions, although with a relatively smaller number of members: initially projected to be approximately 2,200 metabolites , [ 9 ] the number could be larger when gut-derived metabolites and xenobiotics are added to the list. Overall, the goal of pharmacometabolomics is to more closely predict or assess the response of an individual to a pharmaceutical compound, permitting continued treatment with the right drug or dosage depending on the variations in their metabolism and ability to respond to treatment. [ 1 ] [ 2 ] [ 10 ]
Pharmacometabolomic analyses, through the use of a metabolomics approach, can provide a comprehensive and detailed metabolic profile or " metabolic fingerprint " for an individual patient. Such metabolic profiles can provide a complete overview of individual metabolite or pathway alterations, providing a more realistic depiction of disease phenotypes . This approach can then be applied to the prediction of response to a pharmaceutical compound by patients with a particular metabolic profile. [ 2 ] [ 10 ] Pharmacometabolomic analyses of drug response are often coupled with, or followed up by, pharmacogenetics studies. Pharmacogenetics focuses on the identification of genetic variations (e.g. single-nucleotide polymorphisms ) within patients that may contribute to altered drug responses and overall outcome of a certain treatment. The results of pharmacometabolomics analyses can act to "inform" or "direct" pharmacogenetic analyses by correlating aberrant metabolite concentrations or metabolic pathways to potential alterations at the genetic level. [ 11 ] This concept was established by two seminal publications from studies of antidepressant serotonin reuptake inhibitors, [ 11 ] [ 12 ] in which metabolic signatures defined a pathway implicated in response to the antidepressant and led to the identification of genetic variants within a key gene in that pathway as contributors to variation in response. These genetic variants were not identified through genetic analysis alone, illustrating how metabolomics can guide and inform genetic analyses.
Although the applications of pharmacometabolomics to personalized medicine are largely only being realized now, the study of an individual's metabolism has been used to treat disease since the Middle Ages. Early physicians employed a primitive form of metabolomic analysis by smelling, tasting and looking at urine to diagnose disease. The measurement techniques needed to look at specific metabolites were unavailable at that time, but such technologies have evolved dramatically over the last decade to develop precise, high-throughput devices, as well as the accompanying data analysis software to analyze output. Currently, sample purification processes, such as liquid or gas chromatography , are coupled with either mass spectrometry (MS) -based or nuclear magnetic resonance (NMR) -based analytical methods to characterize the metabolite profiles of individual patients. [ 1 ] Continually advancing informatics tools allow for the identification, quantification and classification of metabolites to determine which pathways may influence certain pharmaceutical interventions. [ 1 ] One of the earliest studies discussing the principle and applications of pharmacometabolomics was conducted in an animal model to look at the metabolism of paracetamol and liver damage. NMR spectroscopy was used to analyze the urinary metabolic profiles of rats pre- and post-treatment with paracetamol . The analysis revealed a certain metabolic profile associated with increased liver damage following paracetamol treatment. [ 13 ] At the time, it was eagerly anticipated that such pharmacometabolomics approaches could be applied to personalized human medicine . Since this publication in 2006, the Pharmacometabolomics Research Network, led by Duke University researchers and comprising partnerships among centers of excellence in metabolomics, pharmacogenomics and informatics (over sixteen academic centers funded by NIGMS), has illustrated for the first time the power of the pharmacometabolomics approach in informing about treatment outcomes in large clinical studies, using drugs that include antidepressants, statins, antihypertensives, antiplatelet therapies and antipsychotics. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ 33 ] [ 34 ] [ 35 ] Entirely new concepts emerged from these studies on the use of pharmacometabolomics as a tool that could bring about a paradigm shift in the field of pharmacology, illustrating how pharmacometabolomics can enable a Quantitative and Systems Pharmacology approach. [ 2 ] Pharmacometabolomics has been applied to the treatment of numerous human diseases, such as schizophrenia , diabetes , neural disease , depression and cancer . [ 1 ]
As metabolite analyses are conducted at the individual patient level, pharmacometabolomics may be considered a form of personalized medicine . This field is currently being employed in a predictive manner to determine the potential responses of therapeutic compounds in individual patients, allowing for more customized treatment regimens. It is anticipated that such pharmacometabolomics approaches will lead to an improved ability to predict an individual's response to a compound, its efficacy and metabolism, as well as adverse or off-target effects that may take place in the body. The metabolism of certain drugs varies from patient to patient, as the copy number of the genes which code for common drug-metabolizing enzymes varies within the population and leads to differences in the ability of an individual to metabolize different compounds. [ 36 ] Other important personal factors contributing to an individual's metabolic profile, such as patient nutritional status, commensal bacteria , age, and pre-existing medical conditions, are also reflected in metabolite assessment. [ 5 ] [ 13 ] Overall, pharmacometabolomic analyses, combined with approaches such as pharmacogenetics , can identify the metabolic processes and particular genetic alterations that may compromise the anticipated efficacy of a drug in a particular patient. The results of such analyses can then allow modification of treatment regimens for an optimal outcome. [ 11 ] [ 12 ] [ 37 ]
Pharmacometabolomics may be used in a predictive manner to determine the correct course of action for a patient about to undergo some type of drug treatment. This involves determining the metabolic profile of a patient prior to treatment, and correlating metabolic signatures with the outcome of a pharmaceutical treatment course. Analysis of a patient's metabolic profile can reveal factors that may contribute to altered drug metabolism , allowing for predictions of the overall efficacy of a proposed treatment, as well as potential drug toxicity risks that may differ from the general population. This approach has been used to identify novel or previously characterized metabolic biomarkers in patients, which can be used to predict the expected outcome of that patient following treatment with a pharmaceutical compound . [ 1 ] [ 37 ] One example of the clinical application of pharmacometabolomics is provided by studies that looked to identify a predictive metabolic marker for the treatment of major depressive disorder (MDD). [ 3 ] [ 8 ] [ 11 ] [ 12 ] [ 14 ] In a study of the antidepressant sertraline, the Pharmacometabolomics Research Network illustrated that the baseline metabolic profile of patients with major depression can inform treatment outcomes. [ 8 ] The study also illustrated the power of metabolomics for defining response to placebo, and a comparison of placebo and sertraline responses showed that several pathways were common to both. [ 8 ] In another study with escitalopram/citalopram, metabolomic analysis of plasma from patients with MDD revealed that variations in glycine metabolism were negatively associated with patient outcome upon treatment with selective serotonin reuptake inhibitors (SSRIs) , an important drug class involved in the treatment of this disease. [ 11 ] [ 12 ]
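As a purely hypothetical illustration of this predictive use (not drawn from the studies cited above), the Python sketch below fits a simple classifier to invented baseline metabolite levels and treatment outcomes; in practice such analyses involve far more metabolites, patients, and validation steps.

```python
# Hypothetical sketch of the predictive idea described above: correlating a
# baseline ("pre-treatment") metabolic profile with a treatment outcome.
# The metabolite values and outcomes are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# rows = patients; columns = baseline metabolite levels (e.g. glycine, serine, ...)
baseline_profiles = np.array([
    [1.2, 0.8, 3.1],
    [0.9, 1.1, 2.7],
    [1.8, 0.6, 3.5],
    [0.7, 1.3, 2.2],
    [1.5, 0.7, 3.0],
    [0.8, 1.2, 2.4],
])
responded = np.array([0, 1, 0, 1, 0, 1])   # 1 = responded to treatment

model = LogisticRegression().fit(baseline_profiles, responded)
new_patient = np.array([[1.0, 1.0, 2.9]])
print("Predicted probability of response:", model.predict_proba(new_patient)[0, 1])
```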
The second major application of pharmacometabolomics is the analysis of a patient's metabolic profile following the administration of a specific therapy. This process is often secondary to a pre-treatment metabolic analysis, allowing for the comparison of pre- and post-treatment metabolite concentrations. This allows for the identification of the metabolic processes and pathways that are being altered by the treatment, either intentionally as a designated target of the compound, or unintentionally as a side effect . Furthermore, the concentration and variety of metabolites produced from the compound itself can also be identified, providing information on the rate of metabolism and potentially leading to development of a related compound with increased efficacy or decreased side effects . An example of this approach was used to investigate the effect of several antipsychotic drugs on lipid metabolism in patients treated for schizophrenia . [ 20 ] It was hypothesized that these antipsychotic drugs may be altering lipid metabolism in treated patients with schizophrenia , contributing to weight gain and hypertriglyceridemia . The study monitored lipid metabolites in patients both before and after treatment with antipsychotics . The compiled pre- and post-treatment profiles were then compared to examine the effect of these compounds on lipid metabolism . The researchers found correlations between treatment with antipsychotic drugs and lipid metabolism , in both a lipid-class-specific and drug-specific manner, [ 20 ] establishing new foundations for the concept that pharmacometabolomics provides powerful tools for detailed mapping of drug effects. Additional studies by the Pharmacometabolomics Research Network enabled mapping, in ways not previously possible, of the effects of statins, [ 4 ] [ 5 ] [ 6 ] [ 17 ] atenolol [ 18 ] and aspirin. [ 7 ] [ 19 ] Entirely new insights were gained into the effects of these drugs on metabolism, highlighting pathways implicated in response and side effects.
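A minimal sketch of the pre- versus post-treatment comparison described above is given below; the metabolite values are invented, and real analyses typically test many metabolites with multiple-testing correction.

```python
# Illustrative sketch of a pre- vs post-treatment comparison for one metabolite
# (e.g. a triglyceride species) across the same patients; values are invented.
from scipy import stats

pre_treatment  = [1.10, 0.95, 1.30, 1.05, 1.20, 0.90]
post_treatment = [1.45, 1.20, 1.60, 1.25, 1.55, 1.10]

t_stat, p_value = stats.ttest_rel(pre_treatment, post_treatment)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```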
In order to identify and quantify metabolites produced by the body, various detection methods have been employed. Most often, these involve the use of nuclear magnetic resonance (NMR) spectroscopy or mass spectrometry (MS) , providing universal detection, identification and quantification of metabolites in individual patient samples. Although both processes are used in pharmacometabolomic analyses, there are advantages and disadvantages to using either NMR-based or MS-based platforms in this application.
NMR spectroscopy has been utilized for the analysis of biological samples since the 1980s, and can be used as an effective technique for the identification and quantification of both known and unknown metabolites. For details on the principles of this technique, see NMR spectroscopy . In pharmacometabolomics analyses, NMR is advantageous because minimal sample preparation is required. Isolated patient samples typically include blood or urine due to their minimally invasive acquisition; however, other fluid types and solid tissue samples have also been studied with this approach. [ 38 ] Due to the minimal preparation of samples before analysis, samples can potentially be fully recovered following NMR analysis (if samples are kept refrigerated to avoid degradation). This permits samples to be repeatedly analysed with extremely high levels of reproducibility, as well as maintaining precious patient samples for an alternative analysis. The high reproducibility and precision of NMR , coupled with relatively fast processing (greater than 100 samples per day), makes this process a relatively high-throughput form of sample analysis. One disadvantage of this technique is the relatively poor metabolite detection sensitivity compared to MS-based analysis, leading to a requirement for greater initial sample volume. [ 38 ] Furthermore, the initial instrument costs are extremely high, for both NMR and MS equipment. [ 1 ]
An alternative approach to the identification and quantification of patient samples is through the use of mass spectrometry. This approach offers excellent precision and sensitivity in the identification, characterization and quantification of metabolites in multiple patient sample types, such as blood and urine. The mass spectrometry (MS) approach is typically coupled to gas chromatography (GC) , in GC-MS or liquid chromatography (LC) , in LC-MS , which aid in initially separating out the metabolite components within complex sample mixtures, and can allow for the isolation of particular metabolite subsets for analysis. GC-MS can provide relatively precise quantification of metabolites, as well as chemical structural information that can be compared to pre-existing chemical libraries. [ 1 ] GC-MS can be conducted in a relatively high-throughput manner (greater than 100 samples per day) with greater detection sensitivity than NMR analysis. A limitation of GC-MS for this application, however, is that processed metabolite components must be readily volatilized for sample processing.
LC-MS initially separates out the components of a sample mixture based on properties such as hydrophobicity, before processing them for identification and quantification by mass spectrometry (MS) . Overall, LC-MS is an extremely flexible method for processing most compound types in a somewhat high-throughput manner (20-100 samples a day), also with greater sensitivity than NMR analysis. For both GC-MS and LC-MS there are limitations in the reproducibility of metabolite quantification. [ 1 ] Furthermore, sample processing for downstream mass spectrometry (MS) analysis is much more intensive than in NMR application, and results in the destruction of the original sample (via trypsin digestion). [ 1 ]
Following identification and quantification of metabolites in individual patient samples, NMR and mass spectrometry (MS) output is compiled into a dataset. These datasets include information on the identity and levels of individual metabolites detected within processed samples, as well as characteristics of each metabolite during the detection process (e.g. mass-to-charge ratios for mass spectrometry (MS) -based analysis). Multiple datasets can be created and compiled into large databases for individual patients in order to monitor varying metabolic profiles over a treatment course (i.e. pre- and post-treatment profiles). Each database is then processed through a type of informatics platform with software designed to characterize and analyze the data to generate an overall metabolic profile for the patient. To generate this overall profile, computational programs carry out a series of data-processing steps, such as those sketched below.
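The exact steps vary between platforms; the sketch below, using invented intensities, shows a few common operations (normalization to total signal, log transformation, and scaling) that such programs might perform.

```python
# Hedged sketch of typical dataset-processing steps (the exact pipeline is not
# specified above): assemble detected metabolite intensities per sample,
# normalize, transform, and scale them into a comparable profile.
import numpy as np
import pandas as pd

raw = pd.DataFrame(
    {"glucose": [520.0, 610.0], "lactate": [88.0, 70.0], "alanine": [33.0, 41.0]},
    index=["pre_treatment", "post_treatment"],
)

normalized = raw.div(raw.sum(axis=1), axis=0)   # normalize to total signal per sample
log_scaled = np.log2(normalized)                # variance-stabilizing transform
profile = (log_scaled - log_scaled.mean()) / log_scaled.std()  # z-score per metabolite
print(profile.round(2))
```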
Along with the emerging diagnostic capabilities of pharmacometabolomics, there are limitations introduced when individual variability is considered. The ability to determine an individual's physiological state by measurement of metabolites is not contested, but the extreme variability that can be introduced by age, nutrition, and commensal organisms suggests problems in creating generalized pharmacometabolomes for patient groups. [ 39 ] However, as long as meaningful metabolic signatures can be elucidated to create baseline values, there still exists a possible means of comparison. [ 10 ]
Issues surrounding the measurement of metabolites in an individual can also arise from the methodology of metabolite detection, and there are arguments both for and against NMR and mass spectrometry (MS) . Other limitations surrounding metabolite analysis include the need for proper handling and processing of samples, as well as proper maintenance and calibration of the analytical and computational equipment. These tasks require skilled and experienced technicians, and instrument repairs necessitated by continuous sample processing can be costly. The cost of the processing and analytical platforms alone is very high, making it difficult for many facilities to afford pharmacometabolomics-based treatment analyses.
Pharmacometabolomics may decrease the burden on the healthcare system by better gauging the correct choice of treatment drug and dosage in order to optimize the response of a patient to a treatment. Hopefully, this approach will also ultimately limit the number of adverse drug reactions (ADRs) associated with many treatment regimens. [ 37 ] Overall, physicians would be better able to apply more personalized, and potentially more effective, treatments to their patients. It is important to consider, however, that the processing and analysis of the patient samples takes time, resulting in delayed treatment.
Another concern about the application of pharmacometabolomics analyses to individual patient care is deciding who should and who should not receive this in-depth, personalized treatment protocol. Certain diseases and stages of disease would have to be classified according to their requirement for such a treatment plan, but there are currently no criteria for this classification. Furthermore, not all hospitals and treatment institutes can afford the equipment to process and analyze patient samples on site, and sending out samples takes time and ultimately delays treatment.
Health insurance coverage of such procedures may also be an issue. Certain insurance companies may discriminate against the application of this type of sample analysis and metabolite characterization. Furthermore, there would have to be regulations put in place to ensure that there was no discrimination by insurance companies against the metabolic profiles of individual patients (“high metabolizers” vs. risky “low metabolizers”). | https://en.wikipedia.org/wiki/Pharmacometabolomics |
In medicinal chemistry and molecular biology , a pharmacophore is an abstract description of molecular features that are necessary for molecular recognition of a ligand by a biological macromolecule . IUPAC defines a pharmacophore to be "an ensemble of steric and electronic features that is necessary to ensure the optimal supramolecular interactions with a specific biological target and to trigger (or block) its biological response". [ 1 ] A pharmacophore model explains how structurally diverse ligands can bind to a common receptor site. Furthermore, pharmacophore models can be used to identify through de novo design or virtual screening novel ligands that will bind to the same receptor.
Typical pharmacophore features include hydrophobic centroids, aromatic rings, hydrogen bond acceptors or donors, cations , and anions . These pharmacophore points may be located on the ligand itself or may be projected points presumed to be located in the receptor.
The features need to match different chemical groups with similar properties, in order to identify novel ligands. Ligand-receptor interactions are typically "polar positive", "polar negative" or "hydrophobic". A well-defined pharmacophore model includes both hydrophobic volumes and hydrogen bond vectors.
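As a minimal sketch of how pharmacophore-type features can be extracted in practice, the Python snippet below uses the open-source RDKit toolkit and its default feature definitions; the choice of aspirin as the example ligand is arbitrary, and dedicated pharmacophore modelling software offers far richer functionality.

```python
# Minimal sketch, assuming the open-source RDKit toolkit: list pharmacophore-type
# features (hydrogen-bond donors/acceptors, aromatic rings, hydrophobes, ...) of a
# molecule using RDKit's default feature definitions.
import os
from rdkit import Chem, RDConfig
from rdkit.Chem import ChemicalFeatures

factory = ChemicalFeatures.BuildFeatureFactory(
    os.path.join(RDConfig.RDDataDir, "BaseFeatures.fdef")
)
mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")   # aspirin, as an example ligand
for feat in factory.GetFeaturesForMol(mol):
    print(feat.GetFamily(), feat.GetType(), feat.GetAtomIds())
```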
The process for developing a pharmacophore model generally involves a series of steps, typically beginning with the selection of a training set of active ligands, followed by conformational analysis, molecular superimposition, and abstraction of the shared features into a model that is then validated.
As the biological activities of new molecules become available, the pharmacophore model can be updated to further refine it.
In modern computational chemistry , pharmacophores are used to define the essential features of one or more molecules with the same biological activity . A database of diverse chemical compounds can then be searched for more molecules which share the same features arranged in the same relative orientation. Pharmacophores are also used as the starting point for developing 3D-QSAR models. Such tools and a related concept of "privileged structures", which are "defined as molecular frameworks which are able of providing useful ligands for more than one type of receptor or enzyme target by judicious structural modifications", [ 3 ] aid in drug discovery . [ 4 ]
Historically, the modern idea of pharmacophore was popularized by Lemont Kier , who mentions the concept in 1967 [ 5 ] and uses the term in a publication in 1971. [ 6 ] Nevertheless, F. W. Shueler , in a 1960s book, [ 7 ] uses the expression "pharmacophoric moiety" that corresponds to the modern concept.
The development of the concept is often erroneously accredited to Paul Ehrlich . However neither the alleged source [ 8 ] nor any of his other works mention the term "pharmacophore" or make use of the concept. [ 9 ]
A number of computer software packages enable the user to model pharmacophores using a variety of computational chemistry methods. | https://en.wikipedia.org/wiki/Pharmacophore
Pharmacotoxicology entails the study of the consequences of toxic exposure to pharmaceutical drugs and agents in the health care field. The field of pharmacotoxicology also involves the treatment and prevention of pharmaceutically induced side effects . Pharmacotoxicology can be separated into two different categories: pharmacodynamics (the effects of a drug on an organism), and pharmacokinetics (the effects of the organism on the drug).
There are many mechanisms by which pharmaceutical drugs can have toxic implications. A very common mechanism is covalent binding of either the drug or its metabolites to specific enzymes or receptors in tissue-specific pathways, which then elicits toxic responses. Covalent binding can occur during both on-target and off-target situations and after biotransformation .
On-target toxicity is also referred to as mechanism-based toxicity. This type of adverse effect that results from pharmaceutical drug exposure is commonly due to interactions of the drug with its intended target. In this case, both the therapeutic and toxic targets are the same. To avoid toxicity during treatment, many times the drug needs to be changed to target a different aspect of the illness or symptoms. Statins are an example of a drug class that can have toxic effects at the therapeutic target ( HMG CoA reductase ). [ 1 ]
Some pharmaceuticals can initiate allergic reactions, as in the case of penicillins . In some people, administration of penicillin can induce production of specific antibodies and initiate an immune response. Activation of this response when unwarranted can cause severe health concerns and prevent proper immune system functioning. [ 1 ] Immune responses to pharmaceutical exposure can be very common in accidental contamination events. Tamoxifen , a selective estrogen receptor modulator , has been shown to alter the humoral adaptive immune response in gilthead seabream. [ 2 ] In this case, pharmaceuticals can produce adverse effects not only in humans, but also in organisms that are unintentionally exposed.
Adverse effects at targets other than those desired for pharmaceutical treatments often occur with drugs that are nonspecific. If a drug can bind to unexpected proteins, receptors, or enzymes that can alter different pathways other than those desired for treatment, severe downstream effects can develop. An example of this is the drug eplerenone (an aldosterone receptor antagonist), which should increase aldosterone levels but has been shown to produce atrophy of the prostate. [ 3 ]
Bioactivation is a crucial step in the activity of certain pharmaceuticals. Often, the parent form of the drug is not the active form and it needs to be metabolized in order to produce its therapeutic effects. In other cases, bioactivation is not necessarily needed for drugs to be active and can instead produce reactive intermediates that initiate stronger adverse effects than the original form of the drug. Bioactivation can occur through the action of Phase I metabolic enzymes, such as cytochrome P450 or peroxidases . Reactive intermediates can cause a loss of function in some enzymatic pathways or can promote the production of reactive oxygen species , both of which can increase stress levels and alter homeostasis .
Drug-drug interactions can occur when certain drugs are administered at the same time. Effects of this can be additive (outcome is greater than those of one individual drug), less than additive (therapeutic effects are less than those of one individual drug), or functional alterations (one drug changes how another is absorbed, distributed, and metabolized). [ 4 ] Drug-drug interactions can be of serious concern for patients who are undergoing multi-drug therapies. [ 5 ] Coadministration of chloroquine , an anti-malaria drug, and statins for treatment of cardiovascular diseases has been shown to cause inhibition of organic anion-transporting polypeptides (OATPs) and lead to systemic statin exposure. [ 5 ]
There are many different pharmaceutical drugs that can produce adverse effects after biotransformation, interaction with alternate targets, or through drug-drug interactions. All pharmaceuticals can be toxic, depending on the dose. [ 6 ]
Acetaminophen (APAP) is a very common drug used to treat pain. High doses of acetaminophen have been shown to produce severe hepatotoxicity after being biotransformed to produce reactive intermediates. Acetaminophen is metabolized by CYP2E1 to produce NAPQI , which then causes significant oxidative stress due to increased reactive oxygen species (ROS). [ 7 ] ROS can cause cellular damage in a multitude of ways, including DNA and mitochondrial damage and depletion of antioxidants such as glutathione . In terms of drug-drug interactions, acetaminophen activates CAR , a nuclear receptor involved in the production of metabolic enzymes, which increases the metabolism of other drugs. This could either cause reactive intermediates/drug activity to persist for longer than necessary, or cause the drug to be cleared more quickly than normal, preventing any therapeutic action from occurring. Ethanol induces CYP2E1 enzymes in the liver, which can lead to increased NAPQI formation in addition to that formed by acetaminophen. [ 7 ]
Aspirin is an NSAID used to treat inflammation and pain. Overdoses or treatments in conjunction with other NSAIDs can produce additive effects, which can lead to increased oxidative stress and ROS activity. Chronic exposure to aspirin can lead to CNS toxicity and eventually affect respiratory function. [ 8 ]
Anti-depressants have been prescribed since the 1950s, and their prevalence has significantly increased since then. There are many classes of anti-depressant pharmaceuticals, such as selective serotonin reuptake inhibitors (SSRIs), monoamine oxidase inhibitors (MAOIs), and tricyclic anti-depressants . Many of these drugs, especially the SSRIs, function by blocking the metabolism or reuptake of neurotransmitters to treat depression and anxiety. Chronic exposure or overdose of these pharmaceuticals can lead to serotonin and CNS hyperexcitation, weight changes, and, in severe cases, suicide. [ 8 ]
Doxorubicin is a very effective anti-cancer drug that causes congestive heart failure while treating tumors. [ 7 ] Doxorubicin is an uncoupling agent in that it inhibits proper functioning of complex I of the electron transport chain in mitochondria. It then leads to the production of ROS and the inhibition of ATP production. Doxorubicin has been shown to be selectively toxic to cardiac tissue, although some toxicity has been seen in other tissues as well. [ 7 ] Other anti-cancer drugs, such as fluoropyrimidines and taxanes , are extremely effective at treating and reducing tumor proliferation, but have high incidences of cardiac arrhythmias and myocardial infarctions. [ 9 ] | https://en.wikipedia.org/wiki/Pharmacotoxicology |
Pharmacovigilance ( PV , or PhV ), also known as drug safety , is the pharmaceutical science relating to the "collection, detection, assessment, monitoring, and prevention" of adverse effects with pharmaceutical products . [ 1 ] : 7
The etymological roots for the word "pharmacovigilance" are: pharmakon (Greek for drug) and vigilare (Latin for to keep watch). As such, pharmacovigilance heavily focuses on adverse drug reactions (ADR), which are defined as any response to a drug which is noxious and unintended. The definition includes lack of efficacy, and it applies to doses normally used for the prevention, diagnosis, or treatment of a disease or, especially in the case of a device, for the modification of physiological function. In 2010, the European Union expanded PV to include [ 2 ] medication errors such as overdose, misuse, and abuse of a drug as well as drug exposure during pregnancy and breastfeeding. These are monitored even in the absence of an adverse event, because they may result in an adverse drug reaction. [ 3 ] The US FDA has long considered such criteria to conform to reportable and collectible PV standards.
Patient and healthcare provider reports (via pharmacovigilance agreements or national mandated reporting laws), as well as other sources such as cases reported in medical literature , play a critical role in providing the data necessary for pharmacovigilance to take place. In order to market or to test a pharmaceutical product in most countries, adverse event data received by the license holder (usually a pharmaceutical company) must be submitted to the national drug regulatory authority. ( See Adverse event reporting below.)
Ultimately, pharmacovigilance is concerned with identifying the hazards associated with pharmaceutical products and with minimizing the risk of any harm that may come to patients. Companies must conduct a comprehensive drug safety and pharmacovigilance audit to assess their compliance with local, regional, national, or international laws and regulations. This includes ongoing collection of safety data after a product is approved for marketing. [ 4 ]
Pharmacovigilance uses unique terminology. Below are most of the terms used within this article. They are particular to drug safety, although some are used by other disciplines within the pharmaceutical sciences as well.
The European Medicines Agency defines terms in its Guideline on good pharmacovigilance practices (GVP): [ 5 ]
The activity that is most commonly associated with pharmacovigilance (PV), and which consumes a significant number of resources for drug regulatory authorities (or similar government agencies) and drug safety departments in pharmaceutical companies, is that of adverse event reporting. Adverse event (AE) reporting involves the receipt, triage, data entry, assessment, distribution, reporting (if appropriate), and archiving of AE data and documentation. The source of AE reports may include: spontaneous reports from healthcare professionals or patients (or other intermediaries); solicited reports from patient support programs; reports from clinical or post-marketing studies; reports from literature sources; reports from the media (including social media and websites); and reports reported to drug regulatory authorities themselves. For pharmaceutical companies, AE reporting is a regulatory requirement in most countries. AE reporting also provides data to these companies and drug regulatory authorities that play a key role in assessing the risk-benefit profile of a given drug. The following are several facets of AE reporting:
One of the fundamental principles of adverse event reporting is the determination of what constitutes an individual case safety report. During the triage phase of a potential adverse event report, it is important to determine if the "four elements" of a valid individual case safety report are present: (1) an identifiable patient, (2) an identifiable reporter, (3) a suspect drug, and (4) an adverse event.
If one or more of these four elements is missing, the case is not a valid individual case safety report. Although there are no exceptions to this rule, there may be circumstances that require a judgment call. For example, the term "identifiable" may not always be clear-cut. If a physician reports that he/she has a patient X taking drug Y who experienced Z (an AE), but refuses to provide any specifics about patient X, the report is still a valid case even though the patient is not specifically identified. This is because the reporter has first-hand information about the patient, who is identifiable (i.e. a real person) to the physician. Identifiability is important not only to prevent duplicate reporting of the same case, but also to permit follow-up for additional information.
The concept of identifiability also applies to the other three elements. Although uncommon, it is not unheard of for fictitious adverse event "cases" to be reported to a company by an anonymous individual (or on behalf of an anonymous patient, disgruntled employee, or former employee) trying to damage the company's reputation or a company's product. In these and all other situations, the source of the report should be ascertained (if possible). But anonymous reporting is also important, as whistle blower protection is not granted in all countries. In general, the drug must also be specifically named. Note that in different countries and regions of the world, drugs are sold under various tradenames. In addition, there are a large number of generics which may be mistaken for the trade product. Finally, there is the problem of counterfeit drugs producing adverse events. If at all possible, it is best to try to obtain the sample which induced the adverse event, and send it to either the European Medicines Agency , FDA or other government agency responsible for investigating AE reports.
If a reporter can't recall the name of the drug they were taking when they experienced an adverse event, this would not be a valid case. This concept also applies to adverse events. If a patient states that they experienced "symptoms", but cannot be more specific, such a report might technically be considered valid, but will be of very limited value to the pharmacovigilance department of the company or to drug regulatory authorities. [ 6 ]
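A toy sketch of the "four elements" check described above is shown below; the field names are invented for illustration and do not correspond to any regulatory schema.

```python
# Illustrative sketch of the "four elements" check for a valid individual case
# safety report; field names are assumptions, not a regulatory schema.
def is_valid_icsr(report: dict) -> bool:
    """A case is reportable only if all four minimum elements are present."""
    required = ("identifiable_patient", "identifiable_reporter",
                "suspect_drug", "adverse_event")
    return all(report.get(field) for field in required)

example = {
    "identifiable_patient": "patient known to the reporting physician",
    "identifiable_reporter": "Dr. X (treating physician)",
    "suspect_drug": "Drug Y",
    "adverse_event": "Z",
}
print(is_valid_icsr(example))   # True: all four elements are present
```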
Data on adverse drug reactions may be collected through several study designs and reporting methods:
1. Case-control study ( retrospective study )
2. Prospective study ( cohort study )
3. Population statistics
4. Intensive event reporting
5. Spontaneous reporting of single case reports [ 7 ]
Adverse event coding is the process by which information from an AE reporter, called the "verbatim", is coded using standardized terminology from a medical coding dictionary, such as MedDRA (the most commonly used medical coding dictionary). The purpose of medical coding is to convert adverse event information into terminology that can be readily identified and analyzed. For instance, Patient 1 may report that they had experienced "a very bad headache that felt like their head was being hit by a hammer" [Verbatim 1] when taking Drug X. Or, Patient 2 may report that they had experienced a "slight, throbbing headache that occurred daily at about two in the afternoon" [Verbatim 2] while taking Drug Y. Neither Verbatim 1 nor Verbatim 2 will exactly match a code in the MedDRA coding dictionary. However, both quotes describe different manifestations of a headache. As a result, in this example both quotes would be coded as PT Headache (PT = Preferred Term in MedDRA). [ 8 ]
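The coding step can be sketched schematically as below; because MedDRA is a licensed dictionary, a tiny keyword map stands in for it here, whereas real coding uses the full MedDRA hierarchy and trained coders or dedicated autocoding software.

```python
# Toy sketch of the coding step described above. A tiny keyword map stands in
# for the licensed MedDRA dictionary; real coding maps verbatims to Lowest Level
# Terms and Preferred Terms within the full hierarchy.
TOY_DICTIONARY = {
    "headache": "Headache",          # maps verbatim keywords to a Preferred Term
    "head was being hit": "Headache",
    "nausea": "Nausea",
}

def code_verbatim(verbatim: str) -> str:
    text = verbatim.lower()
    for keyword, preferred_term in TOY_DICTIONARY.items():
        if keyword in text:
            return preferred_term
    return "UNCODED - requires manual review"

print(code_verbatim("a very bad headache that felt like their head was being hit by a hammer"))
print(code_verbatim("slight, throbbing headache that occurred daily at about two in the afternoon"))
```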
Although somewhat intuitive, there are a set of criteria within pharmacovigilance that are used to distinguish a serious adverse event from a non-serious one. An adverse event is considered serious if it meets one or more of the following criteria: it results in death; it is life-threatening; it requires inpatient hospitalization or prolongs an existing hospitalization; it results in persistent or significant disability or incapacity; it is a congenital anomaly (birth defect); or it is otherwise medically significant.
Aside from death, each of these categories is subject to some interpretation. Life-threatening, as it is used in the drug safety world, specifically refers to an adverse event that places the patient at an immediate risk of death , such as cardiac or respiratory arrest. By this definition, events such as myocardial infarction , which would be hypothetically life-threatening, would not be considered life-threatening unless the patient went into cardiac arrest following the MI. Defining what constitutes hospitalization can be problematic as well. Although typically straightforward, it is possible for a hospitalization to occur even if the events being treated are not serious. By the same token, serious events may be treated without hospitalization; for example, anaphylaxis may be successfully treated with epinephrine. Significant disability and incapacity, as a concept, is also subject to debate. While permanent disability following a stroke would no doubt be serious, would "complete blindness for 30 seconds" be considered "significant disability"? For birth defects, the seriousness of the event is usually not in dispute so much as the attribution of the event to the drug. Finally, "medically significant events" is a category that includes events that may be always serious, or sometimes serious, but will not fulfill any of the other criteria. Events such as cancer might always be considered serious, whereas the seriousness of liver disease depends on its Common Terminology Criteria for Adverse Events (CTCAE) grade: Grades 1 or 2 are generally considered non-serious, and Grades 3-5 may be considered serious. [ 9 ]
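Expressed schematically, the seriousness assessment amounts to checking whether any of the criteria above applies, as in the hypothetical sketch below; in practice the "medically significant" judgment cannot be reduced to a simple flag.

```python
# Hedged sketch of the seriousness criteria discussed above, expressed as flags;
# real assessments also require medical judgment (e.g. "medically significant").
def is_serious(event: dict) -> bool:
    criteria = ("death", "life_threatening", "hospitalization",
                "disability_or_incapacity", "congenital_anomaly",
                "medically_significant")
    return any(event.get(flag, False) for flag in criteria)

print(is_serious({"hospitalization": True}))            # True
print(is_serious({"description": "mild headache"}))     # False
```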
This refers to individual case safety reports that involve a serious and unlisted event (an event not described in the drug's labeling) that is considered related to the use of the drug (US FDA). (Spontaneous reports are typically considered to have a positive causality, whereas a clinical trial case will typically be assessed for causality by the clinical trial investigator and/or the license holder.) In most countries, the time frame for reporting expedited cases is 7/15 calendar days from the time a drug company receives notification (referred to as "Day 0") of such a case. Within clinical trials such a case is referred to as a SUSAR (a Suspected Unexpected Serious Adverse Reaction). If the SUSAR involves an event that is life-threatening or fatal, it may be subject to a 7-day "clock". Cases that do not involve a serious, unlisted event may be subject to non-expedited or periodic reporting.
Also known as AE (adverse event) or SAE (serious AE) reporting from clinical trials, safety information from clinical studies is used to establish a drug's safety profile in humans and is a key component that drug regulatory authorities consider in the decision-making as to whether to grant or deny market authorization (market approval) for a drug. AE reporting occurs when study patients (subjects, participants) experience any kind of "untoward" event during the conducting of clinical trials. Non-serious adverse events are typically captured separately at a level lower than pharmacovigilance. AE and SAE information, which may also include relevant information from the patient's medical background, are reviewed and assessed for both causality and degree of seriousness by the study investigator. This information is forwarded to a sponsoring entity (typically a pharmaceutical company or academic medical center) that is responsible for the reporting of this information, as appropriate, to drug regulatory authorities.
Spontaneous reports are termed spontaneous as they take place during the clinician's normal diagnostic appraisal of a patient, when the clinician is drawing the conclusion that the drug may be implicated in the causality of the event.
Spontaneous reporting system (SRS) relies on vigilant physicians and other healthcare professionals who not only generate a suspicion of an adverse drug reaction, but also report it. It is an important source of regulatory actions such as taking a drug off the market or a label change due to safety problems. Spontaneous reporting is the core data-generating system of international pharmacovigilance, relying on healthcare professionals (and in some countries consumers) to identify and report any adverse events to their national pharmacovigilance center, health authority (such as the European Medicines Agency or FDA), or to the drug manufacturer itself. [ 10 ] Spontaneous reports are, by definition, submitted voluntarily although under certain circumstances these reports may be encouraged, or "stimulated", by media reports or articles published in medical or scientific publications, or by product lawsuits. In many parts of the world adverse event reports are submitted electronically using a defined message standard. [ 11 ] [ 12 ]
One of the major weaknesses of spontaneous reporting is that of under-reporting, where, unlike in clinical trials, less than 100% of those adverse events occurring are reported. Further complicating the assessment of adverse events, AE reporting behavior varies greatly between countries and in relation to the seriousness of the events, but in general probably less than 10% (some studies suggest less than 5%) of all adverse events that occur are actually reported. The rule-of-thumb is that on a scale of 0 to 10, with 0 being least likely to be reported and 10 being the most likely to be reported, an uncomplicated non-serious event such as a mild headache will be closer to a "0" on this scale, whereas a life-threatening or fatal event will be closer to a "10" in terms of its likelihood of being reported. In view of this, medical personnel may not always see AE reporting as a priority, especially if the symptoms are not serious. And even if the symptoms are serious, the symptoms may not be recognized as a possible side effect of a particular drug or combination thereof. In addition, medical personnel may not feel compelled to report events that are viewed as expected. This is why reports from patients themselves are of high value. The confirmation of these events by a healthcare professional is typically considered to increase the value of these reports. Hence it is important not only for the patient to report the AE to his health care provider (who may neglect to report the AE), but also report the AE to both the biopharmaceutical company and the FDA, European Medicines Agency, ... This is especially important when one has obtained one's pharmaceutical from a compounding pharmacy.
As such, spontaneous reports are a crucial element in the worldwide enterprise of pharmacovigilance and form the core of the World Health Organization Database, which includes around 4.6 million reports (January 2009), [ 13 ] growing annually by about 250,000. [ 14 ]
Aggregate reporting, also known as periodic reporting, plays a key role in the safety assessment of drugs. Aggregate reporting involves the compilation of safety data for a drug over a prolonged period of time (months or years), as opposed to single-case reporting which, by definition, involves only individual AE reports. The advantage of aggregate reporting is that it provides a broader view of the safety profile of a drug. Worldwide, the most important aggregate report is the Periodic Safety Update Report (PSUR) and Development Safety Update Report (DSUR). This is a document that is submitted to drug regulatory agencies in Europe, the US and Japan (ICH countries), as well as other countries around the world. The PSUR was updated in 2012 and is now referred to in many countries as the Periodic Benefit Risk Evaluation report (PBRER). As the title suggests, the PBRER's focus is on the benefit-risk profile of the drug, which includes a review of relevant safety data compiled for a drug product since its development.
Some countries legally oblige spontaneous reporting by physicians. In most countries, manufacturers are required to submit, through their Qualified Person for Pharmacovigilance (QPPV), all of the reports they receive from healthcare providers to the national authority. Others have intensive, focused programmes concentrating on new drugs, or on controversial drugs, or on the prescribing habits of groups of doctors, or involving pharmacists in reporting. All of these generate potentially useful information. Such intensive schemes, however, tend to be the exception. A number of countries have reporting requirements or reporting systems specific to vaccine-related events. [ 15 ]
Risk management is the discipline within pharmacovigilance that is responsible for signal detection and the monitoring of the risk-benefit profile of drugs. Other key activities within the area of risk management are that of the compilation of risk management plans (RMPs) and aggregate reports such as the Periodic Safety Update Report (PSUR), Periodic Benefit-Risk Evaluation Report (PBRER), and the Development Safety Update Report (DSUR).
One of the most important, and challenging, problems in pharmacovigilance is that of the determination of causality. Causality refers to the relationship of a given adverse event to a specific drug. Causality determination (or assessment) is often difficult because of the lack of clear-cut or reliable data. While one may assume that a positive temporal relationship might "prove" a positive causal relationship, this is not always the case. Indeed, a "bee sting" adverse event, where the AE can clearly be attributed to a specific cause, is by far the exception rather than the rule. This is due to the complexity of human physiology as well as that of disease and illnesses. By this reckoning, in order to determine causality between an adverse event and a drug, one must first exclude the possibility that there were other possible causes or contributing factors. If the patient is on a number of medications, it may be the combination of these drugs which causes the AE, and not any one individually. There have been a number of recent high-profile cases where the AE led to the death of an individual. The individuals were not overdosed with any one of the many medications they were taking, but the combination thereof appeared to cause the AE. Hence it is important to include in one's AE report not only the drug being reported, but also all other drugs the patient was taking.
For instance, if a patient were to start Drug X and then three days later were to develop an AE, one might be tempted to attribute the blame to Drug X. However, before that can be done, the patient's medical history would need to be reviewed to look for possible risk factors for the AE. In other words, did the AE occur with the drug or because of the drug? This is because a patient on any drug may develop or be diagnosed with a condition that could not have possibly been caused by the drug. This is especially true for diseases, such as cancer, which develop over an extended period of time, being diagnosed in a patient who has been taking a drug for a relatively short period of time. On the other hand, certain adverse events, such as blood clots (thrombosis), can occur with certain drugs with only short-term exposure. Nevertheless, the determination of risk factors is an important step of confirming or ruling out a causal relationship between an event and a drug.
Often the only way to confirm the existence of a causal relationship of an event to a drug is to conduct an observational study where the incidence of the event in a patient population taking the drug is compared to a control group. This may be necessary to determine whether the background incidence of an event is less than that found in a group taking the drug. If the incidence of an event is statistically significantly higher in the "active" group than in the placebo group (or other control group), it is possible that a causal relationship with the drug exists, unless other confounding factors are present.
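As a simple illustration of such a comparison (with invented counts, not data from any real study), the sketch below computes a relative risk and an approximate 95% confidence interval using the usual log-normal approximation.

```python
# Illustrative sketch (invented counts): comparing the incidence of an event in a
# treated group versus a control group via the relative risk (RR) and an
# approximate 95% confidence interval on the log scale.
import math

events_drug, n_drug = 18, 1000       # treated group
events_ctrl, n_ctrl = 6, 1000        # control group

risk_drug = events_drug / n_drug
risk_ctrl = events_ctrl / n_ctrl
rr = risk_drug / risk_ctrl
se_log_rr = math.sqrt(1/events_drug - 1/n_drug + 1/events_ctrl - 1/n_ctrl)
low = math.exp(math.log(rr) - 1.96 * se_log_rr)
high = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI ({low:.2f}, {high:.2f})")
```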
Signal detection involves a range of techniques (CIOMS VIII [ 16 ] ). The WHO defines a safety signal as: "Reported information on a possible causal relationship between an adverse event and a drug, the relationship being unknown or incompletely documented previously". Usually more than a single report is required to generate a signal, depending upon the event and quality of the information available.
Data mining pharmacovigilance databases is one approach that has become increasingly popular with the availability of extensive data sources and inexpensive computing resources. The data sources (databases) may be owned by a pharmaceutical company, a drug regulatory authority, or a large healthcare provider. Individual case safety reports in these databases are retrieved and converted into structured format, and statistical methods (usually a mathematical algorithm) are applied to calculate statistical measures of association. If the statistical measure crosses an arbitrarily set threshold, a signal is declared for a given drug associated with a given adverse event. All signals deemed worthy of investigation require further analysis using all available data in an attempt to confirm or refute the signal. If the analysis is inconclusive, additional data may be needed such as a post-marketing observational trial.
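One widely used statistical measure of association is the proportional reporting ratio (PRR); the sketch below computes it from an invented 2x2 table of spontaneous reports, with a screening threshold that is only one common convention among several in use.

```python
# Sketch of one widely used disproportionality measure, the proportional reporting
# ratio (PRR), computed from a 2x2 table of spontaneous reports. Counts and the
# screening threshold are illustrative; authorities use several such measures.
def prr(a, b, c, d):
    """a: reports of event E with drug D;   b: other events with drug D;
       c: reports of event E with all other drugs;   d: other events, other drugs."""
    return (a / (a + b)) / (c / (c + d))

a, b, c, d = 12, 1988, 240, 497760
value = prr(a, b, c, d)
signal = value >= 2 and a >= 3        # one common screening convention
print(f"PRR = {value:.1f}, signal flagged: {signal}")
```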
Signal detection is an essential part of drug use and safety surveillance. Ideally, the goal of signal detection is to identify adverse drug reactions that were previously considered unexpected and to be able to provide guidance in the product's labeling as to how to minimize the risk of using the drug in a given patient population.
A risk management plan is a documented plan that describes the risks (adverse drug reactions and potential adverse reactions) associated with the use of a drug and how they are being handled (for example, warnings on the drug label or package insert about possible side effects which, if observed, should prompt the patient to inform or see their physician and/or pharmacist, the manufacturer of the drug, or the drug regulatory authority, such as the FDA or the European Medicines Agency). The overall goal of a risk management plan is to assure a positive risk-benefit profile once the drug has been marketed. The document is required to be submitted, in a specified format, with all new market authorization requests within the European Union (EU). Although not necessarily required, risk management plans may also be submitted in countries outside the EU. The risks described in a risk management plan fall into one of three categories: identified risks, potential risks, and unknown risks. Also described within a risk management plan are the measures that the Market Authorization Holder, usually a pharmaceutical company, will undertake to minimize the risks associated with the use of the drug. These measures are usually focused on the product's labeling and healthcare professionals. Indeed, the risks that are documented in a pre-authorization risk management plan will inevitably become part of the product's post-marketing labeling. Since a drug, once authorized, may be used in ways not originally studied in clinical trials, this potential " off-label use ", and its associated risks, is also described within the risk management plan. Risk management plans can be very lengthy documents, running in some cases hundreds of pages and, in rare instances, up to a thousand pages long.
In the US, under certain circumstances, the FDA may require a company to submit a document called a Risk Evaluation and Mitigation Strategy (REMS) for a drug that has a specific risk that FDA believes requires mitigation. While not as comprehensive as a risk management plan, a Risk Evaluation and Mitigation Strategy can require a sponsor to perform certain activities or to follow a protocol, referred to as Elements to Assure Safe Use, [ 17 ] to assure that a positive risk-benefit profile for the drug is maintained for the circumstances under which the product is marketed.
Pharmaceutical companies are required by law in most countries to perform clinical trials , testing new drugs on people before they are made generally available. This occurs after a drug has been pre-screened for toxicity, sometimes using animals for testing. The manufacturers or their agents usually select a representative sample of patients for whom the drug is designed – at most a few thousand – along with a comparable control group. The control group may receive a placebo and/or another drug, often a so-called "gold standard" that is the "best" drug marketed for the disease.
The purpose of clinical trials is to determine whether a drug works and how well it works, whether it has any harmful effects, and what its benefit-harm-risk profile is: does it do more good than harm, and how much more?
Clinical trials do, in general, tell a good deal about how well a drug works. They provide information that should be reliable for larger populations with the same characteristics as the trial group – age, gender, state of health, ethnic origin, and so on – though target clinical populations are typically very different from trial populations with respect to such characteristics. [ citation needed ]
The variables in a clinical trial are specified and controlled, but a clinical trial can never tell you the whole story of the effects of a drug in all situations. In fact, nothing could tell you the whole story, but a clinical trial must tell you enough; "enough" being determined by legislation and by contemporary judgements about the acceptable balance of benefit and harm. Ultimately, when a drug is marketed it may be used in patient populations that were not studied during clinical trials (children, the elderly, pregnant women, patients with co-morbidities not found in the clinical trial population, etc.) and a different set of warnings, precautions or contraindications (where the drug should not be used at all) for the product's labeling may be necessary in order to maintain a positive risk/benefit profile in all known populations using the drug.
Pharmacoepidemiology is the study of the incidence of adverse drug reactions in patient populations using drug agents. [ 18 ]
Although often used interchangeably, there are subtle differences between the two disciplines. Pharmacogenetics is generally regarded as the study or clinical testing of genetic variation that gives rise to differing responses to drugs, including adverse drug reactions. It is hoped that pharmacogenetics will eventually provide information as to which genetic profiles in patients will place those patients at greatest risk, or provide the greatest benefit, for using a particular drug or drugs. Pharmacogenomics , on the other hand, is the broader application of genomic technologies to new drug discovery and further characterization of older drugs.
The following organizations play a key collaborative role in the global oversight of pharmacovigilance.
The principle of international collaboration in the field of pharmacovigilance is the basis for the WHO Programme for International Drug Monitoring, through which over 150 member nations have systems in place that encourage healthcare personnel to record and report adverse effects of drugs in their patients. [ 19 ] These reports are assessed locally and may lead to action within the country. Since 1978, the programme has been managed by the Uppsala Monitoring Centre , to which member countries send their reports to be processed, evaluated and entered into an international database called VigiBase . Membership in the WHO Programme enables a country to know if similar reports are being made elsewhere. [ 20 ] When there are several reports of adverse reactions to a particular drug, this process may lead to the detection of a signal, and an alert about a possible hazard may be communicated to member countries after detailed evaluation and expert review.
The International Council for Harmonisation is a global organization with members from the European Union, the United States and Japan; its goal is to recommend global standards for drug companies and drug regulatory authorities around the world, with its harmonisation activities overseen by a Steering Committee (SC). [ 21 ] Established in 1990, each of its six co-sponsors—the EU, the European Federation of Pharmaceutical Industries and Associations, Japan's Ministry of Health, Labor and Welfare, the Japanese Pharmaceutical Manufacturers Association, the U.S. Food and Drug Administration (FDA), and the Pharmaceutical Research and Manufacturers of America (PhRMA)—has two seats on the SC. Other parties have a significant interest in the International Council for Harmonisation and have been invited to nominate Observers to the SC; three current observers [ when? ] are the WHO, Health Canada , and the European Free Trade Association , with the International Federation of Pharmaceutical Manufacturers Association participating as a non-voting member of the SC. [ 22 ] [ 23 ]
The CIOMS, a part of the WHO, is a globally oriented think tank that provides guidance on drug safety related topics through its Working Groups. [ citation needed ] The CIOMS prepares reports that are used as a reference for developing future drug regulatory policy and procedures, and over the years, many of CIOMS' proposed policies have been adopted. [ citation needed ] Examples of topics these reports have covered include: Current Challenges in Pharmacovigilance: Pragmatic Approaches (CIOMS V); Management of Safety Information from Clinical Trials (CIOMS VI); the Development Safety Update Report (DSUR): Harmonizing the Format and Content for Periodic Safety Reporting During Clinical Trials (CIOMS VII); and Practical Aspects of Signal Detection in Pharmacovigilance: Report of CIOMS Working Group (CIOMS VIII). [ citation needed ]
The International Society of Pharmacovigilance is an international non-profit scientific organization, which aims to foster pharmacovigilance both scientifically and educationally, and enhance all aspects of the safe and proper use of medicines, in all countries. [ 24 ] It was established in 1992 as the European Society of Pharmacovigilance. [ 25 ]
The Society of Pharmacovigilance, India , also established in 1992, is a partner member of the International Society of Pharmacovigilance. Other local societies include the Boston Society of Pharmacovigilance Physicians. [ 26 ]
Drug regulatory authorities play a key role in national or regional oversight of pharmacovigilance. Some of the agencies involved are listed below (in order of 2011 spending on pharmaceuticals, from the IMS Institute for Healthcare Informatics). [ 27 ] [ why? ]
The "pharmerging", or emerging pharmaceutical market economies, which include Brazil, India, Russia, Argentina, Egypt, Indonesia, Mexico, Pakistan, Poland, Romania, South Africa, Thailand, Turkey, Ukraine and Vietnam, accrued one fifth of global 2011 pharmaceutical expenditures; in future, aggregated data for this set will include China as well. [ 27 ]
In Egypt, pharmacovigilance is regulated by the Egyptian Pharmacovigilance Center of the Egyptian Ministry of Health. [ citation needed ]
In Kenya, pharmacovigilance is regulated by the Pharmacy and Poisons Board , which provides a Pharmacovigilance Electronic Reporting System which allows for the online reporting of suspected adverse drug reactions as well as suspected poor quality of medicinal products. [ 28 ] The pharmacovigilance activities in Kenya are supported by the School of Pharmacy, University of Nairobi through its Master of Pharmacy in Pharmacoepidemiology & Pharmacovigilance program offered by the Department of Pharmacology and Pharmacognosy. [ 29 ]
In Uganda, pharmacovigilance is regulated by the National Drug Authority . [ citation needed ]
In Canada, with ~2% of all global 2006 and 2011 pharmaceutical expenditures, [ 27 ] pharmacovigilance is regulated by the Marketed Health Products Directorate of the Health Products and Food Branch . [ 30 ] Canada was second, following the United States, in holding the highest total prescription drug expenditures per capita in 2011, at around 750 US dollars per person. Canada also spends so much on pharmaceuticals that it was second, after Switzerland, in the amount of money spent for a given quantity of prescription drugs (around 130 US dollars). [ clarification needed ] It was also assessed that Canada was one of the countries that increased its average yearly per capita growth in pharmaceutical expenditures the most from 2000 to 2010, at 4 percent a year (taking inflation into account). [ 31 ] The Marketed Health Products Directorate mainly collects adverse drug reaction reports through a network of reporting centers to analyze and issue possible warnings to the public, and currently utilizes newsletters, advisories, adverse reaction centers, as well as electronic mailing lists. However, it does not currently maintain a database or list of drugs removed from Canada as a result of safety concerns. [ 32 ] In August 2017, there was a government controversy in which a bill, known as "Vanessa's Law", to protect patients from potentially dangerous prescription drugs was not being fully realized by hospitals; Health Canada only [ weasel words ] required hospitals to report "unexpected" negative reactions to prescription drugs, rather than any and all adverse reactions, with the justification of managing "administrative overload". [ 33 ]
According to BioPharm International, as of April 2013 "there is no Latin American equivalent of the European Medicines Agency—no common body with the power to facilitate greater consistency across countries". [ 34 ] For simplicity, and per sources, 17 smaller economies (Bolivia, Chile, Colombia, Costa Rica, Cuba, Dominican Republic, Ecuador, El Salvador, Guatemala, Haiti, Honduras, Nicaragua, Panama, Paraguay, Peru, Suriname, and Uruguay) are discussed alongside the 4 larger pharmerging economies of Argentina, Brazil, Mexico and Venezuela. [ 35 ] As of June 2012, 16 of this total of 21 countries have systems for immediate reporting and 9 have systems for periodic reporting of adverse events for on-market agents, while 10 and 8, respectively, have systems for immediate and periodic reporting of adverse events during clinical trials; most of these have pharmacovigilance requirements that rank as "high or medium...in line with international standards" ( ibid. ). [ full citation needed ] The WHO's Pan American Network for Drug Regulatory Harmonization [ 36 ] seeks to assist Latin American countries in developing harmonized pharmacovigilance regulations. [ 35 ]
In the U.S., with about a third of all global 2011 pharmaceutical expenditures, [ 27 ] the drug industry is regulated by the Food and Drug Administration , the largest national drug regulatory authority in the world. [ citation needed ] FDA authority is exercised through enforcement of regulations derived from legislation, as published in the U.S. Code of Federal Regulations (CFR); the principal drug safety regulations are found in 21 CFR Part 312 (IND regulations) and 21 CFR Part 314 (NDA regulations). [ citation needed ] While those regulatory efforts address pre-marketing concerns, pharmaceutical manufacturers and academic/non-profit organizations such as Research on Adverse Drug events And Reports (RADAR) and Public Citizen do play a role in pharmacovigilance in the US. [ citation needed ] The post-legislative rule-making process of the U.S. federal government provides for significant input from both the legislative and executive branches, which also play specific, distinct roles in determining FDA policy. [ citation needed ]
The law on pharmacovigilance in Azerbaijan was revised and implemented as part of the "Regulation of Pharmacovigilance for Medicinal Products" in 2019. This regulation was developed to establish state control over the effectiveness and safety of medicinal products. It outlines measures to detect, evaluate, and prevent adverse reactions and other undesirable effects of medicinal products, applying to marketing authorization holders and all health institutions in Azerbaijan.
In Azerbaijan, the Ministry of Healthcare and other relevant state authorities play a crucial role in the functioning of the pharmacovigilance system. These organizations implement various regulatory and oversight mechanisms to ensure drug safety.
Worldwide company Pharmcontrol [ 37 ] offers a full range of pharmacovigilance services in Azerbaijan to ensure the safety and effectiveness of medicines on the market. [ relevant? ] With a team of highly qualified, certified pharmacists, Pharmcontrol ensures the effective monitoring and management of drug safety. The company aligns with international practices and standards, helping to elevate the country's drug safety levels.
The development of pharmacovigilance in Azerbaijan aims to increase public awareness about the safe use of medicines and improve the overall quality of the healthcare system. The integration of international standards and best practices in pharmacovigilance, spearheaded by companies like Pharmcontrol, contributes significantly to this goal, ensuring that the safety of patients is always a top priority.
China is anticipated to pass Japan to become second in the ranking of individual countries' pharmaceutical purchases by 2015, and so its pharmacovigilance regulation will become increasingly important; China regulates pharmacovigilance through its National Center for Adverse Drug Reaction Monitoring, under China's Ministry of Health . [ 38 ]
In India, the pharmacovigilance regulatory authority is the Indian Pharmacopoeia Commission , with a National Coordination Centre under the Pharmacovigilance Program of India (PvPI), in the Ministry of Health and Family Welfare. [ 39 ] [ 40 ] Scientists working on pharmacovigilance share their experiences, findings, innovative ideas and research during the annual meeting of the Society of Pharmacovigilance, India . [ citation needed ]
In Iraq, pharmacovigilance is regulated by the Iraqi Pharmacovigilance Center of the Iraqi Ministry of Health . [ 27 ] [ citation needed ]
In Japan, with ~12% of all global 2011 pharmaceutical expenditures, [ 27 ] pharmacovigilance matters are regulated by the Pharmaceuticals and Medical Devices Agency and the Ministry of Health, Labour, and Welfare . [ citation needed ]
In the Republic of Korea , with ~1% of all global 2011 pharmaceutical expenditures, [ 27 ] pharmacovigilance matters are regulated by the Ministry of Food and Drug Safety . [ citation needed ]
The European "Big Four" (France, Germany, Italy and the United Kingdom), along with Spain, accrued ~17% of global 2011 pharmaceutical expenditures. [ 27 ] The remaining EU and non-EU countries outside of France, Germany, Italy, the United Kingdom and Spain accrued ~7% of global 2011 pharmaceutical expenditures. [ 27 ] Regulation of those outside the EU being managed by specific governmental agencies.
Pharmacovigilance efforts in the European Union are coordinated by the European Medicines Agency and are conducted by the national competent authorities (NCAs). [ citation needed ] The main responsibility of the European Medicines Agency is to maintain and develop the pharmacovigilance database consisting of all suspected serious adverse reactions to medicines observed in the European Community ; the data processing network and management system is called EudraVigilance and contains separate but similar databases of human and veterinary reactions. [ 41 ] The European Medicines Agency requires the individual marketing authorization holders to submit all received adverse reactions in electronic form, except in exceptional circumstances; the reporting obligations of the various stakeholders are defined by EEC [ clarification needed ] legislation, namely regulation (EC) No 726/2004, and for human medicines, European Union Directive 2001/83/EC as amended and Directive 2001/20/EC . [ citation needed ] In 2002, Heads of Medicines Agencies [ 42 ] agreed on a mandate for an ad hoc working group on establishing a European risk management strategy; the working group considered the conduct of a high level survey of EU pharmacovigilance resources to promote the utilization of expertise and encourage collaborative working. [ citation needed ] In conjunction with this oversight, individual countries maintain their distinct regulatory agencies with pharmacovigilance responsibility. [ 43 ] Good Pharmacovigilance Practices (GVP) is a set of guidelines that apply to the EU member states. [ 44 ]
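Electronic submission of suspected adverse reactions implies a structured case record. The sketch below is a rough, hypothetical illustration of such a record only; the real EudraVigilance submissions follow the ICH E2B specification, which is far more detailed, and every field name here is invented for the example.

```python
# Minimal, hypothetical sketch of an individual case safety report (ICSR)-style
# record; real electronic submissions follow the ICH E2B specification and
# contain many more structured, coded fields than shown here.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class SuspectedAdverseReaction:
    case_id: str                 # sender's unique case identifier (invented field name)
    medicinal_product: str       # suspect product as reported
    reaction_term: str           # reported reaction, ideally coded (e.g. a MedDRA term)
    serious: bool                # seriousness flag per reporting criteria
    onset_date: date
    reporter_qualification: str  # e.g. "physician", "pharmacist", "consumer"

report = SuspectedAdverseReaction(
    case_id="XX-2024-000123",
    medicinal_product="Examplecillin 500 mg",
    reaction_term="Rash",
    serious=False,
    onset_date=date(2024, 3, 14),
    reporter_qualification="pharmacist",
)

# Serialize to JSON as a stand-in for whatever wire format a gateway expects.
print(json.dumps(asdict(report), default=str, indent=2))
```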
In Spain, pharmacovigilance is regulated by the Spanish Agency of Medicines and Medical Devices , which can suspend or withdraw the authorization of pharmaceuticals already on-market if the evidence shows that safety (or quality or efficacy) of an agent are unsatisfactory. [ 45 ]
In Switzerland, pharmacovigilance "inspections" for clinical trials of medicinal products are conducted by the Swiss Agency for Therapeutic Products (Swissmedic). [ 46 ]
Despite attention from the FDA and regulatory agencies of the European Union, procedures for monitoring drug concentrations and adverse effects in the environment are lacking. [ citation needed ] Pharmaceuticals, their metabolites, and related substances may enter the environment after patient excretion, after direct release to waste streams during manufacturing or administration, or via terrestrial deposits (e.g., from waste sludges or leachates ). [ 47 ] A concept combining pharmacovigilance and environmental pharmacology, intended to focus attention on this area, was introduced first as pharmacoenvironmentology in 2006 by Syed Ziaur Rahman and later as ecopharmacology with further concurrent and later terms for the same concept (ecopharmacovigilance, environmental pharmacology, ecopharmacostewardship). [ 47 ] [ 48 ] [ 49 ] [ 50 ]
The first of these routes to the environment, elimination through living organisms subsequent to pharmacotherapy, is suggested as the principal source of environmental contamination (apart from cases where norms for treatment of manufacturing and other wastes are violated), and ecopharmacovigilance is intended to deal specifically with this impact of pharmacological agents on the environment. [ 47 ] [ 51 ]
Various specific activities have been suggested as part of ecopharmacovigilance.
A medical device is an instrument, apparatus, implant, in vitro reagent, or similar or related article that is used to diagnose, prevent, or treat disease or other conditions, and does not achieve its purposes through chemical action within or on the body (which would make it a drug ). Whereas medicinal products (also called pharmaceuticals) achieve their principal action by pharmacological, metabolic or immunological means, medical devices act by physical, mechanical, or thermal means. Medical devices vary greatly in complexity and application. Examples range from simple devices such as tongue depressors , medical thermometers , and disposable gloves to advanced devices such as medical robots , cardiac pacemakers , and neuroprosthetics . This modern concept of the monitoring and safety of medical devices, known as materiovigilance, was documented in the Unani system of medicine. [ 52 ]
Given the inherent difference between medicinal products and medical devices, the vigilance of medical devices is also different from that of medicinal products. To reflect this difference, a classification system has been adopted in some countries to stratify the risk of failure with the different classes of devices. The classes of devices typically run on a 1-3 or 1-4 scale, with Class 1 being the least likely to cause significant harm with device failure versus Classes 3 or 4 being the most likely to cause significant harm with device failure. An example of a device in the "low risk" category would be contact lenses. An example of a device in the "high risk" category would be cardiac pacemakers.
Medical device reporting (MDR), which is the reporting of adverse events with medical devices, is similar to that with medicinal products, although there are differences. In contrast to reporting for medicinal products, reports of side effects play only a minor role with most medical devices; the vast majority of medical device reports are related to device defects or failures. Other notable differences lie in the reporting obligations of actors other than manufacturers: in the US, user facilities such as hospitals and nursing homes are legally required to report suspected medical device-related deaths to both the FDA and the manufacturer, if known, and serious injuries to the manufacturer, or to the FDA if the manufacturer is unknown. [ 53 ] This is in contrast to the voluntary reporting of AEs with medicinal products. Similar obligations exist in multiple European countries. The European regulation on medical devices [ 54 ] and the European regulation on in vitro diagnostic medical devices ( IVDR ) [ 55 ] oblige other economic operators, most notably importers and distributors, to inform manufacturers, and in certain instances the authorities, of incidents and safety issues with medical devices that they have distributed or imported in the European market.
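The U.S. user-facility obligations described above amount to a simple routing rule. The sketch below encodes that rule exactly as summarized in this paragraph; it is an illustration of the logic only, not an implementation of the FDA's actual MedWatch/MDR submission process, and the function and event labels are invented.

```python
# Hypothetical sketch of the U.S. user-facility routing rule for medical device
# reports, as summarized in the text: deaths go to both FDA and the manufacturer
# (if known); serious injuries go to the manufacturer, or to FDA when the
# manufacturer is unknown. Not a substitute for the actual regulations.

def mdr_recipients(event: str, manufacturer_known: bool) -> list:
    """Return who a user facility reports to for a given event type."""
    if event == "death":
        return ["FDA", "manufacturer"] if manufacturer_known else ["FDA"]
    if event == "serious injury":
        return ["manufacturer"] if manufacturer_known else ["FDA"]
    return []  # other device problems may be reported voluntarily

print(mdr_recipients("death", manufacturer_known=True))            # ['FDA', 'manufacturer']
print(mdr_recipients("serious injury", manufacturer_known=False))  # ['FDA']
```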
The safety of herbal medicines has become a major concern to both national health authorities and the general public. [ 56 ] [ full citation needed ] The use of herbs as traditional medicines continues to expand rapidly [ vague ] across the world; many people [ vague ] now take herbal medicines or herbal products for their health care in different national health-care settings. [ vague ] [ citation needed ] However, mass media reports [ which? ] of adverse events with herbal medicines can be incomplete and therefore misleading. [ citation needed ] Moreover, it can be difficult to identify the causes of herbal medicine-associated adverse events since the amount of data on each event is generally less than for pharmaceuticals formally regulated as drugs (since the requirements for adverse event reporting are either non-existent or are less stringent for herbal supplements and medications). [ 57 ]
With the emergence of advanced artificial intelligence methods and social media big data, researchers are now using publicly posted social media data to discover unknown side effects of prescription medications. [ 58 ] Natural language processing and machine learning methods are developed and used for identifying non-standard expressions of side effects.
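As a minimal illustration of the idea rather than any particular published system, the sketch below flags social-media style posts that mention a drug together with a colloquial side-effect expression using simple keyword matching; the drug list and phrase lexicon are invented, and production systems use trained NLP models and far larger vocabularies mapped to coded terminologies.

```python
import re

# Tiny, hypothetical lexicons; real systems map colloquial phrases to coded
# terms (e.g. MedDRA) with trained NLP models rather than keyword lists.
DRUGS = {"metformin", "atorvastatin"}
SIDE_EFFECT_PHRASES = {
    "can't sleep": "insomnia",
    "stomach hurts": "abdominal pain",
    "dizzy": "dizziness",
}

def extract_candidate_reports(post: str) -> list:
    """Return (drug, standardized side effect) pairs co-mentioned in a post."""
    text = post.lower()
    drugs = [d for d in DRUGS if re.search(rf"\b{re.escape(d)}\b", text)]
    effects = [coded for phrase, coded in SIDE_EFFECT_PHRASES.items() if phrase in text]
    return [(d, e) for d in drugs for e in effects]

post = "Started metformin last week and now my stomach hurts all the time."
print(extract_candidate_reports(post))  # [('metformin', 'abdominal pain')]
```

Candidate pairs surfaced this way would still require the same causality assessment and expert review applied to conventionally reported cases.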
| https://en.wikipedia.org/wiki/Pharmacovigilance
The Pharmacovigilance Programme of India ( PvPI ) is an Indian government organization which identifies and responds to drug safety problems. [ 1 ] Its activities include receiving reports of adverse drug events and taking necessary action to remedy problems. [ 1 ] The Central Drugs Standard Control Organisation established the program in July 2010 [ 1 ] [ 2 ] with All India Institute of Medical Sciences, New Delhi as the National Coordination Centre , which later shifted to Indian Pharmacopoeia Commission in Ghaziabad on 15 April 2011. [ 1 ] [ 2 ]
Many developed countries set up their pharmacovigilance programs following the Thalidomide scandal in the 1960s. [ 2 ] India set up its program in the 1980s. [ 2 ] This general concept of drug safety monitoring went through different forms, but the Central Drugs Standard Control Organisation established the present Pharmacovigilance Program of India in 2010. [ 2 ] Now the program is well integrated with government legislation, a regulator as leader, and a research center as part of the Indian Pharmacopoeia Commission . [ 2 ]
As of 2018 there were 250 centers around India capable of responding to reports of serious adverse reactions . [ 2 ] One of the challenges of the organization is training doctors and hospitals to report adverse drug reactions when patients have them. [ 3 ] The Pharmacovigilance Program makes these reports itself, but ideally, such reports could originate from any clinic. [ 3 ] The Pharmacovigilance Programme seeks to encourage a culture and social expectation of reporting drug problems. [ 3 ]
One of the successes of the program was detecting adverse effects of carbamazepine in people in India. [ 3 ] [ 4 ] While this drug is safer among people native to Europe, people of South Asia have different genetics and are more likely to experience problems when using it. [ 3 ] [ 4 ] Other countries might not have been able to detect this problem, and the Pharmacovigilance Programme's detection of it was a success story. [ 3 ] [ 4 ]
The establishment of the Pharmacovigilance Program made India a more attractive international destination for foreign companies to bring clinical trials research. [ 5 ] Understanding the quality of India's pharmacovigilance programme is key to international researchers conducting trials in India. [ 6 ]
The program collaborates both in India and internationally with the World Health Organization on projects for safe medication. [ 7 ] [ 2 ] As a collaborating center, the Pharmacovigilance Programme assists the WHO in developing international policy for other countries to manage their own drug safety programs. [ 7 ]
While the United States and Europe have pharmacovigilance systems that are well developed in some respects, the Indian programme has more specialized expertise to apply to the unique circumstances of India. [ 8 ] The pharmaceutical industry in India produces more drugs than any other national industry. [ 8 ] Because of the large volume of drugs and the many countries which import them, the Indian programme monitors, in some respects, more than any other. [ 8 ] | https://en.wikipedia.org/wiki/Pharmacovigilance_Programme_of_India
Pharmacy is the science and practice of discovering, producing, preparing, dispensing, reviewing and monitoring medications , aiming to ensure the safe, effective, and affordable use of medicines . It is an interdisciplinary science, linking health sciences with pharmaceutical sciences and natural sciences . The professional practice is becoming more clinically oriented as most of the drugs are now manufactured by pharmaceutical industries. Based on the setting, pharmacy practice is classified as either community or institutional pharmacy. Providing direct patient care in community or institutional pharmacies is considered clinical pharmacy . [ 1 ]
The scope of pharmacy practice includes more traditional roles such as compounding and dispensing of medications. It also includes more modern services related to health care including clinical services, reviewing medications for safety and efficacy, and providing drug information with patient counselling. Pharmacists , therefore, are experts on drug therapy and are the primary health professionals who optimize the use of medication for the benefit of the patients.
An establishment in which pharmacy (in the first sense) is practiced is called a pharmacy (this term is more common in the United States) or chemists (which is more common in Great Britain, though pharmacy is also used). [ citation needed ] In the United States and Canada, drugstores commonly sell medicines, as well as miscellaneous items such as confectionery , cosmetics , office supplies , toys , hair care products and magazines , and occasionally refreshments and groceries.
In its investigation of herbal and chemical ingredients, the work of the apothecary may be regarded as a precursor of the modern sciences of chemistry and pharmacology , prior to the formulation of the scientific method . [ citation needed ]
The field of pharmacy can generally be divided into various disciplines:
The boundaries between these disciplines and with other sciences, such as biochemistry, are not always clear-cut.
Often, collaborative teams from various disciplines (pharmacists and other scientists) work together toward the introduction of new therapeutics and methods for patient care. However, pharmacy is not a basic or biomedical science in its typical form. Medicinal chemistry is also a distinct branch of synthetic chemistry combining pharmacology, organic chemistry, and chemical biology.
Pharmacology is sometimes considered the fourth discipline of pharmacy. Although pharmacology is essential to the study of pharmacy, it is not specific to pharmacy. Both disciplines are distinct. Those who wish to practice both pharmacy (patient-oriented) and pharmacology (a biomedical science requiring the scientific method) receive separate training and degrees unique to either discipline.
Pharmacoinformatics is considered another new discipline, for systematic drug discovery and development with efficiency and safety.
Pharmacogenomics is the study of genetic-linked variants that affect patient clinical responses, allergies, and metabolism of drugs. [ 2 ]
The World Health Organization estimates that there are at least 2.6 million pharmacists and other pharmaceutical personnel worldwide. [ 3 ]
Pharmacists are healthcare professionals with specialized education and training who perform various roles to ensure optimal health outcomes for their patients through the quality use of medicines. Pharmacists may also be small business proprietors, owning the pharmacy in which they practice. Since pharmacists know about the mode of action of a particular drug, and its metabolism and physiological effects on the human body in great detail, they play an important role in optimization of drug treatment for an individual.
Pharmacists are represented internationally by the International Pharmaceutical Federation (FIP), an NGO linked with World Health Organization (WHO). They are represented at the national level by professional organisations such as the Royal Pharmaceutical Society in the UK, Pharmaceutical Society of Australia (PSA), Canadian Pharmacists Association (CPhA), Indian Pharmacist Association (IPA), Pakistan Pharmacists Association (PPA), American Pharmacists Association (APhA), and the Malaysian Pharmaceutical Society (MPS). [ 4 ]
In some cases, the representative body is also the registering body, which is responsible for the regulation and ethics of the profession.
In the United States, specializations in pharmacy practice recognized by the Board of Pharmacy Specialties include: cardiovascular, infectious disease , oncology , pharmacotherapy, nuclear, nutrition , and psychiatry . [ 5 ] The Commission for Certification in Geriatric Pharmacy certifies pharmacists in geriatric pharmacy practice. The American Board of Applied Toxicology certifies pharmacists and other medical professionals in applied toxicology .
Pharmacy technicians support the work of pharmacists and other health professionals by performing a variety of pharmacy-related functions, including dispensing prescription drugs and other medical devices to patients and instructing on their use. They may also perform administrative duties in pharmaceutical practice, such as reviewing prescription requests with physicians' offices and insurance companies to ensure correct medications are provided and payment is received.
Legislation requires the supervision of certain pharmacy technicians' activities by a pharmacist. The majority of pharmacy technicians work in community pharmacies . In hospital pharmacies, pharmacy technicians may be managed by other senior pharmacy technicians. In the UK, the role of the pharmacy technician in hospital pharmacy has grown, and responsibility has been passed on to them to manage the pharmacy department and specialized areas in pharmacy practice, allowing pharmacists the time to specialize in their expert field as medication consultants, spending more time working with patients and in research. Pharmacy technicians are registered with the General Pharmaceutical Council (GPhC). The GPhC is the regulator of pharmacists, pharmacy technicians, and pharmacy premises.
In the US, pharmacy technicians perform their duties under the supervision of pharmacists. Although they may perform, under supervision, most dispensing, compounding and other tasks, they are not generally allowed to perform the role of counseling patients on the proper use of their medications. Some states have a legally mandated pharmacist-to-pharmacy technician ratio .
Dispensing assistants are commonly referred to as "dispensers" and in community pharmacies perform largely the same tasks as a pharmacy technician. They work under the supervision of pharmacists and are involved in preparing (dispensing and labelling) medicines for provision to patients.
In the UK, this group of staff can sell certain medicines (including pharmacy only and general sales list medicines) over the counter. They cannot prepare prescription-only medicines for supply to patients.
The earliest known compilation of medicinal substances was the Sushruta Samhita , an Indian Ayurvedic treatise attributed to Sushruta in the 6th century BC. However, the earliest text as preserved dates to the 3rd or 4th century AD.
Many Sumerian (4th millennium BC – early 2nd millennium BC) cuneiform clay tablets record prescriptions for medicine. [ 6 ]
Ancient Egyptian pharmacological knowledge was recorded in various papyri such as the Ebers Papyrus of 1550 BC, and the Edwin Smith Papyrus of the 16th century BC.
In Ancient Greece , Diocles of Carystus (4th century BC) was one of several men studying the medicinal properties of plants. He wrote several treatises on the topic. [ 7 ] The Greek physician Pedanius Dioscorides is famous for writing a five-volume book in his native Greek Περί ύλης ιατρικής in the 1st century AD. The Latin translation De Materia Medica ( Concerning medical substances ) was used as a basis for many medieval texts and was built upon by many middle eastern scientists during the Islamic Golden Age , themselves deriving their knowledge from earlier Greek Byzantine medicine . [ 8 ]
Pharmacy in China dates at least to the earliest known Chinese manual, the Shennong Bencao Jing ( The Divine Farmer's Herb-Root Classic ), dating back to the 1st century AD. It was compiled during the Han dynasty and was attributed to the mythical Shennong . Earlier literature included lists of prescriptions for specific ailments, exemplified by a manuscript "Recipes for 52 Ailments", found in the Mawangdui , sealed in 168 BC.
In Japan, at the end of the Asuka period (538–710) and the early Nara period (710–794), the men who fulfilled roles similar to those of modern pharmacists were highly respected. The place of pharmacists in society was expressly defined in the Taihō Code (701) and re-stated in the Yōrō Code (718). Ranked positions in the pre- Heian Imperial court were established; and this organizational structure remained largely intact until the Meiji Restoration (1868). In this highly stable hierarchy, the pharmacists—and even pharmacist assistants—were assigned status superior to all others in health-related fields such as physicians and acupuncturists. In the Imperial household, the pharmacist was even ranked above the two personal physicians of the Emperor. [ 9 ]
There is a stone sign for a pharmacy shop with a tripod, a mortar, and a pestle opposite one for a doctor in the Arcadian Way in Ephesus near Kusadasi in Turkey. [ 10 ] The current Ephesus dates back to 400 BC and was the site of the Temple of Artemis, one of the seven wonders of the world.
In Baghdad the first pharmacies, or drug stores, were established in 754, [ 11 ] under the Abbasid Caliphate during the Islamic Golden Age . By the 9th century, these pharmacies were state-regulated. [ 12 ] [ unreliable source? ]
The advances made in the Middle East in botany and chemistry led medicine in medieval Islam substantially to develop pharmacology . Muhammad ibn Zakarīya Rāzi (Rhazes) (865–915), for instance, acted to promote the medical uses of chemical compounds. Abu al-Qasim al-Zahrawi (Abulcasis) (936–1013) pioneered the preparation of medicines by sublimation and distillation . His Liber servitoris is of particular interest, as it provides the reader with recipes and explains how to prepare the "simples" from which were compounded the complex drugs then generally used. Sabur Ibn Sahl (d 869), was, however, the first physician to record his findings in a pharmacopoeia , describing a large variety of drugs and remedies for ailments. Al-Biruni (973–1050) wrote one of the most valuable Islamic works on pharmacology, entitled Kitab al-Saydalah ( The Book of Drugs ), in which he detailed the properties of drugs and outlined the role of pharmacy and the functions and duties of the pharmacist. Avicenna , too, described no less than 700 preparations, their properties, modes of action, and their indications. He devoted in fact a whole volume to simple drugs in The Canon of Medicine . Of great impact were also the works by al-Maridini of Baghdad and Cairo , and Ibn al-Wafid (1008–1074), both of which were printed in Latin more than fifty times, appearing as De Medicinis universalibus et particularibus by ' Mesue ' the younger, and the Medicamentis simplicibus by ' Abenguefit '. Peter of Abano (1250–1316) translated and added a supplement to the work of al-Maridini under the title De Veneris . Al-Muwaffaq's contributions in the field are also pioneering. Living in the 10th century, he wrote The foundations of the true properties of Remedies , amongst others describing arsenious oxide , and being acquainted with silicic acid . He made clear distinction between sodium carbonate and potassium carbonate , and drew attention to the poisonous nature of copper compounds, especially copper vitriol , and also lead compounds. He also describes the distillation of sea-water for drinking. [ 13 ] [ 14 ]
In Europe , pharmacy-like shops began to appear during the 12th century. In 1240, emperor Frederic II issued a decree by which the physician's and the apothecary's professions were separated. [ 15 ]
There are pharmacies in Europe that have been in operation since medieval times. In Florence , Italy, the director of the museum in the former Santa Maria Novella pharmacy says that the pharmacy there dates back to 1221. [ 16 ] In Trier (Germany), the Löwen-Apotheke has been in operation since 1241, making it the oldest pharmacy in Europe in continuous operation. [ 17 ] In Dubrovnik (Croatia), a pharmacy that first opened in 1317 is located inside the Franciscan monastery: it is the 2nd oldest pharmacy in Europe that is still operating. [ 18 ] [ 19 ] In the Town Hall Square of Tallinn (Estonia), there is a pharmacy dating from at least 1422. [ citation needed ] The medieval Esteve Pharmacy , located in Llívia , a Catalan enclave close to Puigcerdà , is a museum: the building dates back to the 15th century and the museum keeps albarellos from the 16th and 17th centuries, old prescription books and antique drugs.
Pharmacists practice in a variety of areas including community pharmacies, infusion pharmacies, hospitals, clinics, insurance companies, medical communication companies, research facilities, pharmaceutical companies, extended care facilities, psychiatric hospitals, and regulatory agencies. Pharmacists themselves may have expertise in a medical specialty .
A pharmacy (also known as a chemist in Australia , New Zealand and the British Isles ; or drugstore in North America ; retail pharmacy in industry terminology; or apothecary , historically) is where most pharmacists practice the profession of pharmacy. It is the community pharmacy in which the dichotomy of the profession exists; health professionals who are also retailers.
Community pharmacies usually consist of a retail storefront with a dispensary, where medications are stored and dispensed. According to Sharif Kaf al-Ghazal, the opening of the first drugstores are recorded by Muslim pharmacists in Baghdad in 754 AD. [ 11 ] [ 20 ]
Pharmacies within hospitals differ considerably from community pharmacies. Some pharmacists in hospital pharmacies may have more complex clinical medication management issues, and pharmacists in community pharmacies often have more complex business and customer relations issues.
Because of the complexity of medications including specific indications, effectiveness of treatment regimens, safety of medications (i.e., drug interactions) and patient compliance issues (in the hospital and at home), many pharmacists practicing in hospitals gain more education and training after pharmacy school through a pharmacy practice residency, sometimes followed by another residency in a specific area. Those pharmacists are often referred to as clinical pharmacists and they often specialize in various disciplines of pharmacy.
For example, there are pharmacists who specialize in hematology/oncology, HIV/AIDS, infectious disease, critical care, emergency medicine , toxicology, nuclear pharmacy, pain management, psychiatry, anti-coagulation clinics, herbal medicine , neurology/epilepsy management, pediatrics, neonatal pharmacists and more.
Hospital pharmacies can often be found within the premises of the hospital. Hospital pharmacies usually stock a larger range of medications, including more specialized medications, than would be feasible in the community setting. Most hospital medications are unit-dose, or a single dose of medicine. Hospital pharmacists and trained pharmacy technicians compound sterile products for patients, including total parenteral nutrition (TPN) and other medications given intravenously. This is a complex process that requires adequate training of personnel, quality assurance of products, and adequate facilities.
Several hospital pharmacies have decided to outsource high-risk preparations and some other compounding functions to companies who specialize in compounding. The high cost of medications and drug-related technology and the potential impact of medications and pharmacy services on patient-care outcomes and patient safety require hospital pharmacies to perform at the highest level possible.
Pharmacists provide direct patient care services that optimize the use of medication and promote health, wellness, and disease prevention. [ 21 ] Clinical pharmacists care for patients in all health care settings, but the clinical pharmacy movement initially began inside hospitals and clinics . Clinical pharmacists often collaborate with physicians and other healthcare professionals to improve pharmaceutical care. Clinical pharmacists are now an integral part of the interdisciplinary approach to patient care. They often participate in patient care rounds for drug product selection. In the UK, clinical pharmacists can also prescribe some medications for patients on the National Health Service (NHS) or privately, after completing a non-medical prescribers course to become an Independent Prescriber. [ 22 ]
The clinical pharmacist's role involves creating a comprehensive drug therapy plan for patient-specific problems, identifying goals of therapy, and reviewing all prescribed medications prior to dispensing and administration to the patient. The review process often involves an evaluation of the appropriateness of drug therapy (e.g., drug choice, dose, route, frequency, and duration of therapy) and its efficacy. Research shows that pharmacist led strategies reduce errors related to medication use. [ 23 ] The pharmacist must also consider potential drug interactions, adverse drug reactions, and patient drug allergies while they design and initiate a drug therapy plan. [ 24 ]
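A small part of that review lends itself to illustration in code. The sketch below checks a prescribed medication list against a toy interaction table and a patient's recorded allergies; the interaction table, drug names and allergy list are invented for the example, and real clinical decision support relies on curated databases and clinical judgement rather than a hard-coded dictionary.

```python
# Hypothetical sketch of one step in a drug therapy review: screening a
# medication list for pairwise interactions and documented allergies.
INTERACTIONS = {
    frozenset({"warfarin", "ibuprofen"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "increased myopathy risk",
}

def review_medications(meds: list, allergies: set) -> list:
    """Return human-readable flags for a pharmacist to review."""
    flags = [f"allergy documented to {m}" for m in meds if m in allergies]
    for i, a in enumerate(meds):
        for b in meds[i + 1:]:
            issue = INTERACTIONS.get(frozenset({a, b}))
            if issue:
                flags.append(f"{a} + {b}: {issue}")
    return flags

print(review_medications(["warfarin", "ibuprofen", "amoxicillin"],
                         allergies={"amoxicillin"}))
# ['allergy documented to amoxicillin', 'warfarin + ibuprofen: increased bleeding risk']
```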
Since the emergence of modern clinical pharmacy, ambulatory care pharmacy practice has emerged as a unique pharmacy practice setting. Ambulatory care pharmacy is based primarily on pharmacotherapy services that a pharmacist provides in a clinic. Pharmacists in this setting often do not dispense drugs, but rather see patients in-office visits to manage chronic disease states.
In the U.S. federal health care system (including the VA, the Indian Health Service, and National Institute of Health (NIH) ) ambulatory care pharmacists are given full independent prescribing authority. In some states, such as North Carolina and New Mexico , these pharmacist clinicians are given collaborative prescriptive and diagnostic authority. [ 25 ] In 2011 the board of Pharmaceutical Specialties approved ambulatory care pharmacy practice as a separate board certification. The official designation for pharmacists who pass the ambulatory care pharmacy specialty certification exam will be Board Certified Ambulatory Care Pharmacist and these pharmacists will carry the initials BCACP. [ 26 ]
Compounding involves preparing drugs in forms that are different from the generic prescription standard. This may include altering the strength, ingredients, or dosage form. [ 27 ] Compounding is a way to create custom drugs for patients who may not be able to take the medication in its standard form, such as due to an allergy or difficulty swallowing. Compounding is necessary for these patients to still be able to properly get the prescriptions they need.
One area of compounding is preparing drugs in new dosage forms. For example, if a drug manufacturer only provides a drug as a tablet, a compounding pharmacist might make a medicated lollipop that contains the drug. Patients who have difficulty swallowing the tablet may prefer to suck the medicated lollipop instead.
Another form of compounding is by mixing different strengths (g, mg, mcg) of capsules or tablets to yield the desired amount of medication indicated by the physician , physician assistant , nurse practitioner , or clinical pharmacist practitioner . This form of compounding is found at community or hospital pharmacies or in-home administration therapy.
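The arithmetic behind combining strengths can be illustrated with a short sketch. Assuming hypothetical tablet strengths, the code below searches for a small combination of whole tablets that sums to a prescribed dose; actual compounding follows the prescriber's order and the pharmacy's procedures, not an automated search.

```python
# Hypothetical sketch: find counts of available tablet strengths (in mg) that
# sum exactly to a target dose, preferring the fewest total tablets.
from itertools import product
from typing import Optional

def tablet_combination(target_mg: float, strengths_mg: list,
                       max_each: int = 4) -> Optional[dict]:
    """Return counts of each strength summing to the target dose, if any."""
    best = None
    for counts in product(range(max_each + 1), repeat=len(strengths_mg)):
        total = sum(c * s for c, s in zip(counts, strengths_mg))
        if abs(total - target_mg) < 1e-9:
            if best is None or sum(counts) < sum(best):
                best = counts
    if best is None:
        return None
    return {s: c for s, c in zip(strengths_mg, best) if c}

# Hypothetical example: reach a 125 mg dose from 25 mg and 50 mg tablets.
print(tablet_combination(125, [25, 50]))  # {25: 1, 50: 2}
```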
Compounding pharmacies specialize in compounding, although many also dispense the same non-compounded drugs that patients can obtain from community pharmacies.
Consultant pharmacy practice focuses more on medication regimen review (i.e. "cognitive services") than on actual dispensing of drugs. Consultant pharmacists most typically work in nursing homes , but are increasingly branching into other institutions and non-institutional settings. [ 28 ] Traditionally [ where? ] consultant pharmacists were usually independent business owners, though in the United States many now work for a large pharmacy management company such as Omnicare , Kindred Healthcare or PharMerica . This trend may be gradually reversing [ citation needed ] as consultant pharmacists begin to work directly with patients, primarily because many elderly people are now taking numerous medications but continue to live outside of institutional settings. Some community pharmacies employ consultant pharmacists and/or provide consulting services.
The main principle of consultant pharmacy was developed by Hepler and Strand in 1990. [ 29 ] [ 30 ]
Veterinary pharmacies, sometimes called animal pharmacies , may fall in the category of hospital pharmacy, retail pharmacy or mail-order pharmacy. Veterinary pharmacies stock different varieties and different strengths of medications to fulfill the pharmaceutical needs of animals. Because the needs of animals, as well as the regulations on veterinary medicine , are often very different from those related to people, in some jurisdictions veterinary pharmacy may be kept separate from regular pharmacies.
Nuclear pharmacy focuses on preparing radioactive materials for diagnostic tests and for treating certain diseases. Nuclear pharmacists undergo additional training specific to handling radioactive materials, and unlike in community and hospital pharmacies, nuclear pharmacists typically do not interact directly with patients.
Military pharmacy is a different working environment from civilian practice, because military pharmacy technicians perform duties such as evaluating medication orders, preparing medication orders, and dispensing medications. This would be illegal in civilian pharmacies because these duties are required to be performed by a licensed registered pharmacist. [ 31 ] In the US military, state laws that prevent technicians from counseling patients or doing the final medication check prior to dispensing to patients (rather than a pharmacist solely responsible for these duties) do not apply.
Pharmacy informatics is the combination of pharmacy practice science and applied information science. [ 32 ] Pharmacy informaticists work in many practice areas of pharmacy, however, they may also work in information technology departments or for healthcare information technology vendor companies. As a practice area and specialist domain, pharmacy informatics is growing quickly to meet the needs of major national and international patient information projects and health system interoperability goals. Pharmacists in this area are trained to participate in medication management system development, deployment, and optimization.
Specialty pharmacies supply high-cost injectable, oral, infused, or inhaled medications that are used for chronic and complex disease states such as cancer, hepatitis, and rheumatoid arthritis. [ 33 ] Unlike a traditional community pharmacy where prescriptions for any common medication can be brought in and filled, specialty pharmacies carry novel medications that need to be properly stored, administered, carefully monitored, and clinically managed. [ 34 ] In addition to supplying these drugs, specialty pharmacies also provide lab monitoring, adherence counseling, and assist patients with cost-containment strategies needed to obtain their expensive specialty drugs. [ 35 ] In the US, it is currently the fastest-growing sector of the pharmaceutical industry with 19 of 28 newly Food and Drug Administration (FDA) approved medications in 2013 being specialty drugs. [ 36 ]
Due to the demand for clinicians who can properly manage these specific patient populations, the Specialty Pharmacy Certification Board has developed a new certification exam to certify specialty pharmacists. Along with the 100 questions computerized multiple-choice exam, pharmacists must also complete 3,000 hours of specialty pharmacy practice within the past three years as well as 30 hours of specialty pharmacist continuing education within the past two years. [ 37 ]
The pharmaceutical sciences are a group of interdisciplinary areas of study concerned with the design , manufacturing , action , delivery , and classification of drugs . They apply knowledge from chemistry ( inorganic , physical , biochemical and analytical ), biology ( anatomy , physiology , biochemistry , cell biology , and molecular biology ), epidemiology , statistics , chemometrics , mathematics , physics , and chemical engineering . [ 38 ]
The pharmaceutical sciences are further subdivided into several specific specialties , with four main branches:
As new discoveries advance and extend the pharmaceutical sciences, subspecialties continue to be added to this list. Importantly, as knowledge advances, boundaries between these specialty areas of pharmaceutical sciences are beginning to blur. Many fundamental concepts are common to all pharmaceutical sciences. These shared fundamental concepts further the understanding of their applicability to all aspects of pharmaceutical research and drug therapy .
Pharmacocybernetics (also known as pharma-cybernetics, cybernetic pharmacy, and cyber pharmacy) is an emerging field that describes the science of supporting drugs and medications use through the application and evaluation of informatics and internet technologies, so as to improve the pharmaceutical care of patients. [ 44 ]
The word pharmacy is derived from Old French farmacie "substance, such as a food or in the form of a medicine which has a laxative effect" from Medieval Latin pharmacia from Greek pharmakeia ( Ancient Greek : φαρμακεία ) "a medicine", which itself derives from pharmakon ( φάρμακον ), meaning "drug, poison , spell " [ 45 ] [ 46 ] [ a ] (which is etymologically related to pharmakos ).
Separation of prescribing and dispensing, also called dispensing separation, is a practice in medicine and pharmacy in which the physician who provides a medical prescription is independent from the pharmacist who provides the prescription drug .
In the Western world there are centuries of tradition for separating pharmacists from physicians. In Asian countries, it is traditional for physicians to also provide drugs.
In contemporary times, researchers and health policy analysts have more deeply considered these traditions and their effects. Advocates for separation and advocates for combining make similar claims for their conflicting perspectives, each saying that their preferred arrangement reduces conflict of interest in the healthcare industry , unnecessary health care , and costs, while the opposite causes those things. Research in various places reports mixed outcomes in different circumstances.
In 2022 the Organisation for Economic Co-operation and Development (OECD) proposed that pharmaceutical companies should be required to collect and destroy unused or expired medicines that they have put on the market, in order to reduce public health risks around the misuse of medicines obtained from waste bins, the development of antimicrobial-resistant bacteria from the discharge of antibiotics into environmental systems, and "economic losses" from wasted healthcare resources. Potentially harmful concentrations of pharmaceutical waste have been detected in more than a quarter of water samples taken from 258 rivers around the world. The OECD recommends that medicines should be collected separately from household waste and that "marketplaces and redistribution platforms for unused close-to-expiry-date medicines" should be set up. Such extended producer responsibility schemes are already running in France, Spain and Portugal. [ 48 ]
In the coming decades, pharmacists are expected to become more integral within the health care system . Rather than simply dispensing medication, pharmacists are increasingly expected to be compensated for their patient care skills. [ 49 ] In particular, Medication Therapy Management (MTM) includes the clinical services that pharmacists can provide for their patients. Such services include a thorough analysis of all medication ( prescription , non-prescription, and herbals) currently being taken by an individual. The result is a reconciliation of medication and patient education resulting in increased patient health outcomes and decreased costs to the health care system. [ 50 ] [ unreliable source? ]
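At its simplest, the reconciliation step described above is a structured comparison of what the patient is actually taking against what the record says. A minimal sketch with invented medication lists is shown below; in practice, reconciliation also considers dose, route, frequency, and the clinical reason for each discrepancy.

```python
# Hypothetical sketch of a basic medication reconciliation: compare the list
# the patient reports taking against the list on record and flag differences.
def reconcile(on_record: set, patient_reports: set) -> dict:
    return {
        "continued": on_record & patient_reports,
        "not_taken_but_on_record": on_record - patient_reports,
        "taken_but_not_on_record": patient_reports - on_record,
    }

result = reconcile(
    on_record={"lisinopril", "metformin", "atorvastatin"},
    patient_reports={"metformin", "atorvastatin", "st john's wort"},
)
for category, drugs in result.items():
    print(category, sorted(drugs))
```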
This shift has already commenced in some countries; for instance, pharmacists in Australia receive remuneration from the Australian Government for conducting comprehensive Home Medicines Reviews. In Canada, pharmacists in certain provinces have limited prescribing rights (as in Alberta and British Columbia) or are remunerated by their provincial government for expanded services such as medications reviews (Medschecks in Ontario). In the United Kingdom, pharmacists who undertake additional training can obtain prescribing rights. They are also paid by the government for medicine use reviews . In Scotland, the pharmacist can write prescriptions for Scottish registered patients for their regular medications, for the majority of drugs, except for controlled drugs, when the patient is unable to see their doctor, as could happen if they are away from home or the doctor is unavailable. In the United States, pharmaceutical care or clinical pharmacy has had an evolving influence on the practice of pharmacy. [ 51 ] Moreover, the Doctor of Pharmacy (Pharm. D.) degree is now required before entering practice and some pharmacists now complete one or two years of residency or fellowship training following graduation. In addition, consultant pharmacists , who traditionally operated primarily in nursing homes , are now expanding into direct consultation with patients, under the banner of "senior care pharmacy". [ 52 ]
In addition to patient care, pharmacies will be a focal point for medication adherence initiatives. There is evidence that integrated pharmacy-based initiatives significantly improve adherence for patients with chronic conditions. For example, a study available through the National Institutes of Health (NIH) reports that "pharmacy based interventions improved patients' medication adherence rates by 2.1 percent and increased physicians' initiation rates by 38 percent, compared to the control group". [ 53 ]
The symbols most commonly associated with pharmacy are the mortar and pestle (North America) and the ℞ ( medical prescription ) character, which is often written as "Rx" in typed text; the green cross in France , Argentina , the United Kingdom , Belgium , Ireland , Italy , Spain , and India ; the Bowl of Hygieia (only) often used in the Netherlands but may be seen combined with other symbols elsewhere. Other common symbols include conical measures , and (in the US) caduceuses , in their logos . A red stylized letter A is used in Germany and Austria (from Apotheke , the German word for pharmacy, from the same Greek root as the English word " apothecary "). The show globe was used in the US until the early 20th century; the Gaper in the Netherlands is increasingly rare. | https://en.wikipedia.org/wiki/Pharmacy |
Pharmacy automation involves the mechanical processes of handling and distributing medications. Any pharmacy task may be involved, including counting small objects (e.g., tablets , capsules ); measuring and mixing powders and liquids for compounding ; tracking and updating customer information in databases (e.g., personally identifiable information (PII), medical history , drug interaction risk detection); and inventory management . This article focuses on the changes that have taken place in the local, or community pharmacy since the 1960s.
Dispensing medications in a community pharmacy before the 1970s was a time-consuming operation. The pharmacist dispensed prescriptions in tablet or capsule form with a simple tray and spatula . Many new medications were developed by pharmaceutical manufacturers at an ever-increasing pace, and medications prices were rising steeply. A typical community pharmacist was working longer hours and often forced to hire staff to handle increased workloads which resulted in less time to focus on safety issues. These additional factors led to use of a machine to count medications. [ 1 ]
The original electronic portable digital tablet counting technology was invented in Manchester, England between 1967 and 1970 by the brothers John and Frank Kirby.
I had the original idea of how the machine would work and it was my patent, but it was a joint effort getting it to work in a saleable form. It was 3 years of very hard work. I had originally studied heavy electrical engineering before changing over to Medical School and qualifying as a Medical Doctor in 1968. In fact I was Senior House (Casualty) Officer (A&E or ER) in 1970 at North Manchester General Hospital when I filed the patent. I must have been the only hospital doctor in Britain with an oscilloscope, a soldering iron and a drawing board in his room in the Doctors' Residence. The housekeepers were bemused by all the wires. Frank originally trained as a Banker but quit to take a job with a local electronics firm during the development. He died in 1987, a terrible loss. [ Extract from personal communication received in March 2010 from John Kirby. ]
Frank and John Kirby and their associate Rodney Lester were pioneers in pharmacy automation and small-object counting technology. In 1967, the Kirbys invented a portable digital tablet counter to count tablets and capsules. [ citation needed ] With Lester they formed a limited company. In 1970, their invention was patented and put into production in Oldham, England . The tablet counter aided the pharmacy industry with time-consuming manual counting of drug prescriptions .
A counting machine consistently counted medications accurately and quickly. This aspect of pharmacy automation was quickly adopted, and innovations emerged every decade to aid the pharmacy industry to deliver medications quickly, safely, and economically. Modern pharmacies have many new options to improve their workflow by using the new technology, and can choose intelligently from the many options available. [ 2 ]
On 1 January 1971, commercial production of the first portable digital tablet counters in the world began. John Kirby had filed U.K. Patent number GB1358378(A) on 8 September 1970 [ 3 ] and U.S. patent number 3789194 on 9 August 1971. [ 4 ] These early electronic counters were designed to help pharmacies replace the common (but often inaccurate) practice of counting medications by hand.
In 1975, the digital technology was exported to America. In early 1980 a dedicated research , development and production facility was built in Oldham, England at a cost of £500,000.
Between 1982 and 1983, two separate development facilities had been created. In America, overseen by Rodney Lester; and in England, overseen by the Kirby brothers. In 1987, Frank Kirby died. In 1989, John Kirby moved his UK facility to Devon, England . [ 5 ]
A simple to operate machine had been developed to accurately and quickly count prescription medications. Technology improvements soon resulted in a more compact model. The price of such equipment in 1980 was around £1,300. This substantial investment in new technology was a major financial consideration , but the pharmacy community considered the use of a counting machine as a superior method compared to hand-counting medications. These early devices became known as tablet counter, capsule counter, pill counter, or drug counter.
The new counting technology replaced manual methods in many industries, such as vitamin and diet supplement manufacturing. Technicians needed a small, affordable device to count and bottle medications. In England and America, the 1980s and 1990s saw the development of new high-speed machines for counting and bottle filling. Like their pharmacy-based counterparts, these industrial units were designed to be fast and simple to operate, yet remain small and cost effective . [ 6 ]
In America, in the late 1990s/early 2000s, a new type of tablet counter appeared. It was simple to use, compact, inexpensive, and had good counting accuracy. At the turn of the millennium, technical advances allowed the design of counters with a software verification system: an onboard computer displayed photo images of medications to assist the pharmacist or pharmacy technician in verifying that the correct medication was being dispensed, and a database stored all prescriptions that were counted on the device. [ 7 ]
Between September 2005 and May 2007, American Capital made a major financial investment in Kirby Lester, which then relocated to a larger facility to expand its research and development capabilities. [ 8 ] This move added extra space for product research and development ( R&D ). It allowed the opportunity to develop new advanced technology products that met the pharmacy's needs for simple, accurate, and cost-effective ways to dispense prescriptions safely. [ 9 ]
Pictured here is an early American type of integrated counter and packaging device. This machine was a third generation step in the evolution of pharmacy automated devices. Later models held pre-counted containers of commonly-prescribed medications. [ citation needed ]
In the EU member states, legislation was introduced in 1998 which had a major effect on UK pharmacy operations. It effectively prohibited the use of tablet counters for counting and dispensing bulk-packaged tablets. Both usage and sales of the machines in the UK declined rapidly as a result of the introduction of blister packaging for medicines. [ 10 ]
The tablet counter has become standard in more than 30,000 sites in 35 countries (as of 2010), including many non-pharmacy sites, such as manufacturing facilities that use a counting machine as a check for small items. [ 11 ]
During the 1990s through 2012, numerous new pharmacy automation products came to market. During this timeframe, counting technologies, robotics, workflow management software, and interactive voice recognition (IVR) systems for retail (both chain and independent), outpatient, government, and closed-door pharmacies (mail order and central fill) were all introduced. Additionally, the concept of scalability - of migrating from an entry-level product to the next level of automation (e.g., counting technology to robotics) - was introduced and subsequently launched a new product line in 1997.
Pharmacists everywhere are making the switch to automation for its increased speed, greater accuracy, and better security. [ 12 ] As the industry evolves and customer expectations grow, automation is becoming less of a luxury and more of a necessity. Especially for independent pharmacies , automation is now a means of keeping up with the competition of large chain pharmacies.
Constant developments in technology make the dispensing of prescription medications safer, more accurate and more efficient.
In America, in 2008, "next-generation" counting and verification systems were introduced. Based on the counting technology employed in preceding models, later machines included the ability to help the pharmacy operate more effectively, being equipped with a new computer interface to a pharmacy management system along with workflow and inventory software. They also included "checks and balances" to ensure the technician and pharmacist were dispensing the correct medication for each patient, something that is important to report correctly when dealing with controlled substances like narcotics. This was a step toward verifying 100% of the prescriptions dispensed by pharmacy staff.
In America, in 2009, further advanced counters were designed that included the ability to dispense hands-free – a feature that many operators had desired. This allowed pharmacies to automate their most commonly dispensed medications via calibrated cassettes. Thirty of a pharmacy's common medications would now be dispensed automatically. Another new model doubled that throughput via an enclosed robotic mechanism. Robotics had been employed in pharmacies since the mid-1990s, but later machines dispense and label filled patient vials in a comparatively tiny space (about nine square feet of floor space). These newer technologies allowed pharmacy staff to confidently dispense hundreds of prescriptions per day and still be able to manage the many functions of a busy community pharmacy. This would increase the number of patients that are able to be served each day.
The primary purpose of a tablet counter (also known as a pill counter or drug counter) is to accurately count prescription medications in tablet or capsule form, supporting patient medication safety while increasing efficiency and reducing costs for the typical pharmacy. Newer versions of this counting device include advanced software that further improves safety for the patient receiving the prescription, ensuring that the pharmacy staff dispense the right medication at the correct dosage strength for the right patient (see also medication safety ). Today's pharmacy industry recognizes the need for heightened vigilance against medication errors across the entire spectrum. A wealth of research has been conducted regarding the prevalence of medication errors and the ability of technology to decrease or eliminate such errors (see the March 2003 landmark study by Auburn University's Center for Pharmacy Operations and Designs). [ 13 ]
Prescription dispensing safety and accuracy in the pharmacy are an essential part of ensuring the right patient gets the right medication at the right dosage. A trend in pharmacy is to place a greater reliance on technology and pharmacy automation to minimize the chance of human error and speed up the process of dispensing. Pharmacy management generally sees technology as a solution to industry challenges like staffing shortages, prescription volume increases, long and hectic work hours, and complicated insurance reimbursement procedures. Pharmacies employ advanced technologies that help to handle an ever-escalating number of prescriptions, while making dispensing safer and more precise.
Perhaps the most controversial debate surrounding the use of pharmacy automated tablet counters is the impact of cross-contamination. Automated tablet-counting machines (sometimes better known as "pill counters") are designed to sort, count, and dispense drugs at high speeds for quick counting transactions. When more than one drug is exposed to the same surface, leaving seemingly unnoticeable traces of residues, the issue of cross-contamination arises. While one tablet is unlikely to leave enough residues to cause harm to a future patient, the risk of contamination increases sevenfold as the machine processes thousands of varying pills throughout the course of a day. A typical pharmacy may on average process under 100 scripts per day, while other larger dispensaries can accommodate a few hundred scripts in that amount of time.
Thoroughly cleaning pharmacy automated tablet counters is recommended to prevent the chance of cross-contamination. This method is widely preached by manufacturers of these machines, but is not always easily followed. Performing an efficient cleaning of an automated tablet counter significantly increases the amount of time spent on counts by users. Many critics argue that these problems can easily be prevented by taking the proper precautions and following all cleaning procedures, but the increase in time spent makes it hard to justify such an investment. The National Institute for Occupational Safety and Health (NIOSH) considers a drug to be hazardous if it exhibits one or more of the following characteristics in humans or animals: carcinogenicity, teratogenicity or developmental toxicity, reproductive toxicity, organ toxicity at low doses, genotoxicity, or structure and toxicity profiles of new drugs that mimic existing hazardous drugs. Specialty pharmacies that stock and dispense medications on the NIOSH list of Hazardous Drugs must follow strict standards. Community pharmacies typically handle some Hazardous Drugs; therefore, using pharmacy automation for Hazardous Drugs generally follows this guideline: pharmacy staff use an exception tray and spatula to count any Hazardous Drug, and decontaminate the tray and spatula immediately following. Pharmacy robots should not store any Hazardous Drugs for chance of pill-grinding and dust-generation. All other medications dispensed in the pharmacy that are not Hazardous Drugs can be counted with pharmacy automation safely if the manufacturer's cleaning directions are followed.
Various companies are currently developing a range of remote tablet counters, verification systems and pharmacy automation components to improve the accuracy, safety, speed and efficiency of medication dispensing. These products are used in retail, mail order, hospital outpatient and specialty pharmacies, as well as in industrial settings such as manufacturing and component factories. These advanced systems will continue to provide accurate counting without the need for adjustment or calibration when counting in different production environments.
Pictured here is a modern (2010) remote controlled tablet hopper mechanism for use with bulk packaged individual tablets or capsules . In the UK these items are more suited to Hospital Pharmacies, where the issue of E.U. blister packaging regulations relating to medicine packaging does not apply. Also pictured is another version of an automated machine that does not allow unauthorised interference to the internal store of drugs. (A useful security feature in a large pharmacy with public access.)
The transient or permanent removal of the solid oral form from its original packaging environment to enter a repackaging process, sometimes automated, is likely to play a primary role in the pharmaceutical controversy in some countries. The solid oral dose must, however, be repackaged in materials of defined quality. Considering these data, a review of the literature on the conditions for repackaged drug stability according to different international guidelines is presented by F Lagrange. [ 14 ] | https://en.wikipedia.org/wiki/Pharmacy_automation
Pharming , a portmanteau of farming and pharmaceutical , refers to the use of genetic engineering to insert genes that code for useful pharmaceuticals into host animals or plants that would otherwise not express those genes, thus creating a genetically modified organism (GMO). [ 1 ] [ 2 ] Pharming is also known as molecular farming , molecular pharming , [ 3 ] or biopharming . [ 4 ]
The products of pharming are recombinant proteins or their metabolic products. Recombinant proteins are most commonly produced using bacteria or yeast in a bioreactor , but pharming offers the advantage to the producer that it does not require expensive infrastructure, and production capacity can be quickly scaled to meet demand, at greatly reduced cost. [ 5 ]
The first recombinant plant-derived protein (PDP) was human serum albumin , initially produced in 1990 in transgenic tobacco and potato plants. [ 6 ] Open field growing trials of these crops began in the United States in 1992 and have taken place every year since. While the United States Department of Agriculture has approved planting of pharma crops in every state, most testing has taken place in Hawaii, Nebraska, Iowa, and Wisconsin. [ 7 ]
In the early 2000s, the pharming industry was robust. Proof of concept has been established for the production of many therapeutic proteins , including antibodies , blood products , cytokines , growth factors , hormones , recombinant enzymes and human and veterinary vaccines . [ 8 ] By 2003 several PDP products for the treatment of human diseases were under development by nearly 200 biotech companies, including recombinant gastric lipase for the treatment of cystic fibrosis , and antibodies for the prevention of dental caries and the treatment of non-Hodgkin's lymphoma . [ 9 ]
However, in late 2002, just as ProdiGene was ramping up production of trypsin for commercial launch [ 10 ] it was discovered that volunteer plants (left over from the prior harvest) of one of their GM corn products were harvested with the conventional soybean crop later planted in that field. [ 11 ] [ unreliable source? ] ProdiGene was fined $250,000 and ordered by the USDA to pay over $3 million in cleanup costs. This raised a furor and set the pharming field back, dramatically. [ 5 ] Many companies went bankrupt as companies faced difficulties getting permits for field trials and investors fled. [ 5 ] In reaction, APHIS introduced more strict regulations for pharming field trials in the US in 2003. [ 12 ] In 2005, Anheuser-Busch threatened to boycott rice grown in Missouri because of plans by Ventria Bioscience to grow pharm rice in the state. A compromise was reached, but Ventria withdrew its permit to plant in Missouri due to unrelated circumstances.
The industry has slowly recovered, by focusing on pharming in simple plants grown in bioreactors and on growing GM crops in greenhouses. [ 13 ] Some companies and academic groups have continued with open-field trials of GM crops that produce drugs. In 2006 Dow AgroSciences received USDA approval to market a vaccine for poultry against Newcastle disease , produced in plant cell culture – the first plant-produced vaccine approved in the U.S. [ 14 ] [ 15 ]
Milk is presently the most mature system to produce recombinant proteins from transgenic organisms. Blood, egg white, seminal plasma , and urine are other theoretically possible systems, but all have drawbacks. Blood, for instance, as of 2012 cannot store high levels of stable recombinant proteins, and biologically active proteins in blood may alter the health of the animals. [ 16 ] Expression in the milk of a mammal, such as a cow, sheep, or goat, is a common application, as milk production is plentiful and purification from milk is relatively easy. Hamsters and rabbits have also been used in preliminary studies because of their faster breeding.
One approach to this technology is the creation of a transgenic mammal that can produce the biopharmaceutical in its milk (or blood or urine). Once an animal is produced, typically using the pronuclear microinjection method, it becomes efficacious to use cloning technology to create additional offspring that carry the favorable modified genome. [ 17 ] In February 2009 the US FDA granted marketing approval for the first drug to be produced in genetically modified livestock. [ 18 ] The drug is called ATryn , which is antithrombin protein purified from the milk of genetically modified goats . Marketing permission was granted by the European Medicines Agency in August 2006. [ 19 ]
As indicated above, some mammals typically used for food production (such as goats, sheep, pigs, and cows) have been modified to produce non-food products, a practice sometimes called pharming. Use of genetically modified goats has been approved by the FDA and EMA to produce ATryn , i.e. recombinant antithrombin , an anticoagulant protein drug . [ 20 ] These products "produced by turning animals into drug-manufacturing 'machines' by genetically modifying them" are sometimes termed biopharmaceuticals .
The patentability of such biopharmaceuticals and their process of manufacture is uncertain. Probably, the biopharmaceuticals themselves so made are unpatentable, assuming that they are chemically identical to the preexisting drugs that they imitate. Several 19th century United States Supreme Court decisions hold that a previously known natural product manufactured by artificial means cannot be patented. [ 21 ] An argument can be made for the patentability of the process for manufacturing a biopharmaceutical, however, because genetically modifying animals so that they will produce the drug is dissimilar to previous methods of manufacture; moreover, one Supreme Court decision seems to hold open that possibility. [ 22 ]
On the other hand, it has been suggested that the recent Supreme Court decision in Mayo v. Prometheus [ 23 ] may create a problem in that, in accordance with the ruling in that case, "it may be said that such and such genes manufacture this protein in the same way they always did in a mammal, they produce the same product, and the genetic modification technology used is conventional, so that the steps of the process 'add nothing to the laws of nature that is not already present.'" [ 24 ] If the argument prevailed in court, the process would also be ineligible for patent protection. This issue has not yet been decided in the courts.
Plant-made pharmaceuticals (PMPs), also referred to as pharming, is a sub-sector of the biotechnology industry that involves the process of genetically engineering plants so that they can produce certain types of therapeutically important proteins and associated molecules such as peptides and secondary metabolites. The proteins and molecules can then be harvested and used to produce pharmaceuticals. [ 25 ]
Arabidopsis is often used as a model organism to study gene expression in plants, while actual production may be carried out in maize , rice , potatoes , tobacco , flax or safflower . [ 26 ] Tobacco has been a highly popular choice of organism for the expression of transgenes, as it is easily transformed, produces abundant tissues, and survives well in vitro and in greenhouses. [ 27 ] The advantage of rice and flax is that they are self-pollinating, and thus gene flow issues (see below) are avoided. However, human error could still result in modified crops entering the food supply. Using a minor crop such as safflower or tobacco avoids the greater political pressures and risk to the food supply involved with using staple crops such as beans or rice. Expression of proteins in plant cell or hairy root cultures also minimizes risk of gene transfer, but at a higher cost of production. Sterile hybrids may also be used for the bioconfinement of transgenic plants, although stable lines cannot be established. [ 28 ] Grain crops are sometimes chosen for pharming because protein products targeted to the endosperm of cereals have been shown to have high heat stability. This characteristic makes them an appealing target for the production of edible vaccines , as viral coat proteins stored in grains do not require cold storage the way many vaccines currently do. Maintaining a temperature controlled supply chain of vaccines is often difficult when delivering vaccines to developing countries. [ 29 ]
Most commonly, plant transformation is carried out using Agrobacterium tumefaciens . The protein of interest is often expressed under the control of the cauliflower mosaic virus 35S promoter ( CaMV35S ), a powerful constitutive promoter for driving expression in plants. [ 30 ] Localization signals may be attached to the protein of interest to cause accumulation to occur in a specific sub-cellular location, such as chloroplasts or vacuoles. This is done in order to improve yields, simplify purification, or so that the protein folds properly. [ 31 ] [ 32 ] Recently, the inclusion of antisense genes in expression cassettes has been shown to have potential for improving the plant pharming process. Researchers in Japan transformed rice with an antisense SPK gene, which disrupts starch accumulation in rice seeds, so that products would accumulate in a watery sap that is easier to purify. [ 33 ]
Recently, several non-crop plants such as the duckweed Lemna minor or the moss Physcomitrella patens have been shown to be useful for the production of biopharmaceuticals. These frugal organisms can be cultivated in bioreactors (as opposed to being grown in fields), secrete the transformed proteins into the growth medium and, thus, substantially reduce the burden of protein purification in preparing recombinant proteins for medical use. [ 34 ] [ 35 ] [ 36 ] In addition, both species can be engineered to cause secretion of proteins with human patterns of glycosylation , an improvement over conventional plant gene-expression systems. [ 37 ] [ 38 ] Biolex Therapeutics developed a duckweed-based expression platform; it sold the business to Synthon and declared bankruptcy in 2012. [ citation needed ]
Additionally, an Israeli company, Protalix, has developed a method to produce therapeutics in cultured transgenic carrot or tobacco cells. [ 39 ] Protalix and its partner, Pfizer, received FDA approval to market its drug, taliglucerase alfa (Elelyso), as a treatment for Gaucher's disease , in 2012. [ 40 ]
The regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the development and release of genetically modified crops . There are differences in the regulation of GM crops – including those used for pharming – between countries, with some of the most marked differences occurring between the USA and Europe. Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety.
There are controversies around GMOs generally on several levels, including whether making them is ethical; issues concerning intellectual property and market dynamics; environmental effects of GM crops; and GM crops' role in industrial agriculture more generally. There are also specific controversies around pharming.
Plants do not carry pathogens that might be dangerous to human health . Additionally, on the level of pharmacologically active proteins , there are no proteins in plants that are similar to human proteins. On the other hand, plants are still sufficiently closely related to animals and humans that they are able to correctly process and configure both animal and human proteins. Their seeds and fruits also provide sterile packaging containers for the valuable therapeutics and guarantee a certain storage life. [ 41 ]
Global demand for pharmaceuticals is at unprecedented levels. Expanding the existing microbial systems, although feasible for some therapeutic products, is not a satisfactory option on several grounds. [ 8 ] Many proteins of interest are too complex to be made by microbial systems or by protein synthesis . [ 6 ] [ 41 ] These proteins are currently being produced in animal cell cultures , but the resulting product is often prohibitively expensive for many patients. For these reasons, science has been exploring other options for producing proteins of therapeutic value. [ 2 ] [ 8 ] [ 15 ]
These pharmaceutical crops could become extremely beneficial in developing countries. The World Health Organization estimates that nearly 3 million people die each year from vaccine preventable disease, mostly in Africa. Diseases such as measles and hepatitis lead to deaths in countries where the people cannot afford the high costs of vaccines, but pharm crops could help solve this problem. [ 42 ]
While molecular farming is one application of genetic engineering , there are concerns that are unique to it. In the case of genetically modified (GM) foods, concerns focus on the safety of the food for human consumption . In response, it has been argued that the genes that enhance a crop in some way, such as drought resistance or pesticide resistance , are not believed to affect the food itself. Other GM foods in development, such as fruits designed to ripen faster or grow larger, are believed not to affect humans any differently from non-GM varieties. [ 2 ] [ 15 ] [ 41 ] [ 43 ]
In contrast, molecular farming is not intended for crops destined for the food chain . It produces plants that contain physiologically active compounds that accumulate in the plant’s tissues. Considerable attention is focused, therefore, on the restraint and caution necessary to protect both consumer health and environmental biodiversity . [ 2 ]
The fact that the plants are used to produce drugs alarms activists . They worry that once production begins, the altered plants might find their way into the food supply or cross-pollinate with conventional, non-GM crops. [ 43 ] These concerns have historical validation from the ProdiGene incident, and from the StarLink incident, in which GMO corn accidentally ended up in commercial food products. Activists also are concerned about the power of business. According to a recent report by the Canadian Food Inspection Agency , U.S. demand alone for biotech pharmaceuticals is expanding at 13 percent annually and was expected to reach a market value of $28.6 billion in 2004. [ 43 ] Pharming is expected to be worth $100 billion globally by 2020. [ 44 ]
Please note that this list is by no means exhaustive.
Projects known to be abandoned | https://en.wikipedia.org/wiki/Pharming_(genetics) |
The pharyngeal artery is a branch of the ascending pharyngeal artery . The pharyngeal artery passes inferiorly between the superior margin of the superior pharyngeal constrictor muscle and the levator veli palatini muscle . It issues branches to the constrictor muscles of the pharynx , the stylopharyngeus muscle , the pharyngotympanic tube , and the palatine tonsil ; a palatine branch may sometimes be present, replacing the ascending palatine branch of the facial artery. [ 1 ]
| https://en.wikipedia.org/wiki/Pharyngeal_artery
Pharyngeal aspiration is the introduction of a substance into the pharynx and its subsequent aspiration into the lungs. It is used to test the respiratory toxicity of a substance in animal testing . It began to be used in the late 1990s. [ 1 ] Pharyngeal aspiration is widely used to study the toxicity of a wide variety of substances, including nanomaterials such as carbon nanotubes . [ 2 ]
Pharyngeal aspiration has benefits over the alternative methods of inhalation and intratracheal instillation , the introduction of the substance directly into the trachea . Inhalation studies have the disadvantages that they are expensive and technically difficult, the dose and location of the substance has poor reproducibility, they require large amounts of material, and they potentially allow exposure to laboratory workers and to the skin of laboratory animals. Intratracheal instillation overcomes some of these difficulties, but because a needle or tube is needed to access the trachea, it remains technically challenging and causes trauma to the animal, which can be a confounding factor . It also results in a less uniform distribution of the substance than inhalation, and bypasses effects from the upper respiratory tract . [ 1 ] [ 3 ]
In pharyngeal aspiration, the substance is placed in the pharynx, which is higher in the respiratory tract, avoiding the major source of technical difficulty and trauma to the animal. [ 1 ] The deposition pattern of pharyngeal aspiration is also more dispersed than that of intratracheal instillation, making it more similar to inhalation, and the lung responses are qualitatively similar. [ 2 ] Nevertheless, pharyngeal aspiration still leads to more particle agglomeration than inhalation, making its effects less potent. [ 4 ]
Pharyngeal aspiration is often performed on mice [ 1 ] and rats . [ 5 ] Prior to introduction of the substance, the animal is anesthetized and its tongue extended, preventing the animal from swallowing the material and allowing it to be aspirated into the lungs over the course of at least two deep breaths. A liquid suspension of particles in saline solution is usually used, in a typical volume of 50 μL. [ 1 ] Sometimes the substance is introduced into the larynx instead of the pharynx to avoid contamination from food particles and other contaminants present in the mouth. [ 5 ] | https://en.wikipedia.org/wiki/Pharyngeal_aspiration
The pharyngula is a stage in the embryonic development of vertebrates. [ 1 ] At this stage, the embryos of all vertebrates are similar, having developed features typical of vertebrates, such as the beginning of a spinal cord. Named by William Ballard , [ 2 ] the pharyngula stage follows the blastula , gastrula and neurula stages.
At the pharyngula stage, all vertebrate embryos show remarkable similarities, i.e., it is a " phylotypic stage " of the sub- phylum , [ 3 ] containing the following features:
The branchial grooves are matched on the inside by a series of paired gill pouches . In fish, the pouches and grooves eventually meet and form the gill slits, which allow water to pass from the pharynx over the gills and out the body.
In the other vertebrates, the grooves and pouches disappear. In humans, the chief trace of their existence is the eustachian tube and auditory canal which (interrupted only by the eardrum) connect the pharynx with the outside of the head.
The existence of a common pharyngula stage for vertebrates was first proposed by German biologist Ernst Haeckel (1834–1919) in 1874. [ 4 ]
The observation of the conservation of animal morphology during the embryonic phylotypic period, where there is maximal similarity between the species within each animal phylum, has led to the proposition that embryogenesis diverges more extensively in the early and late stages than the middle stage, and is known as the hourglass model. [ 5 ] Comparative genomic studies suggest that the phylotypic stage is the maximally conserved stage during embryogenesis. [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] | https://en.wikipedia.org/wiki/Pharyngula |
In chemistry , a phase-transfer catalyst or PTC is a catalyst that facilitates the transition of a reactant from one phase into another phase where reaction occurs. Phase-transfer catalysis is a special form of catalysis and can act through homogeneous catalysis or heterogeneous catalysis methods depending on the catalyst used. Ionic reactants are often soluble in an aqueous phase but insoluble in an organic phase in the absence of the phase-transfer catalyst. The catalyst functions like a detergent for solubilizing the salts into the organic phase. Phase-transfer catalysis refers to the acceleration of the reaction upon the addition of the phase-transfer catalyst. PTC is widely exploited industrially. [ 1 ] Polyesters for example are prepared from acyl chlorides and bisphenol-A . Phosphothioate -based pesticides are generated by PTC-catalyzed alkylation of phosphothioates.
In ideal cases, PTC can be fast and efficient, minimizing the need for expensive or dangerous solvents and simplifying purification. [ 2 ] Phase-transfer catalysts are especially useful in green chemistry : by allowing the use of water, the need for organic solvents is lowered. [ 3 ] [ 4 ]
Phase-boundary catalysis (PBC) is a type of heterogeneous catalytic system which facilitates the chemical reaction of a particular chemical component in an immiscible phase on a catalytic active site located at a phase boundary . The chemical component is soluble in one phase but insoluble in the other. The catalyst for PBC is designed so that the external part of the zeolite is hydrophobic while the interior is usually hydrophilic , notwithstanding the polar nature of some reactants. [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] In this sense, the medium environment in this system is close to that of an enzyme . The major difference between this system and an enzyme is lattice flexibility: the lattice of the zeolite is rigid, whereas the enzyme is flexible.
Phase-transfer catalysts for anionic reactants are often quaternary ammonium salts . Commercially important catalysts include benzyltriethylammonium chloride, methyltricaprylammonium chloride and methyltributylammonium chloride. Organic phosphonium salts are also used, e.g., hexadecyltributylphosphonium bromide. The phosphonium salts tolerate higher temperatures.
An alternative to the use of "quat salts" is to convert alkali metal cations into hydrophobic cations. Crown ethers are used for this purpose on the laboratory scale. Polyethylene glycols and their amine derivatives are common in practical applications. One such catalyst is tris(2-(2-methoxyethoxy)ethyl)amine . These ligands encapsulate alkali metal cations (typically Na + and K + ), affording lipophilic cations. Polyethers have a hydrophilic "interior" containing the ion and a hydrophobic exterior.
Chiral phase-transfer catalysts have also been demonstrated. [ 10 ] Asymmetric alkylations are catalyzed by chiral quaternary ammonium salts derived from cinchona alkaloids . [ 11 ]
A variety of functionalized catalysts have been evaluated for PTC. One example is the Janus interphase catalyst, applicable to organic reactions on the interface of two phases via the formation of Pickering emulsion. [ 12 ]
Quaternary ammonium cations degrade by Hofmann degradation to amines, especially at the higher temperatures preferred by process chemists. The resulting amines can be difficult to remove from the product. Phosphonium salts are unstable toward base, degrading to phosphine oxide . [ 1 ]
For example, the nucleophilic substitution reaction of an aqueous sodium cyanide solution with an ethereal solution of 1-bromooctane does not readily occur. The 1-bromooctane is poorly soluble in the aqueous cyanide solution, and the sodium cyanide does not dissolve well in the ether. Upon the addition of small amounts of hexadecyltributylphosphonium bromide, a rapid reaction ensues to give nonanenitrile:
Cyanide ions are "ferried" from the aqueous phase into the organic phase by the quaternary phosphonium cation. [ 13 ]
Subsequent work demonstrated that many such reactions can be performed rapidly at around room temperature using catalysts such as tetra-n-butylammonium bromide and methyltrioctylammonium chloride in benzene/water systems. [ 14 ]
Phase-boundary catalytic (PBC) systems can be contrasted with conventional catalytic systems. PBC is primarily applicable to reactions at the interface of an aqueous phase and an organic phase. In these cases, an approach such as PBC is needed due to the immiscibility of aqueous phases with most organic substrates. In PBC, the catalyst acts at the interface between the aqueous and organic phases. The reaction medium of phase-boundary catalysis systems for the catalytic reaction of immiscible aqueous and organic phases consists of three phases: an organic liquid phase containing most of the substrate, an aqueous liquid phase containing most of the aqueous-phase substrate, and the solid catalyst.
In the case of a conventional catalytic system:
In some systems, without vigorous stirring, no reactivity of the catalyst is observed in a conventional catalytic system. [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] Stirring and mass transfer from the organic to the aqueous phase and vice versa are required for a conventional catalytic system. Conversely, in PBC, stirring is not required because mass transfer is not the rate-determining step in this catalytic system. It has already been demonstrated that this system works for alkene epoxidation without stirring or the addition of a co-solvent to drive liquid–liquid phase transfer. [ 5 ] [ 6 ] [ 7 ] The active sites located on the external surface of the zeolite particles were dominantly effective for the observed phase-boundary catalytic system. [ 8 ] [ 15 ]
A modified zeolite, on which the external surface was partly covered with alkylsilane (called a phase-boundary catalyst), was prepared in two steps. [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] First, titanium dioxide made from titanium isopropoxide was impregnated into NaY zeolite powder to give sample W-Ti-NaY. In the second step, alkylsilane from n-octadecyltrichlorosilane (OTS) was impregnated into the W-Ti-NaY powder containing water. Due to the hydrophilicity of the W-Ti-NaY surface, addition of a small amount of water led to aggregation owing to the capillary force of water between particles. Under these conditions, it is expected that only the outer surface of the aggregates, in contact with the organic phase, can be modified with OTS, and indeed almost all of the particles were located at the phase boundary when added to an immiscible water–organic solvent (W/O) mixture. The partly modified sample is denoted w/o-Ti-NaY. Fully modified Ti-NaY (o-Ti-NaY), prepared without the addition of water in the above second step, is readily suspended in an organic solvent as expected. | https://en.wikipedia.org/wiki/Phase-boundary_catalysis
The phase-change incubator is a low-cost, low-maintenance incubator that tests for microorganisms in water supplies. It uses small balls containing a chemical compound that, when heated and then kept insulated, will stay at 37 °C (approx. 99 °F) for 24 hours. This allows cultures to be tested without the need for a laboratory or an expensive portable incubator . Thus it is particularly useful for poor or remote communities. The phase-change incubator was developed in the late 1990s by Amy Smith , when she was a graduate student at MIT . Smith has also started a non-profit organization called A Drop in the Bucket to distribute the incubators and to train people on how to use them to test water quality. Her “Test Water Cheap” system could be used at remote locations to test for bacteria such as E.coli. [ 1 ]
Embrace , an organization from Stanford University, is applying a similar concept to design low-cost incubators for premature and low-birth-weight babies in developing countries.
| https://en.wikipedia.org/wiki/Phase-change_incubator
A phase-change material ( PCM ) is a substance which releases/absorbs sufficient energy at a phase transition to provide useful heating or cooling. Generally the transition will be from one of the first two fundamental states of matter - solid and liquid - to the other. The phase transition may also be between non-classical states of matter, such as between crystal conformations, where the material goes from one crystalline structure to another, which may be a higher or lower energy state.
The energy released/absorbed by the phase transition from solid to liquid, or vice versa (the heat of fusion), is generally much higher than the sensible heat . Ice, for example, requires 333.55 J/g to melt, but liquid water then rises one degree in temperature with the addition of just 4.18 J/g. Water/ice is therefore a very useful phase change material and has been used to store winter cold to cool buildings in summer since at least the time of the Achaemenid Empire .
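As a quick illustration of these figures, here is a minimal Python sketch; the constants are simply the values quoted above, and the 10 K temperature swing is an assumed illustrative allowance:

```python
# Sketch: compare the latent heat of fusion of ice with the sensible heat of
# liquid water, using the figures quoted above (in joules per gram).
HEAT_OF_FUSION_ICE = 333.55     # J/g to melt ice at 0 degrees C
SPECIFIC_HEAT_WATER = 4.18      # J/(g*K) to warm liquid water by one kelvin

mass_g = 1000.0                 # 1 kg of material
swing_K = 10.0                  # an assumed 10 K temperature swing

latent_J = mass_g * HEAT_OF_FUSION_ICE
sensible_J = mass_g * SPECIFIC_HEAT_WATER * swing_K

print(f"Melting 1 kg of ice stores           {latent_J / 1000:.1f} kJ")
print(f"Warming 1 kg of water by 10 K stores {sensible_J / 1000:.1f} kJ")
print(f"Ratio over a 10 K swing: {latent_J / sensible_J:.1f}x")
# Roughly 333.6 kJ vs 41.8 kJ, i.e. about 8 times more energy in the phase
# change alone than in a 10 K temperature swing of liquid water.
```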
By melting and solidifying at the phase-change temperature (PCT), a PCM is capable of storing and releasing large amounts of energy compared to sensible heat storage. Heat is absorbed or released when the material changes from solid to liquid and vice versa or when the internal structure of the material changes; PCMs are accordingly referred to as latent heat storage (LHS) materials.
There are two principal classes of phase-change material: organic (carbon-containing) materials derived either from petroleum, from plants or from animals; and salt hydrates, which generally either use natural salts from the sea or from mineral deposits or are by-products of other processes. A third class is solid to solid phase change.
PCMs are used in many different commercial applications where energy storage and/or stable temperatures are required, including, among others, heating pads, cooling for telephone switching boxes, and clothing.
By far the biggest potential market is for building heating and cooling. In this application area, PCMs hold potential in light of the progressive reduction in the cost of renewable electricity, coupled with the intermittent nature of such electricity. This can result in a mismatch between peak demand and availability of supply. In North America, China, Japan, Australia, Southern Europe and other developed countries with hot summers, peak supply is at midday while peak demand is from around 17:00 to 20:00. [ citation needed ] This creates opportunities for thermal storage media.
Solid-liquid phase-change materials are usually encapsulated for installation in the end application, to be contained in the liquid state. In some applications, especially when incorporation to textiles is required, phase change materials are micro-encapsulated . Micro-encapsulation allows the material to remain solid, in the form of small bubbles, when the PCM core has melted.
Latent heat storage can be achieved through changes in the state of matter from liquid→solid, solid→liquid, solid→gas and liquid→gas. However, only solid→liquid and liquid→solid phase changes are practical for PCMs. Although liquid–gas transitions have a higher heat of transformation than solid–liquid transitions, liquid→gas phase changes are impractical for thermal storage because large volumes or high pressures are required to store the materials in their gas phase. Solid–solid phase changes are typically very slow and have a relatively low heat of transformation.
Initially, solid–liquid PCMs behave like sensible heat storage (SHS) materials; their temperature rises as they absorb heat. When PCMs reach their phase change temperature (their melting point) they absorb large amounts of heat at an almost constant temperature until all the material is melted. When the ambient temperature around a liquid material falls, the PCM solidifies, releasing its stored latent heat. A large number of PCMs are available in any required temperature range from −5 up to 190 °C. [ 1 ] Within the human comfort range between 20 and 30 °C, some PCMs are very effective, storing over 200 kJ/kg of latent heat, as against a specific heat capacity of around one kJ/(kg·°C) for masonry. The storage density can therefore be 20 times greater than masonry per kg if a temperature swing of 10 °C is allowed. [ 2 ] However, since the mass of the masonry is far higher than that of the PCM, this specific (per-mass) advantage is somewhat offset. A masonry wall might have a mass of 200 kg/m 2 , so to double its heat capacity one would require an additional 10 kg/m 2 of PCM.
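The arithmetic behind the 20-fold figure and the 10 kg/m2 estimate can be checked with a short sketch; the wall mass, latent heat and temperature swing are the illustrative values quoted in the paragraph above:

```python
# Sketch: PCM vs. masonry storage density for a 10 K temperature swing,
# using the illustrative figures quoted in the paragraph above.
PCM_LATENT_KJ_PER_KG = 200.0       # kJ/kg latent heat of a comfort-range PCM
MASONRY_CP_KJ_PER_KG_K = 1.0       # kJ/(kg*K) specific heat of masonry
SWING_K = 10.0                     # allowed temperature swing

masonry_kJ_per_kg = MASONRY_CP_KJ_PER_KG_K * SWING_K      # 10 kJ/kg over the swing
ratio = PCM_LATENT_KJ_PER_KG / masonry_kJ_per_kg          # about 20x per kg
print(f"PCM stores {ratio:.0f}x more heat per kg than masonry over a 10 K swing")

# Wall comparison: how much PCM is needed to double a masonry wall's capacity.
wall_mass_kg_per_m2 = 200.0
wall_storage_kJ_per_m2 = wall_mass_kg_per_m2 * masonry_kJ_per_kg   # 2000 kJ/m^2
pcm_needed_kg_per_m2 = wall_storage_kJ_per_m2 / PCM_LATENT_KJ_PER_KG
print(f"Matching the wall's {wall_storage_kJ_per_m2:.0f} kJ/m^2 takes about "
      f"{pcm_needed_kg_per_m2:.0f} kg/m^2 of PCM")
```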
Hydrocarbons, primarily paraffins (C n H 2 n +2 ) and lipids but also sugar alcohols. [ 4 ] [ 5 ] [ 6 ]
Salt hydrates (M x N y · n H 2 O) [ 9 ]
Many natural building materials are hygroscopic, that is, they can absorb water (water condenses) and release water (water evaporates). The process is thus:
While this process liberates a small quantity of energy, a large surface area allows significant (1–2 °C) heating or cooling in buildings. The corresponding materials are wool insulation and earth/clay render finishes.
Solid-solid phase-change materials (SSPCMs) are a specialized group of PCMs that undergo a solid/solid phase transition with the associated absorption and release of large amounts of heat. These materials change their crystalline structure from one lattice configuration to another at a fixed and well-defined temperature, and the transformation can involve latent heats comparable to the most effective solid/liquid PCMs. Such materials are useful because, unlike solid/liquid PCMs, they do not require nucleation to prevent supercooling. Additionally, because it is a solid/solid phase change, there is no visible change in the appearance of the PCM, and there are no problems associated with handling liquids, e.g. containment, potential leakage, etc. Currently the temperature range of solid-solid PCM solutions spans from -50 °C (-58 °F) up to +175 °C (347 °F). [ 15 ] [ 16 ] Therefore, these materials have emerged as promising alternatives to traditional solid/liquid PCMs due to their ability to undergo phase transitions without liquefaction. This property eliminates the risk of leakage and enhances material stability. For example, SSPCMs include polymer-based materials such as polyethylene glycol and metal-organic frameworks (MOFs). In addition, SSPCMs have been explored for use in smart textiles, electronics cooling systems, and thermally adaptive building materials. Research efforts continue to optimize their thermal storage density and improve long-term cycling stability, supporting broader commercial applications. In particular, integrating SSPCMs with nanostructured materials and composite frameworks is being investigated to enhance their thermal conductivity and phase transition kinetics.
The phase change material should possess the following thermodynamic properties: [ 17 ]
Kinetic properties
Chemical properties
Economic properties
Key thermophysical properties of phase-change materials include the melting point (T m ), heat of fusion (Δ H fus ), specific heat ( c p ) of the solid and liquid phases, density (ρ) of the solid and liquid phases, and thermal conductivity . The thermal properties of representative PCMs are shown below. [ 18 ] [ 19 ] Values such as volume change and volumetric heat capacity can be calculated therefrom. One major challenge is the inherently low thermal conductivity of many PCMs, which limits their heat transfer efficiency. To address this problem, high-thermal-conductivity additives such as carbon nanotubes , graphene , and metallic nanoparticles have been introduced to enhance their performance. Another critical issue is supercooling, where the PCM remains in a liquid state below its freezing point. Solutions such as nucleating agents and encapsulation techniques have been developed to mitigate this effect. Additionally, volume expansion during phase transitions can impact material stability, necessitating advanced structural designs and containment strategies. Recent studies have also explored nano-enhanced PCMs and composite structures to further optimize thermal response times and cycling stability. [ 20 ] [ 21 ] These nano-enhanced PCMs, particularly those incorporating metal foams, have been shown to enhance thermal conductivity, improving their efficiency in thermal management applications.
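To illustrate how derived quantities such as volumetric heat capacity and volume change follow from these key properties, here is a minimal sketch; the numbers are assumed, paraffin-like illustrative values rather than data for any specific PCM:

```python
# Sketch: deriving volumetric quantities from the key thermophysical properties
# of a PCM. The numbers below are assumed, paraffin-like illustrative values,
# not measured data for any particular material.
heat_of_fusion_kJ_per_kg = 200.0     # latent heat of fusion
specific_heat_kJ_per_kg_K = 2.0      # specific heat of the solid phase
density_solid_kg_per_m3 = 900.0      # density of the solid phase
density_liquid_kg_per_m3 = 780.0     # density of the liquid phase

# Volumetric heat capacity of the solid (sensible storage per cubic metre and kelvin).
volumetric_cp = specific_heat_kJ_per_kg_K * density_solid_kg_per_m3
print(f"Volumetric heat capacity (solid): {volumetric_cp:.0f} kJ/(m^3*K)")

# Volumetric latent heat: energy stored per cubic metre of solid PCM on melting.
volumetric_latent = heat_of_fusion_kJ_per_kg * density_solid_kg_per_m3
print(f"Volumetric latent heat: {volumetric_latent / 1000:.0f} MJ/m^3")

# Volume change on melting: the same mass occupies more space as a liquid,
# which any encapsulation must accommodate.
expansion_percent = (density_solid_kg_per_m3 / density_liquid_kg_per_m3 - 1) * 100
print(f"Volume expansion on melting: about {expansion_percent:.0f} %")
```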
The most commonly used PCMs are salt hydrates , fatty acids and esters , and various paraffins (such as octadecane ). Recently also ionic liquids were investigated as novel PCMs.
As most of the organic solutions are water-free, they can be exposed to air, but all salt-based PCM solutions must be encapsulated to prevent water evaporation or uptake. Both types offer certain advantages and disadvantages, and if they are correctly applied some of the disadvantages become an advantage for certain applications.
They have been used since the late 19th century as a medium for thermal storage applications. They have been used in such diverse applications as refrigerated transportation [ 22 ] for rail [ 23 ] and road applications [ 24 ] and their physical properties are, therefore, well known.
Unlike the ice storage system, however, PCM systems can be used with any conventional water chiller, in both new and retrofit applications. The positive-temperature phase change allows centrifugal and absorption chillers, as well as conventional reciprocating and screw chiller systems, or even lower ambient conditions utilizing a cooling tower or dry cooler, to be used for charging the TES system.
The temperature range offered by PCM technology provides a new horizon for building services and refrigeration engineers regarding medium- and high-temperature energy storage applications. The scope of this thermal energy application is wide-ranging, covering solar heating, hot water, heat rejection (i.e., cooling tower), and dry cooler circuitry thermal energy storage applications.
Since PCMs transform between solid and liquid in thermal cycling, encapsulation [ 25 ] naturally became the obvious storage choice.
As phase change materials perform best in small containers, they are usually divided into cells. The cells are shallow to reduce static head, based on the principle of shallow container geometry. The packaging material should conduct heat well, and it should be durable enough to withstand frequent changes in the storage material's volume as phase changes occur. It should also restrict the passage of water through the walls, so the materials will not dry out (or water-out, if the material is hygroscopic ). Packaging must also resist leakage and corrosion . Common packaging materials showing chemical compatibility with room-temperature PCMs include stainless steel , polypropylene , and polyolefin .
Nanoparticles such as carbon nanotubes, graphite, graphene, metals and metal oxides can be dispersed in a PCM. It is worth noting that the inclusion of nanoparticles alters not only the thermal conductivity of the PCM but other characteristics as well, including latent heat capacity, sub-cooling, phase change temperature and its duration, density and viscosity. This new group of PCMs is called NePCMs. [ 26 ] NePCMs can be added to metal foams to build an even more thermally conductive combination. [ 27 ]
Thermal composites is a term given to combinations of phase change materials (PCMs) and other (usually solid) structures. A simple example is a copper mesh immersed in paraffin wax. The copper mesh within paraffin wax can be considered a composite material, dubbed a thermal composite. Such hybrid materials are created to achieve specific overall or bulk properties (an example being the encapsulation of paraffin into distinct silicon dioxide nanospheres for increased surface area-to-volume ratio and, thus, higher heat transfer speeds [ 28 ] ).
Thermal conductivity is a common property targeted for maximization by creating thermal composites. In this case, the basic idea is to increase thermal conductivity by adding a highly conducting solid (such as the copper mesh or graphite [ 29 ] ) into the relatively low-conducting PCM, thus increasing overall or bulk (thermal) conductivity. [ 30 ] If the PCM is required to flow, the solid must be porous, such as a mesh.
Solid composites such as fiberglass or kevlar prepreg for the aerospace industry usually refer to a fiber (the kevlar or the glass) and a matrix (the glue, which solidifies to hold fibers and provide compressive strength). A thermal composite is not so clearly defined, but could similarly refer to a matrix (solid) and the PCM, which is of course usually liquid and/or solid depending on conditions.
PTCPCESMs are composite phase change materials with photo-thermal materials. They have wide applications in various industries, owing to their high thermal conductivity, photo-thermal conversion efficiency, latent heat storage capacity, physicochemical stability, and energy saving effect. [ 31 ]
PTCPCESMs mainly consist of functional carrier materials and organic PCMs. During the solid-liquid phase transition, organic PCMs can absorb and release a large amount of latent heat. Meanwhile, functional carrier materials not only enhance the stability and efficiency of photo-thermal conversion but also introduce various energy conversion functions. [ 31 ] The photo-thermal conversion is related to the band structure and other electronic properties of the photo-thermal materials, which determine how much of the solar spectrum is absorbed. This is achieved using materials like carbon-based nanostructures (e.g., graphene, CNTs), plasmonic nanoparticles (e.g., Au, Ag), and semiconductors (e.g., TiO 2 , MoS 2 ). Common PCMs include organic materials (paraffins, fatty acids) and inorganic materials (salt hydrates, metal alloys).
Researchers have been working on high-efficiency PTCPCESMs. One difunctional phase change composite, integrating phase change materials with photothermal conversion materials, can reach 51.25% photothermal conversion efficiency and shows no leakage at 60 °C for 24 h. [ 32 ] Some researchers synthesized a novel form-stable solar-thermal conversion and storage material by incorporating amino-functionalized single-walled carbon nanotubes into a polyethylene glycol-based polyurethane PCM, and reached a solar-thermal conversion and storage efficiency of 89.3%. [ 33 ]
High-performance PCM development
Recent research has focused on enhancing the efficiency and stability of PCMs through material innovations. New organic-inorganic composite PCMs, such as paraffin-based microencapsulated systems and salt hydrates with enhanced thermal conductivity , have demonstrated improved energy storage capabilities. [ 34 ] In addition, metal-organic frameworks (MOFs) have been investigated as potential PCM candidates due to their tunable phase transition properties and high thermal storage density. [ 35 ]
Applications in energy storage and management
PCMs have been increasingly utilized in energy storage systems, particularly in renewable energy applications. One promising approach is the integration of PCMs into thermal energy storage units for solar and wind power systems. [ 36 ] By mitigating fluctuations in power generation, these materials enhance the reliability of renewable energy sources. Furthermore, the incorporation of PCMs into lithium-ion battery systems has shown potential in managing thermal runaway, thereby improving battery safety and longevity. [ 37 ] [ 38 ] [ 39 ] Additionally, PCM-enhanced smart windows and walls have been developed to regulate indoor temperatures and reduce building energy consumption by up to 30%. [ 40 ] PCM-integrated heat pump systems have also demonstrated significant savings in heating and cooling applications.
Challenges and future prospects
Despite their advantages, PCMs face several challenges that must be addressed for widespread implementation. One major limitation is their low thermal conductivity, which can reduce heat transfer efficiency. To address this challenge, efforts are underway to incorporate high-thermal-conductivity fillers such as graphene and carbon nanotubes . [ 41 ] Another concern is the long-term stability of PCMs, as repeated phase transitions can lead to material degradation and phase separation. Encapsulation techniques and novel stabilizing additives are being developed to overcome these issues. [ 42 ] Looking forward, advancements in nano-enhanced PCMs and hybrid materials are expected to further expand their applications, making them integral to future energy-efficient technologies.
Applications [ 1 ] [ 43 ] of phase change materials include, but are not limited to:
Some phase change materials are suspended in water, and are relatively nontoxic. Others are hydrocarbons or other flammable materials, or are toxic. As such, PCMs must be selected and applied very carefully, in accordance with fire and building codes and sound engineering practices. Because of the increased fire risk, flamespread, smoke, potential for explosion when held in containers, and liability, it may be wise not to use flammable PCMs within residential or other regularly occupied buildings. Phase change materials are also being used in thermal regulation of electronics. | https://en.wikipedia.org/wiki/Phase-change_material |
Phase-contrast imaging is a method of imaging that has a range of different applications. It measures differences in the refractive index of different materials to differentiate between structures under analysis. In conventional light microscopy , phase contrast can be employed to distinguish between structures of similar transparency, and to examine crystals on the basis of their double refraction . This has uses in biological, medical and geological science. In X-ray tomography , the same physical principles can be used to increase image contrast by highlighting small details of differing refractive index within structures that are otherwise uniform. In transmission electron microscopy (TEM), phase contrast enables very high resolution (HR) imaging, making it possible to distinguish features a few angstroms apart (at present the highest resolution is 40 pm [ 1 ] ).
Phase-contrast imaging is commonly used in atomic physics to describe a range of techniques for dispersively imaging ultracold atoms . Dispersion is the phenomena of the propagation of electromagnetic fields (light) in matter. In general, the refractive index of a material, which alters the phase velocity and refraction of the field, depends on the wavelength or frequency of the light. This is what gives rise to the familiar behavior of prisms , which are seen to split light into its constituent wavelengths. Microscopically, we may think of this behavior as arising from the interaction of the electromagnetic wave with the atomic dipoles . The oscillating force field in turn causes the dipoles to oscillate and in doing so reradiate light with the same polarization and frequency, albeit delayed or phase-shifted from the incident wave. These waves interfere to produce the altered wave which propagates through the medium. If the light is monochromatic (that is, an electromagnetic wave of a single frequency or wavelength), with a frequency close to an atomic transition , the atom will also absorb photons from the light field, reducing the amplitude of the incident wave. Mathematically, these two interaction mechanisms (dispersive and absorptive) are commonly written as the real and imaginary parts, respectively, of a Complex refractive index . [ citation needed ]
Dispersive imaging refers strictly to the measurement of the real part of the refractive index. In phase-contrast imaging, a monochromatic probe field is detuned far away from any atomic transitions to minimize absorption and shone onto an atomic medium (such as a Bose-condensed gas ). Since absorption is minimized, the only effect of the gas on the light is to alter the phase of various points along its wavefront. If we write the incident electromagnetic field as
E i = x ^ E 0 e i ( ω 0 t − k z ) {\displaystyle \mathbf {E} _{i}={\hat {\mathbf {x} }}E_{0}e^{i(\omega _{0}t-kz)}}
then the effect of the medium is to phase shift the wave by some amount Φ {\displaystyle \Phi } which is in general a function of ( x , y ) {\displaystyle (x,y)} in the plane of the object (unless the object is of homogeneous density, i.e. of constant index of refraction), where we assume the phase shift to be small, such that we can neglect refractive effects:
E i → E P M = x ^ E 0 e i ( ω 0 t − k z + Φ ) {\displaystyle \mathbf {E} _{i}\to \mathbf {E} _{PM}={\hat {\mathbf {x} }}E_{0}e^{i(\omega _{0}t-kz+\Phi )}}
We may think of this wave as a superposition of smaller bundles of waves each with a corresponding phase shift φ ( x , y ) {\displaystyle \varphi (x,y)} :
E P M = x ^ E 0 A o ∫ ( x , y ) e i ( ω 0 t − k z + φ ( x , y ) ) d x d y {\displaystyle \mathbf {E} _{PM}={\hat {\mathbf {x} }}{\frac {E_{0}}{A_{o}}}\int _{(x,y)}e^{i(\omega _{0}t-kz+\varphi (x,y))}\,dx\,dy}
where A o {\displaystyle A_{o}} is a normalization constant and the integral is over the area of the object plane. Since φ ( x , y ) {\displaystyle \varphi (x,y)} is assumed to be small, we may expand that part of the exponential to first order such that
E P M → x ^ E 0 A o e i ( ω 0 t − k z ) ∫ ( x , y ) ( 1 + i φ ( x , y ) ) d x d y = x ^ E 0 [ cos ( ω 0 t − k z ) − φ ~ A o sin ( ω 0 t − k z ) + i ( φ ~ A o cos ( ω 0 t − k z ) + sin ( ω 0 t − k z ) ) ] {\displaystyle {\begin{aligned}\mathbf {E} _{PM}&\to {\hat {\mathbf {x} }}{\frac {E_{0}}{A_{o}}}e^{i(\omega _{0}t-kz)}\int _{(x,y)}(1+i\varphi (x,y))\,dx\,dy\\&={\hat {\mathbf {x} }}E_{0}{\bigg [}\cos(\omega _{0}t-kz)-{\frac {\tilde {\varphi }}{A_{o}}}\sin(\omega _{0}t-kz)+i{\bigg (}{\frac {\tilde {\varphi }}{A_{o}}}\cos(\omega _{0}t-kz)+\sin(\omega _{0}t-kz){\bigg )}{\bigg ]}\end{aligned}}}
where φ ~ = ∫ φ ( x , y ) d x d y {\displaystyle {\tilde {\varphi }}=\int \varphi (x,y)\,dx\,dy} represents the integral over all small changes in phase to the wavefront due to each point in the area of the object. Looking at the real part of this expression, we find the sum of a wave with the original unshifted phase ω 0 t − k z {\displaystyle \omega _{0}t-kz} , with a wave that is π / 2 {\displaystyle \pi /2} out of phase and has very small amplitude φ ~ A o {\displaystyle {\frac {\tilde {\varphi }}{A_{o}}}} . As written, this is simply another complex wave E 0 e i ξ {\displaystyle E_{0}e^{i\xi }} with phase
ξ = arctan ( φ ~ A o cos ( ω 0 t − k z ) + sin ( ω 0 t − k z ) cos ( ω 0 t − k z ) − φ ~ A o sin ( ω 0 t − k z ) ) {\displaystyle \xi =\arctan {\bigg (}{\frac {{\frac {\tilde {\varphi }}{A_{o}}}\cos(\omega _{0}t-kz)+\sin(\omega _{0}t-kz)}{\cos(\omega _{0}t-kz)-{\frac {\tilde {\varphi }}{A_{o}}}\sin(\omega _{0}t-kz)}}{\bigg )}}
Since imaging systems see only changes in the intensity of the electromagnetic waves, which is proportional to the square of the electric field, we have I P M ∝ | E P M | 2 = | x ^ E 0 e i ξ | 2 = E 0 2 = | E i | 2 = | x ^ E 0 e i ( ω 0 t − k z ) | 2 = E 0 2 {\displaystyle I_{PM}\propto |\mathbf {E} _{PM}|^{2}=|{\hat {\mathbf {x} }}E_{0}e^{i\xi }|^{2}=E_{0}^{2}=|\mathbf {E} _{i}|^{2}=|{\hat {\mathbf {x} }}E_{0}e^{i(\omega _{0}t-kz)}|^{2}=E_{0}^{2}} . We see that both the incident wave and the phase shifted wave are equivalent in this respect. Such objects, which impart only phase changes to the light that passes through them, are commonly referred to as phase objects, and are for this reason invisible to any imaging system. However, if we look more closely at the real part of our phase shifted wave
ℜ [ E P M ] = x ^ E 0 [ cos ( ω 0 t − k z ) − φ ~ A o sin ( ω 0 t − k z ) ] {\displaystyle \Re [\mathbf {E} _{PM}]={\hat {\mathbf {x} }}E_{0}{\bigg [}\cos(\omega _{0}t-kz)-{\frac {\tilde {\varphi }}{A_{o}}}\sin(\omega _{0}t-kz){\bigg ]}}
and suppose we could shift the term unaltered by the phase object (the cosine term) by π / 2 {\displaystyle \pi /2} , such that cos ( ω 0 t − k z ) → cos ( ω 0 t − k z + π / 2 ) = sin ( ω 0 t − k z ) {\displaystyle \cos(\omega _{0}t-kz)\to \cos(\omega _{0}t-kz+\pi /2)=\sin(\omega _{0}t-kz)} , then we have
ℜ [ E P M ] = x ^ E 0 ( 1 − φ ~ A o ) sin ( ω 0 t − k z ) {\displaystyle \Re [\mathbf {E} _{PM}]={\hat {\mathbf {x} }}E_{0}{\bigg (}1-{\frac {\tilde {\varphi }}{A_{o}}}{\bigg )}\sin(\omega _{0}t-kz)}
The phase shifts due to the phase object are effectively converted into amplitude fluctuations of a single wave. These would be detectable by an imaging system since the intensity is now I ∝ E 0 2 ( 1 − φ ~ / A o ) 2 {\displaystyle I\propto E_{0}^{2}(1-{\tilde {\varphi }}/A_{o})^{2}} . This is the basis of the idea of phase contrast imaging. [ 2 ] As an example, consider the setup shown in the figure on the right.
A probe laser is incident on a phase object. This could be an atomic medium such as a Bose–Einstein condensate. [ 3 ] The laser light is detuned far from any atomic resonance, such that the phase object only alters the phase of various points along the portion of the wavefront which passes through the object. The rays which pass through the phase object will diffract as a function of the index of refraction of the medium and diverge as shown by the dotted lines in the figure. The objective lens collimates this light, while focusing the so-called 0-order light, that is, the portion of the beam unaltered by the phase object (solid lines). This light comes to a focus in the focal plane of the objective lens, where a phase plate can be positioned to delay only the phase of the 0-order beam, bringing it back into phase with the diffracted beam and converting the phase alterations in the diffracted beam into intensity fluctuations at the imaging plane. The phase plate is usually a piece of glass with a raised center encircled by a shallower etch, such that light passing through the center is delayed in phase relative to that passing through the edges.
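The conversion of phase into intensity can be illustrated numerically. The following is a minimal 1D sketch (not from the source; the Gaussian phase bump, the grid size, and the shortcut of phase-shifting only the DC Fourier component in place of a real phase plate are all assumptions):

```python
import numpy as np

# A weak phase object is invisible in an ordinary intensity image, but becomes
# visible once the unscattered (0-order) light is phase-shifted by pi/2.
N = 1024
x = np.linspace(-1, 1, N)
phi = 0.05 * np.exp(-x**2 / 0.01)      # small phase bump: the "phase object"
E = np.exp(1j * phi)                   # unit-amplitude wave just after the object

I_plain = np.abs(E)**2                 # ordinary image: uniform, contrast ~ 0

F = np.fft.fft(E)
F[0] *= np.exp(1j * np.pi / 2)         # phase plate: shift only the 0-order light by pi/2
I_pc = np.abs(np.fft.ifft(F))**2       # phase-contrast image: contrast ~ 2*phi

print("plain image contrast:          %.4f" % (I_plain.max() - I_plain.min()))
print("phase-contrast image contrast: %.4f" % (I_pc.max() - I_pc.min()))
```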
In polarization contrast imaging, the Faraday effect of the light-matter interaction is leveraged to image the cloud using a standard absorption imaging setup altered with a far-detuned probe beam and an extra polarizer. The Faraday effect rotates the linear polarization of the probe beam as it passes through a cloud polarized by a strong magnetic field along the propagation direction of the probe beam. [ citation needed ]
Classically, a linearly polarized probe beam may be thought of as a superposition of two oppositely handed, circularly polarized beams. The rotating magnetic field of each circularly polarized component interacts with the magnetic dipoles of atoms in the sample. If the sample is magnetically polarized in a direction with non-zero projection onto the light field k-vector, the two circularly polarized beams will interact with the magnetic dipoles of the sample with different strengths, corresponding to a relative phase shift between the two beams. This phase shift in turn maps to a rotation of the input beam's linear polarization. [ citation needed ]
The quantum physics of the Faraday interaction may be described by the interaction of the second quantized Stokes parameters describing the polarization of a probe light field with the total angular momentum state of the atoms. Thus, if a BEC or other cold, dense sample of atoms is prepared in a particular spin (hyperfine) state polarized parallel to the imaging light propagation direction, both the density and change in spin state may be monitored by feeding the transmitted probe beam through a beam splitter before imaging onto a camera sensor. By adjusting the polarizer optic axis relative to the input linear polarization one can switch between a dark field scheme (zero light in the absence of atoms), and variable phase contrast imaging. [ 4 ] [ 5 ] [ 6 ]
In addition to phase-contrast, there are a number of other similar dispersive imaging methods. In the dark-field method , [ 7 ] the aforementioned phase plate is made completely opaque, such that the 0-order contribution to the beam is totally removed. In the absence of any imaging object the image plane would be dark. This amounts to removing the factor of 1 in the equation
ℜ [ E P M ] = x ^ E 0 ( 1 − φ ~ A o ) sin ( ω 0 t − k z ) → x ^ E 0 φ ~ A o sin ( ω 0 t − k z ) {\displaystyle \Re [\mathbf {E} _{PM}]={\hat {\mathbf {x} }}E_{0}{\bigg (}1-{\frac {\tilde {\varphi }}{A_{o}}}{\bigg )}\sin(\omega _{0}t-kz)\to {\hat {\mathbf {x} }}E_{0}{\frac {\tilde {\varphi }}{A_{o}}}\sin(\omega _{0}t-kz)}
from above. Comparing the squares of the two equations, one finds that in the dark-field case the range of contrast (or dynamic range of the intensity signal) is actually reduced. For this reason this method has fallen out of use.
In the defocus-contrast method , [ 8 ] [ 9 ] the phase plate is replaced by a defocusing of the objective lens. Doing so breaks the equivalence of parallel ray path lengths such that a relative phase is acquired between parallel rays. By controlling the amount of defocusing one can thus achieve an effect similar to that of the phase plate in standard phase contrast. In this case, however, the defocusing scrambles the phase and amplitude modulation of the diffracted rays from the object in a way that does not capture the exact phase information of the object, but produces an intensity signal proportional to the amount of phase noise in the object. [ citation needed ]
There is also another method, called the bright-field balanced (BBD) method . This method leverages the complementary intensity changes of transmitted disks at different scattering angles to provide straightforward, dose-efficient, and noise-robust phase imaging from atomic resolution to intermediate length scales, for example resolving both light and heavy atomic columns and nanoscale magnetic phases in FeGe samples. [ 10 ]
Phase contrast takes advantage of the fact that different structures have different refractive indices, and either bend, refract or delay the light passage through the sample by different amounts. The changes in the light passage result in waves being 'out of phase' with others. This effect can be transformed by phase contrast microscopes into amplitude differences that are observable in the eyepieces and are depicted effectively as darker or brighter areas of the resultant image. [ citation needed ]
Phase contrast is used extensively in optical microscopy, in both biological and geological sciences. In biology, it is employed in viewing unstained biological samples, making it possible to distinguish between structures that are of similar transparency or refractive indices. [ citation needed ]
In geology, phase contrast is exploited to highlight differences between mineral crystals cut to a standardised thin section (usually 30 μm ) and mounted under a light microscope. Crystalline materials are capable of exhibiting double refraction , in which light rays entering a crystal are split into two beams that may exhibit different refractive indices, depending on the angle at which they enter the crystal. The phase contrast between the two rays can be detected with the human eye using particular optical filters. As the exact nature of the double refraction varies for different crystal structures, phase contrast aids in the identification of minerals. [ citation needed ]
There are four main techniques for X-ray phase-contrast imaging, which use different principles to convert phase variations in the X-rays emerging from the object into intensity variations at an X-ray detector . [ 11 ] [ 12 ] Propagation-based phase contrast [ 13 ] uses free-space propagation to get edge enhancement, Talbot and polychromatic far-field interferometry [ 12 ] [ 14 ] [ 15 ] uses a set of diffraction gratings to measure the derivative of the phase, refraction-enhanced imaging [ 16 ] uses an analyzer crystal also for differential measurement, and x-ray interferometry [ 17 ] uses a crystal interferometer to measure the phase directly. The advantages of these methods compared to normal absorption-contrast X-ray imaging are higher contrast for low-absorbing materials (because phase shift is a different mechanism than absorption) and a contrast-to-noise relationship that increases with spatial frequency (because many phase-contrast techniques detect the first or second derivative of the phase shift), which makes it possible to see smaller details. [ 15 ] One disadvantage is that these methods require more sophisticated equipment, such as synchrotron or microfocus X-ray sources, x-ray optics , and high resolution X-ray detectors. This sophisticated equipment provides the sensitivity required to differentiate between small variations in the refractive index of X-rays passing through different media. The refractive index is normally smaller than 1 with a difference from 1 between 10 −7 and 10 −6 . [ citation needed ]
All of these methods produce images that can be used to calculate the projections (integrals) of the refractive index in the imaging direction. For propagation-based phase contrast there are phase-retrieval algorithms, for Talbot interferometry and refraction-enhanced imaging the image is integrated in the proper direction, and for X-ray interferometry phase unwrapping is performed. For this reason they are well suited for tomography , i.e. reconstruction of a 3D-map of the refractive index of the object from many images at slightly different angles. For X-ray radiation the difference from 1 of the refractive index is essentially proportional to the density of the material. [ citation needed ]
Synchrotron X-ray tomography can employ phase contrast imaging to enable imaging of the interior surfaces of objects. In this context, phase contrast imaging is used to enhance the contrast that would normally be possible from conventional radiographic imaging. A difference in the refractive index between a detail and its surroundings causes a phase shift between the light wave that travels through the detail and that which travels outside the detail. An interference pattern results, marking out the detail. [ 18 ]
This method has been used to image Precambrian metazoan embryos from the Doushantuo Formation in China, allowing the internal structure of delicate microfossils to be imaged without destroying the original specimen. [ 19 ]
In the field of transmission electron microscopy , phase-contrast imaging may be employed to image columns of individual atoms; a more common name is high-resolution transmission electron microscopy . It is the highest resolution imaging technique ever developed, and can allow for resolutions of less than one angstrom (less than 0.1 nanometres). It thus enables the direct viewing of columns of atoms in a crystalline material. [ 20 ] [ 21 ]
The interpretation of these images is not a straightforward task. Computer simulations are used to determine what sort of contrast different structures may produce in a phase-contrast image. These commonly use the multislice method of Cowley and Moodie, [ 22 ] and include the phase changes due to the lens aberrations . [ 23 ] A reasonable amount of information about the sample and the imaging conditions, such as the crystal structure of the material, needs to be understood before the image can be properly interpreted.
The images are formed by removing the objective aperture entirely or by using a very large objective aperture. This ensures that not only the transmitted beam, but also the diffracted ones are allowed to contribute to the image. Instruments that are specifically designed for phase-contrast imaging are called HRTEMs (high resolution transmission electron microscopes), and differ from analytical TEMs mainly in the design of the electron beam column. Advances in spherical aberration (Cs) correction have enabled a new generation of HRTEMs to reach significantly better resolutions. [ 24 ] | https://en.wikipedia.org/wiki/Phase-contrast_imaging |
Phase-contrast microscopy (PCM) is an optical microscopy technique that converts phase shifts in light passing through a transparent specimen to brightness changes in the image. Phase shifts themselves are invisible, but become visible when shown as brightness variations.
When light waves travel through a medium other than a vacuum , interaction with the medium causes the wave amplitude and phase to change in a manner dependent on properties of the medium. Changes in amplitude (brightness) arise from the scattering and absorption of light, which is often wavelength-dependent and may give rise to colors. Photographic equipment and the human eye are only sensitive to amplitude variations. Without special arrangements, phase changes are therefore invisible. Yet, phase changes often convey important information.
Phase-contrast microscopy is particularly important in biology.
It reveals many cellular structures that are invisible with a bright-field microscope , as exemplified in the figure.
These structures were made visible to earlier microscopists by staining , but this required additional preparation and death of the cells.
The phase-contrast microscope made it possible for biologists to study living cells and how they proliferate through cell division . It is one of the few methods available to quantify cellular structure and components without using fluorescence . [ 1 ] After its invention in the early 1930s, [ 2 ] phase-contrast microscopy proved to be such an advancement in microscopy that its inventor Frits Zernike was awarded the Nobel Prize in Physics in 1953. [ 3 ] The woman who manufactured this microscope, Caroline Bleeker , often remains uncredited.
The basic principle to make phase changes visible in phase-contrast microscopy is to separate the illuminating (background) light from the specimen-scattered light (which makes up the foreground details) and to manipulate these differently.
The ring-shaped illuminating light (depicted in green in figure) that passes the condenser annulus is focused on the specimen by the condenser. Some of the illuminating light is scattered by the specimen (yellow). The remaining light is unaffected by the specimen and forms the background light (red). When observing an unstained biological specimen, the scattered light is weak and typically phase-shifted by −90° (due to both the typical thickness of specimens and the refractive index difference between biological tissue and the surrounding medium) relative to the background light. This leads to the foreground (blue vector in accompanying figure) and background (red vector) having nearly the same intensity, resulting in low image contrast .
In a phase-contrast microscope, image contrast is increased in two ways: by generating constructive interference between scattered and background light rays in regions of the field of view that contain the specimen, and by reducing the amount of background light that reaches the image plane. [ 4 ] First, the background light is phase-shifted by −90° by passing it through a phase-shift ring, which eliminates the phase difference between the background and the scattered light rays.
When the light is then focused on the image plane (where a camera or eyepiece is placed), this phase shift causes background and scattered light rays originating from regions of the field of view that contain the sample (i.e., the foreground) to constructively interfere , resulting in an increase in the brightness of these areas compared to regions that do not contain the sample. Finally, the background is dimmed ~70-90% by a neutral density filter ring; this method maximizes the amount of scattered light generated by the illumination light, while minimizing the amount of illumination light that reaches the image plane. Some of the scattered light that illuminates the entire surface of the filter will be phase-shifted and dimmed by the rings, but to a much lesser extent than the background light, which only illuminates the phase-shift and neutral density filter rings.
The above describes negative phase contrast . In its positive form, the background light is instead phase-shifted by +90°. The background light will thus be 180° out of phase relative to the scattered light. The scattered light will then be subtracted from the background light to form an image with a darker foreground and a lighter background, as shown in the first figure. [ 5 ] [ 6 ] [ 7 ]
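The effect of the phase ring and the neutral density ring on image brightness can be illustrated with a simple phasor calculation. This is a hedged sketch with illustrative numbers only (the field amplitudes, the 25% background transmission and the exact −90° scattering phase are assumptions, not values from the source):

```python
import numpy as np

# Background light B and weak specimen-scattered light S, with S ~90 deg behind B.
# The phase ring shifts B by -90 or +90 degrees and a neutral density ring dims it.
B = 1.0                                  # background field amplitude
S = 0.1 * np.exp(-1j * np.pi / 2)        # scattered field, weak and phase-delayed

def foreground_intensity(ring_shift_deg, transmission=0.25):
    """Intensity where the specimen is present, for a given phase-ring shift and
    neutral-density power transmission applied to the background only."""
    Bp = B * np.sqrt(transmission) * np.exp(1j * np.deg2rad(ring_shift_deg))
    return abs(Bp + S) ** 2

background_only = B**2 * 0.25            # empty regions: just the dimmed background
print("negative phase contrast (-90 deg):", round(foreground_intensity(-90), 3),
      "vs background", background_only)  # specimen appears brighter
print("positive phase contrast (+90 deg):", round(foreground_intensity(+90), 3),
      "vs background", background_only)  # specimen appears darker
```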
The success of the phase-contrast microscope has led to a number of subsequent phase-imaging methods.
In 1952, Georges Nomarski patented what is today known as differential interference contrast (DIC) microscopy . [ 8 ] It enhances contrast by creating artificial shadows, as if the object is illuminated from the side. But DIC microscopy is unsuitable when the object or its container alter polarization. With the growing use of polarizing plastic containers in cell biology, DIC microscopy is increasingly replaced by Hoffman modulation contrast microscopy , invented by Robert Hoffman in 1975. [ 9 ]
Traditional phase-contrast methods enhance contrast optically, blending brightness and phase information in a single image. Since the introduction of the digital camera in the mid-1990s, several new digital phase-imaging methods have been developed, collectively known as quantitative phase-contrast microscopy . These methods digitally create two separate images, an ordinary bright-field image and a so-called phase-shift image . In each image point, the phase-shift image displays the quantified phase shift induced by the object, which is proportional to the optical thickness of the object. [ 10 ] In this way measurement of the associated optical field can remedy the halo artifacts associated with conventional phase contrast by solving an optical inverse problem to computationally reconstruct the scattering potential of the object. [ 11 ] | https://en.wikipedia.org/wiki/Phase-contrast_microscopy |
A phase-field model is a mathematical model for solving interfacial problems. It has mainly been applied to solidification dynamics, [ 1 ] but it has also been applied to other situations such as viscous fingering , [ 2 ] fracture mechanics, [ 3 ] [ 4 ] [ 5 ] [ 6 ] hydrogen embrittlement , [ 7 ] and vesicle dynamics. [ 8 ] [ 9 ] [ 10 ] [ 11 ]
The method substitutes boundary conditions at the interface by a partial differential equation for the evolution of an auxiliary field (the phase field) that takes the role of an order parameter . This phase field takes two distinct values (for instance +1 and −1) in each of the phases, with a smooth change between both values in the zone around the interface, which is then diffuse with a finite width. A discrete location of the interface may be defined as the collection of all points where the phase field takes a certain value (e.g., 0).
A phase-field model is usually constructed in such a way that in the limit of an infinitesimal interface width (the so-called sharp interface limit) the correct interfacial dynamics are recovered. This approach makes it possible to solve the problem by integrating a set of partial differential equations for the whole system, thus avoiding the explicit treatment of the boundary conditions at the interface.
Phase-field models were first introduced by Fix [ 12 ] and Langer, [ 13 ] and have experienced growing interest in solidification and other areas. Langer [ 13 ] showed in handwritten notes that coupled Cahn–Hilliard and Allen–Cahn equations could be used to solve a solidification problem, and George Fix worked on programming it. Langer felt at the time that the method was of no practical use, since the interface thickness is so small compared with the size of a typical microstructure, so he never published the notes.
Phase-field models are usually constructed in order to reproduce a given interfacial dynamics. For instance, in solidification problems the front dynamics is given by a diffusion equation for either concentration or temperature in the bulk and some boundary conditions at the interface (a local equilibrium condition and a conservation law), [ 14 ] which constitutes the sharp interface model.
A number of formulations of the phase-field model are based on a free energy function depending on an order parameter (the phase field) and a diffusive field (variational formulations). Equations of the model are then obtained by using general relations of statistical physics . Such a function is constructed from physical considerations, but contains a parameter or combination of parameters related to the interface width. Parameters of the model are then chosen by studying the limit of the model with this width going to zero, in such a way that one can identify this limit with the intended sharp interface model.
Other formulations start by writing directly the phase-field equations, without referring to any thermodynamical functional (non-variational formulations). In this case the only reference is the sharp interface model, in the sense that it should be recovered when performing the small interface width limit of the phase-field model.
Phase-field equations in principle reproduce the interfacial dynamics when the interface width is small compared with the smallest length scale in the problem. In solidification this scale is the capillary length d o {\displaystyle d_{o}} , which is a microscopic scale. From a computational point of view integration of partial differential equations resolving such a small scale is prohibitive. However, Karma and Rappel introduced the thin interface limit, [ 15 ] which made it possible to relax this condition and has opened the way to practical quantitative simulations with phase-field models.
With the increasing power of computers and the theoretical progress in phase-field modelling, phase-field models have become a useful tool for the numerical simulation of interfacial problems.
A model for a phase field can be constructed by physical arguments if one has an explicit expression for the free energy of the system. A simple example for solidification problems is the following:
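One functional of this general type, written here as a sketch in terms of the quantities defined below (the precise form of the coupling term varies between formulations), is

F [ e , φ ] = ∫ d V [ K 2 | ∇ φ | 2 + h 0 f ( φ ) + e 0 2 u 2 ] {\displaystyle F[e,\varphi ]=\int \mathrm {d} V\left[{\frac {K}{2}}|\nabla \varphi |^{2}+h_{0}f(\varphi )+{\frac {e_{0}}{2}}u^{2}\right]}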
where φ {\displaystyle \varphi } is the phase field, u ( φ ) = e / e 0 + h ( φ ) / 2 {\displaystyle u(\varphi )=e/e_{0}+h(\varphi )/2} , e {\displaystyle e} is the local enthalpy per unit volume, h {\displaystyle h} is a certain polynomial function of φ {\displaystyle \varphi } , and e 0 = L 2 / T M c p {\displaystyle e_{0}={L^{2}}/{T_{M}c_{p}}} (where L {\displaystyle L} is the latent heat , T M {\displaystyle T_{M}} is the melting temperature, and c p {\displaystyle c_{p}} is the specific heat). The term with ∇ φ {\displaystyle \nabla \varphi } corresponds to the interfacial energy. The function f ( φ ) {\displaystyle f(\varphi )} is usually taken as a double-well potential describing the free energy density of the bulk of each phase, which themselves correspond to the two minima of the function f ( φ ) {\displaystyle f(\varphi )} . The constants K {\displaystyle K} and h 0 {\displaystyle h_{0}} have respectively dimensions of energy per unit length and energy per unit volume. The interface width is then given by W = K / h 0 {\displaystyle W={\sqrt {K/h_{0}}}} .
The phase-field model can then be obtained from the following variational relations: [ 16 ]
where D is a diffusion coefficient for the variable e {\displaystyle e} , and η {\displaystyle \eta } and q e {\displaystyle \mathbf {q} _{e}} are stochastic terms accounting for thermal fluctuations (and whose statistical properties can be obtained from the fluctuation dissipation theorem ). The first equation gives an equation for the evolution of the phase field, whereas the second one is a diffusion equation, which usually is rewritten for the temperature or for the concentration (in the case of an alloy). These equations are, scaling space with l {\displaystyle l} and times with l 2 / D {\displaystyle l^{2}/D} :
where ε = W / l {\displaystyle \varepsilon =W/l} is the nondimensional interface width, α = D τ / W 2 h 0 {\displaystyle \alpha ={D\tau }/{W^{2}h_{0}}} , and η ~ ( r , t ) {\displaystyle {\tilde {\eta }}({\mathbf {r} },t)} , q u ( r , t ) {\displaystyle \mathbf {q} _{u}(\mathbf {r} ,t)} are nondimensionalized noises.
The choice of free energy function, f ( φ ) {\displaystyle f(\varphi )} , can have a significant effect on the physical behaviour of the interface, and should be selected with care. The double-well function represents an approximation of the Van der Waals equation of state near the critical point, and has historically been used for its simplicity of implementation when the phase-field model is employed solely for interface tracking purposes. But this has led to the frequently observed spontaneous drop shrinkage phenomenon, whereby the high phase miscibility predicted by an equation of state near the critical point allows significant interpenetration of the phases and can eventually lead to the complete disappearance of a droplet whose radius is below some critical value. [ 17 ] Minimizing perceived continuity losses over the duration of a simulation requires limits on the mobility parameter, resulting in a delicate balance between interfacial smearing due to convection, interfacial reconstruction due to free energy minimization (i.e. mobility-based diffusion), and phase interpenetration, also dependent on the mobility. A recent review of alternative energy density functions for interface tracking applications has proposed a modified form of the double-obstacle function which avoids the spontaneous drop shrinkage phenomenon and limits on mobility, [ 18 ] with comparative results provided for a number of benchmark simulations using the double-well function and the volume-of-fluid sharp interface technique. The proposed implementation has a computational complexity only slightly greater than that of the double-well function, and may prove useful for interface tracking applications of the phase-field model where the duration/nature of the simulated phenomena introduces phase continuity concerns (i.e. small droplets, extended simulations, multiple interfaces, etc.).
A phase-field model can be constructed to purposely reproduce a given interfacial dynamics as represented by a sharp interface model. In such a case the sharp interface limit (i.e. the limit when the interface width goes to zero) of the proposed set of phase-field equations should be performed. This limit is usually taken by asymptotic expansions of the fields of the model in powers of the interface width ε {\displaystyle \varepsilon } . These expansions are performed both in the interfacial region (inner expansion) and in the bulk (outer expansion), and then are asymptotically matched order by order. The result gives a partial differential equation for the diffusive field and a series of boundary conditions at the interface, which should correspond to the sharp interface model and whose comparison with it provides the values of the parameters of the phase-field model.
Whereas such expansions were performed in early phase-field models only up to the lowest order in ε {\displaystyle \varepsilon } , more recent models use higher order asymptotics (thin interface limits) in order to cancel undesired spurious effects or to include new physics in the model. For example, this technique has made it possible to cancel kinetic effects, [ 15 ] to treat cases with unequal diffusivities in the phases, [ 19 ] to model viscous fingering [ 2 ] and two-phase Navier–Stokes flows, [ 20 ] to include fluctuations in the model, [ 21 ] etc.
In multiphase-field models, the microstructure is described by a set of order parameters, each of which is related to a specific phase or crystallographic orientation. This model is mostly used for solid-state phase transformations where multiple grains evolve (e.g. grain growth , recrystallization or first-order transformations like austenite to ferrite in ferrous alloys). Besides allowing the description of multiple grains in a microstructure, multiphase-field models especially allow for consideration of multiple thermodynamic phases occurring e.g. in technical alloy grades. [ 22 ]
Many of the results for continuum phase-field models have discrete analogues for graphs, just replacing calculus with calculus on graphs .
Fracture in solids is often numerically analyzed within a finite element context using either discrete or diffuse crack representations. Approaches using a finite element representation often make use of strong discontinuities embedded at the intra-element level and often require additional criteria based on, e.g., stresses, strain energy densities or energy release rates or other special treatments such as virtual crack closure techniques and remeshing to determine crack paths. In contrast, approaches using a diffuse crack representation retain the continuity of the displacement field, such as continuum damage models and phase-field fracture theories. The latter traces back to the reformulation of Griffith’s principle in a variational form and has similarities to gradient-enhanced damage-type models. Perhaps the most attractive characteristic of phase-field approaches to fracture is that crack initiation and crack paths are automatically obtained from a minimization problem that couples the elastic and fracture energies. In many situations, crack nucleation can be properly accounted for by following branches of critical points associated with elastic solutions until they lose stability. In particular, phase-field models of fracture can allow nucleation even when the elastic strain energy density is spatially constant. [ 23 ] A limitation of this approach is that nucleation is based on strain energy density and not stress. An alternative view based on introducing a nucleation driving force seeks to address this issue. [ 24 ]
A group of biological cells can self-propel in a complex way due to the consumption of adenosine triphosphate . Interactions between cells, such as cohesion, or various chemical cues can produce movement in a coordinated manner; this phenomenon is called collective cell migration. A theoretical model for these phenomena is the phase-field model, [ 25 ] [ 26 ] [ 27 ] which incorporates a phase field for each cell species and additional field variables such as the chemotactic agent concentration. Such a model can be used for phenomena like cancer, cell extrusion , [ 28 ] wound healing, morphogenesis and ectoplasm phenomena . | https://en.wikipedia.org/wiki/Phase-field_model |
Phase-field models on graphs are a discrete analogue to phase-field models , defined on a graph . They are used in image analysis (for feature identification) and for the segmentation of social networks .
For a graph with vertices V and edge weights ω i , j {\displaystyle \omega _{i,j}} , the graph Ginzburg–Landau functional of a map u : V → R {\displaystyle u:V\to \mathbb {R} } is given by
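A commonly used normalisation (prefactor conventions vary between authors) is

F ε ( u ) = ε 2 ∑ i , j ∈ V ω i , j ( u i − u j ) 2 + 1 ε ∑ j ∈ V W ( u j ) {\displaystyle F_{\varepsilon }(u)={\frac {\varepsilon }{2}}\sum _{i,j\in V}\omega _{i,j}(u_{i}-u_{j})^{2}+{\frac {1}{\varepsilon }}\sum _{j\in V}W(u_{j})}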
where W is a double well potential, for example the quartic potential W ( x ) = x 2 (1 − x 2 ). The graph Ginzburg–Landau functional was introduced by Bertozzi and Flenner. [ 1 ] In analogy to continuum phase-field models, where regions with u close to 0 or 1 are models for two phases of the material, vertices can be classified into those with u j close to 0 or close to 1, and for small ε {\displaystyle \varepsilon } , minimisers of F ε {\displaystyle F_{\varepsilon }} will satisfy that u j is close to 0 or 1 for most nodes, splitting the nodes into two classes.
To effectively minimise F ε {\displaystyle F_{\varepsilon }} , a natural approach is gradient flow ( steepest descent ). This means introducing an artificial time parameter and solving the graph version of the Allen–Cahn equation ,
where Δ {\displaystyle \Delta } is the graph Laplacian . The ordinary continuum Allen–Cahn equation and the graph Allen–Cahn equation are natural counterparts, just replacing ordinary calculus by calculus on graphs .
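A gradient-flow minimisation of the graph Ginzburg–Landau functional can be sketched in a few lines. The following is a hedged illustration (the toy graph, the choice ε = 0.5, the time step and the unnormalised Laplacian convention are assumptions; prefactor conventions vary between authors):

```python
import numpy as np

# Graph Allen-Cahn gradient flow with the quartic double well W(u) = u^2 (1-u)^2,
# integrated with explicit Euler steps: du/dt = -eps * L u - W'(u) / eps.
omega = np.array([[0.0, 1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0, 0.0],
                  [1.0, 1.0, 0.0, 0.1],   # weak edge joining two clusters
                  [0.0, 0.0, 0.1, 0.0]])
L = np.diag(omega.sum(axis=1)) - omega    # unnormalised graph Laplacian

eps, dt, steps = 0.5, 1e-3, 20_000
u = np.array([0.9, 0.8, 0.6, 0.1])        # noisy initial labels in [0, 1]

def dW(u):                                # derivative of W(u) = u^2 (1-u)^2
    return 2.0 * u * (1.0 - u) * (1.0 - 2.0 * u)

for _ in range(steps):
    u = u - dt * (eps * (L @ u) + dW(u) / eps)

print(np.round(u, 3))                     # nodes settle near the wells at 0 and 1
```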
A convergence result for a numerical graph Allen–Cahn scheme has been established by Luo and Bertozzi. [ 2 ]
It is also possible to adapt other computational schemes for mean curvature flow , for example schemes involving thresholding like the Merriman–Bence–Osher scheme, to a graph setting, with analogous results. [ 3 ] | https://en.wikipedia.org/wiki/Phase-field_models_on_graphs |
A phase-locked loop or phase lock loop ( PLL ) is a control system that generates an output signal whose phase is fixed relative to the phase of an input signal. Keeping the input and output phase in lockstep also implies keeping the input and output frequencies the same, thus a phase-locked loop can also track an input frequency. Furthermore, by incorporating a frequency divider , a PLL can generate a stable frequency that is a multiple of the input frequency.
These properties are used for clock synchronization, demodulation , frequency synthesis , clock multipliers , and signal recovery from a noisy communication channel. Since 1969, a single integrated circuit has been able to provide a complete PLL building block, and such devices now offer output frequencies from a fraction of a hertz up to many gigahertz . Thus, PLLs are widely employed in radio , telecommunications , computers (e.g. to distribute precisely timed clock signals in microprocessors ), grid-tie inverters (electronic power converters used to integrate DC renewable resources and storage elements such as photovoltaics and batteries with the power grid), and other electronic applications.
A simple analog PLL is an electronic circuit consisting of a variable frequency oscillator and a phase detector in a feedback loop (Figure 1). The oscillator generates a periodic signal V o with frequency proportional to an applied voltage, hence the term voltage-controlled oscillator (VCO). The phase detector compares the phase of the VCO's output signal with the phase of periodic input reference signal V i and outputs a voltage (stabilized by the filter) to adjust the oscillator's frequency to match the phase of V o to the phase of V i .
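The behaviour of such a loop can be sketched with a sample-by-sample simulation. The following is an illustrative model only (the frequencies, VCO gain and one-pole filter coefficient are assumptions, not values from the source):

```python
import numpy as np

# Behavioural model of Figure 1: multiplying phase detector, one-pole low-pass
# loop filter, and a voltage-controlled oscillator that starts off-frequency.
fs = 1_000_000                  # simulation sample rate, Hz
f_ref = 10_000                  # reference input frequency, Hz
f_free = 9_500                  # VCO free-running frequency, Hz
kv = 2_000                      # VCO gain, Hz per volt
alpha = 0.02                    # loop (low-pass) filter smoothing coefficient

t = np.arange(0, 0.05, 1 / fs)
v_i = np.sin(2 * np.pi * f_ref * t)      # reference signal V_i

theta, vf = 0.0, 0.0
freq_log = []
for vi in v_i:
    v_o = np.cos(theta)                  # VCO output (sits ~90 deg from V_i at lock)
    error = vi * v_o                     # multiplying phase detector
    vf += alpha * (error - vf)           # loop filter: smoothed error voltage
    f_inst = f_free + kv * vf            # filter output steers the VCO frequency
    theta += 2 * np.pi * f_inst / fs     # integrate phase
    freq_log.append(f_inst)

print("mean VCO frequency over the last 2 ms: %.1f Hz (reference: %d Hz)"
      % (np.mean(freq_log[-2000:]), f_ref))
```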
Phase can be proportional to time , [ a ] so a phase difference can correspond to a time difference.
Left alone, different clocks will mark time at slightly different rates. A mechanical clock , for example, might be fast or slow by a few seconds per hour compared to a reference atomic clock (such as the NIST-F2 ). That time difference becomes substantial over time. Instead, the owner can synchronize their mechanical clock (with varying degrees of accuracy) by phase-locking it to a reference clock.
An inefficient synchronization method involves the owner resetting their clock to that more accurate clock's time every week. But, left alone, their clock will still continue to diverge from the reference clock at the same few seconds per hour rate.
A more efficient synchronization method (analogous to the simple PLL in Figure 1) utilizes the fast-slow timing adjust control (analogous to how the VCO's frequency can be adjusted) available on some clocks. Analogously to the phase comparator, the owner could notice their clock's misalignment and turn its timing adjust a small proportional amount to make their clock's frequency a little slower (if their clock was fast) or faster (if their clock was slow). If they don't overcompensate, then their clock will be more accurate than before. Over a series of such weekly adjustments, their clock's notion of a second would agree close enough with the reference clock, so they could be said to be locked both in frequency and phase.
An early electromechanical version of a phase-locked loop was used in 1921 in the Shortt-Synchronome clock .
Spontaneous synchronization of weakly coupled pendulum clocks was noted by the Dutch physicist Christiaan Huygens as early as 1673. [ 1 ] Around the turn of the 19th century, Lord Rayleigh observed synchronization of weakly coupled organ pipes and tuning forks . [ 2 ] In 1919, W. H. Eccles and J. H. Vincent found that two electronic oscillators that had been tuned to oscillate at slightly different frequencies but that were coupled to a resonant circuit would soon oscillate at the same frequency. [ 3 ] Automatic synchronization of electronic oscillators was described in 1923 by Edward Victor Appleton . [ 4 ]
In 1925, David Robertson , first professor of electrical engineering at the University of Bristol , introduced phase locking in his clock design to control the striking of the bell Great George in the new Wills Memorial Building . Robertson's clock incorporated an electromechanical device that could vary the rate of oscillation of the pendulum, and derived correction signals from a circuit that compared the pendulum phase with that of an incoming telegraph pulse from Greenwich Observatory every morning at 10:00 GMT. Including equivalents of every element of a modern electronic PLL, Robertson's system was notably ahead of its time in that its phase detector was a relay logic implementation of the transistor circuits for phase/frequency detectors not seen until the 1970s.
Robertson's work predated research towards what was later named the phase-lock loop in 1932, when British researchers developed an alternative to Edwin Armstrong 's superheterodyne receiver , the Homodyne or direct-conversion receiver . In the homodyne or synchrodyne system, a local oscillator was tuned to the desired input frequency and multiplied with the input signal. The resulting output signal included the original modulation information. The intent was to develop an alternative receiver circuit that required fewer tuned circuits than the superheterodyne receiver. Since the local oscillator would rapidly drift in frequency, an automatic correction signal was applied to the oscillator, maintaining it in the same phase and frequency of the desired signal. The technique was described in 1932, in a paper by Henri de Bellescize, in the French journal L'Onde Électrique . [ 5 ] [ 6 ] [ 7 ]
In analog television receivers since at least the late 1930s, phase-locked-loop horizontal and vertical sweep circuits are locked to synchronization pulses in the broadcast signal. [ 8 ]
In 1969, Signetics introduced a line of low-cost monolithic integrated circuits like the NE565 using bipolar transistors , that were complete phase-locked loop systems on a chip, [ 9 ] and applications for the technique multiplied. A few years later, RCA introduced the CD4046 Micropower Phase-Locked Loop using CMOS , which also became a popular integrated circuit building block.
Phase-locked loop mechanisms may be implemented as either analog or digital circuits. Both implementations use the same basic structure.
Analog PLL circuits include four basic elements: a phase detector, a low-pass loop filter, a voltage-controlled oscillator, and a feedback path (which often contains a frequency divider).
There are several variations of PLLs. Some terms that are used are "analog phase-locked loop" (APLL), also referred to as "linear phase-locked loop" (LPLL), "digital phase-locked loop" (DPLL), "all-digital phase-locked loop" (ADPLL), and "software phase-locked loop" (SPLL). [ 10 ]
Phase-locked loops are widely used for synchronization purposes; in space communications for coherent demodulation and threshold extension , bit synchronization , and symbol synchronization. Phase-locked loops can also be used to demodulate frequency-modulated signals. In radio transmitters, a PLL is used to synthesize new frequencies which are a multiple of a reference frequency, with the same stability as the reference frequency. [ 13 ]
Other applications include:
Some data streams, especially high-speed serial data streams (such as the raw stream of data from the magnetic head of a disk drive), are sent without an accompanying clock. The receiver generates a clock from an approximate frequency reference, and then uses a PLL to phase-align it to the data stream's signal edges . This process is referred to as clock recovery . For this scheme to work, the data stream must have edges frequently enough to correct any drift in the PLL's oscillator. Thus a line code with a hard upper bound on the maximum time between edges (e.g. 8b/10b encoding ) is typically used to encode the data.
If a clock is sent in parallel with data, that clock can be used to sample the data. Because the clock must be received and amplified before it can drive the flip-flops which sample the data, there will be a finite, and process-, temperature-, and voltage-dependent delay between the detected clock edge and the received data window. This delay limits the frequency at which data can be sent. One way of eliminating this delay is to include a deskew PLL on the receive side, so that the clock at each data flip-flop is phase-matched to the received clock. In that type of application, a special form of a PLL called a delay-locked loop (DLL) is frequently used. [ 14 ]
Many electronic systems include processors of various sorts that operate at hundreds of megahertz to gigahertz, well above the practical frequencies of crystal oscillators . Typically, the clocks supplied to these processors come from clock generator PLLs, which multiply a lower-frequency reference clock (usually 50 or 100 MHz) up to the operating frequency of the processor. The multiplication factor can be quite large in cases where the operating frequency is multiple gigahertz and the reference crystal is just tens or hundreds of megahertz.
All electronic systems emit some unwanted radio frequency energy. Various regulatory agencies (such as the FCC in the United States) put limits on the emitted energy and any interference caused by it. The emitted noise generally appears at sharp spectral peaks (usually at the operating frequency of the device, and a few harmonics). A system designer can use a spread-spectrum PLL to reduce interference with high-Q receivers by spreading the energy over a larger portion of the spectrum. For example, by changing the operating frequency up and down by a small amount (about 1%), a device running at hundreds of megahertz can spread its interference evenly over a few megahertz of spectrum, which drastically reduces the amount of noise seen on broadcast FM radio channels, which have a bandwidth of several tens of kilohertz.
Typically, the reference clock enters the chip and drives a phase locked loop (PLL), which then drives the system's clock distribution. The clock distribution is usually balanced so that the clock arrives at every endpoint simultaneously. One of those endpoints is the PLL's feedback input. The function of the PLL is to compare the distributed clock to the incoming reference clock, and vary the phase and frequency of its output until the reference and feedback clocks are phase and frequency matched.
PLLs are ubiquitous—they tune clocks in systems several feet across, as well as clocks in small portions of individual chips. Sometimes the reference clock may not actually be a pure clock at all, but rather a data stream with enough transitions that the PLL is able to recover a regular clock from that stream. Sometimes the reference clock is the same frequency as the clock driven through the clock distribution, other times the distributed clock may be some rational multiple of the reference.
A PLL may be used to synchronously demodulate amplitude modulated (AM) signals. The PLL recovers the phase and frequency of the incoming AM signal's carrier. The recovered phase at the VCO differs from the carrier's by 90°, so it is shifted in phase to match, and then fed to a multiplier. The output of the multiplier contains both the sum and the difference frequency signals, and the demodulated output is obtained by low-pass filtering . Since the PLL responds only to the carrier frequencies which are very close to the VCO output, a PLL AM detector exhibits a high degree of selectivity and noise immunity which is not possible with conventional peak type AM demodulators. However, the loop may lose lock where AM signals have 100% modulation depth. [ 15 ]
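The demodulation step itself can be sketched as follows, assuming the PLL has already recovered and phase-aligned the carrier (all numbers are illustrative assumptions):

```python
import numpy as np

# Synchronous AM detection: multiply by the recovered carrier, then low-pass.
fs, fc, fm = 200_000, 10_000, 300        # sample rate, carrier, message (Hz)
t = np.arange(0, 0.05, 1 / fs)
message = 0.5 * np.sin(2 * np.pi * fm * t)
am = (1 + message) * np.cos(2 * np.pi * fc * t)   # AM signal, 50% modulation depth

carrier = np.cos(2 * np.pi * fc * t)     # carrier as recovered (and phase-aligned) by the PLL
mixed = am * carrier                     # contains a baseband term and a 2*fc term

k = fs // fc                             # crude low-pass: average over one carrier period
demod = 2 * np.convolve(mixed, np.ones(k) / k, mode="same") - 1

print("correlation with the original message: %.3f" % np.corrcoef(demod, message)[0, 1])
```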
One desirable property of all PLLs is that the reference and feedback clock edges be brought into very close alignment. The average difference in time between the phases of the two signals when the PLL has achieved lock is called the static phase offset (also called the steady-state phase error ). The variance between these phases is called tracking jitter . Ideally, the static phase offset should be zero, and the tracking jitter should be as low as possible.
Phase noise is another type of jitter observed in PLLs, and is caused by the oscillator itself and by elements used in the oscillator's frequency control circuit. Some technologies are known to perform better than others in this regard. The best digital PLLs are constructed with emitter-coupled logic ( ECL ) elements, at the expense of high power consumption. To keep phase noise low in PLL circuits, it is best to avoid saturating logic families such as transistor-transistor logic ( TTL ) or CMOS . [ 16 ]
Another desirable property of all PLLs is that the phase and frequency of the generated clock be unaffected by rapid changes in the voltages of the power and ground supply lines, as well as the voltage of the substrate on which the PLL circuits are fabricated. This is called substrate and supply noise rejection . The higher the noise rejection, the better.
To further improve the phase noise of the output, an injection locked oscillator can be employed following the VCO in the PLL.
In digital wireless communication systems (GSM, CDMA etc.), PLLs are used to provide the local oscillator up-conversion during transmission and down-conversion during reception. In most cellular handsets this function has been largely integrated into a single integrated circuit to reduce the cost and size of the handset. However, due to the high performance required of base station terminals, the transmission and reception circuits are built with discrete components to achieve the levels of performance required. GSM local oscillator modules are typically built with a frequency synthesizer integrated circuit and discrete resonator VCOs. [ citation needed ]
Grid-tie inverters based on voltage source inverters source or sink real power into the AC electric grid as a function of the phase angle of the voltage they generate relative to the grid's voltage phase angle, which is measured using a PLL. In photovoltaic applications, the more the sine wave produced leads the grid voltage wave, the more power is injected into the grid. For battery applications, the more the sine wave produced lags the grid voltage wave, the more the battery charges from the grid, and the more the sine wave produced leads the grid voltage wave, the more the battery discharges into the grid. [ citation needed ]
The block diagram shown in the figure shows an input signal, F I , which is used to generate an output, F O . The input signal is often called the reference signal (also abbreviated F REF ). [ 17 ]
At the input, a phase detector (shown as the Phase frequency detector and Charge pump blocks in the figure) compares two input signals, producing an error signal which is proportional to their phase difference. The error signal is then low-pass filtered and used to drive a VCO which creates an output phase. The output is fed through an optional divider back to the input of the system, producing a negative feedback loop . If the output phase drifts, the error signal will increase, driving the VCO phase in the opposite direction so as to reduce the error. Thus the output phase is locked to the phase of the input.
Analog phase locked loops are generally built with an analog phase detector, low-pass filter and VCO placed in a negative feedback configuration. A digital phase locked loop uses a digital phase detector; it may also have a divider in the feedback path or in the reference path, or both, in order to make the PLL's output signal frequency a rational multiple of the reference frequency. A non-integer multiple of the reference frequency can also be created by replacing the simple divide-by- N counter in the feedback path with a programmable pulse swallowing counter . This technique is usually referred to as a fractional-N synthesizer or fractional-N PLL.
The oscillator generates a periodic output signal. Assume that initially the oscillator is at nearly the same frequency as the reference signal. If the phase from the oscillator falls behind that of the reference, the phase detector changes the control voltage of the oscillator so that it speeds up. Likewise, if the phase creeps ahead of the reference, the phase detector changes the control voltage to slow down the oscillator. Since initially the oscillator may be far from the reference frequency, practical phase detectors may also respond to frequency differences, so as to increase the lock-in range of allowable inputs. Depending on the application, either the output of the controlled oscillator, or the control signal to the oscillator, provides the useful output of the PLL system. [ citation needed ]
A phase detector (PD) generates a voltage, which represents the phase difference between two signals. In a PLL, the two inputs of the phase detector are the reference input and the feedback from the VCO. The PD output voltage is used to control the VCO such that the phase difference between the two inputs is held constant, making it a negative feedback system. [ 18 ]
Different types of phase detectors have different performance characteristics.
For instance, the frequency mixer produces harmonics that add complexity in applications where spectral purity of the VCO signal is important. The resulting unwanted (spurious) sidebands, also called " reference spurs ", can dominate the filter requirements and reduce the capture range well below, or increase the lock time beyond, the requirements. In these applications the more complex digital phase detectors are used, which do not have as severe a reference spur component on their output. Also, when in lock, the steady-state phase difference at the inputs using this type of phase detector is near 90 degrees. [ citation needed ]
In PLL applications it is frequently required to know when the loop is out of lock. The more complex digital phase-frequency detectors usually have an output that allows a reliable indication of an out of lock condition.
An XOR gate is often used for digital PLLs as an effective yet simple phase detector. It can also be used in an analog sense with only slight modification to the circuitry.
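Its behaviour is easy to verify numerically. The sketch below (assuming ideal 50% duty-cycle square waves) shows that the time-averaged XOR output grows linearly with the phase difference up to 180 degrees:

```python
import numpy as np

# Average output of an XOR phase detector versus phase difference.
def xor_pd_average(phase_deg, samples=10_000):
    t = np.linspace(0, 1, samples, endpoint=False)          # one period
    a = np.sin(2 * np.pi * t) >= 0                           # reference square wave
    b = np.sin(2 * np.pi * t - np.deg2rad(phase_deg)) >= 0   # feedback square wave
    return np.mean(a ^ b)                                    # duty cycle of the XOR output

for ph in (0, 45, 90, 135, 180):
    print(f"{ph:3d} deg -> average XOR output {xor_pd_average(ph):.2f}")
```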
The block commonly called the PLL loop filter (usually a low-pass filter) generally has two distinct functions.
The primary function is to determine loop dynamics, also called stability . This is how the loop responds to disturbances, such as changes in the reference frequency, changes of the feedback divider, or at startup. Common considerations are the range over which the loop can achieve lock (pull-in range, lock range or capture range), how fast the loop achieves lock (lock time, lock-up time or settling time ) and damping behavior. Depending on the application, this may require one or more of the following: a simple proportion (gain or attenuation), an integral (low-pass filter) and/or derivative ( high-pass filter ). Loop parameters commonly examined for this are the loop's gain margin and phase margin . Common concepts in control theory including the PID controller are used to design this function.
The second common consideration is limiting the amount of reference frequency energy (ripple) appearing at the phase detector output that is then applied to the VCO control input. This frequency modulates the VCO and produces FM sidebands commonly called "reference spurs".
The design of this block can be dominated by either of these considerations, or can be a complex process juggling the interactions of the two. The typical trade-off of increasing the bandwidth is degraded stability. Conversely, the tradeoff of extra damping for better stability is reduced speed and increased settling time. Often the phase-noise is also affected. [ 13 ]
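These trade-offs can be made concrete by evaluating an open-loop response. The sketch below uses a classic second-order loop (phase detector gain, proportional-integral filter, integrating VCO); all gains are illustrative assumptions, not values from the source:

```python
import numpy as np

# Open-loop response G(jw) = Kd * (Kp + Ki/s) * Kv/s for a second-order PLL;
# the unity-gain crossover sets loop speed, the phase margin there indicates damping.
Kd = 0.5                 # phase detector gain, V/rad
Kv = 2 * np.pi * 2e3     # VCO gain, rad/s per volt
Kp, Ki = 0.8, 600.0      # proportional-integral loop filter

w = np.logspace(1, 6, 20_000)                     # angular frequency sweep, rad/s
G = Kd * (Kp + Ki / (1j * w)) * Kv / (1j * w)

i = np.argmin(np.abs(np.abs(G) - 1.0))            # unity-gain crossover
print("crossover frequency: %.0f rad/s" % w[i])
print("phase margin:        %.1f deg" % (180 + np.degrees(np.angle(G[i]))))
```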
All phase-locked loops employ an oscillator element with variable frequency capability. This can be an analog VCO either driven by analog circuitry in the case of an APLL or driven digitally through the use of a digital-to-analog converter as is the case for some DPLL designs. Pure digital oscillators such as a numerically controlled oscillator are used in ADPLLs. [ citation needed ]
PLLs may include a divider between the oscillator and the feedback input to the phase detector to produce a frequency synthesizer . A programmable divider is particularly useful in radio transmitter applications and for computer clocking, since a large number of frequencies can be produced from a single stable, accurate, quartz crystal–controlled reference oscillator (which were expensive before commercial-scale hydrothermal synthesis provided cheap synthetic quartz).
Some PLLs also include a divider between the reference clock and the reference input to the phase detector. If the divider in the feedback path divides by N {\displaystyle N} and the reference input divider divides by M {\displaystyle M} , it allows the PLL to multiply the reference frequency by N / M {\displaystyle N/M} . It might seem simpler to just feed the PLL a lower frequency, but in some cases the reference frequency may be constrained by other issues, and then the reference divider is useful.
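As a sketch of the resulting arithmetic (the reference frequency, N, and M below are hypothetical values, not taken from any particular device):

```python
def pll_output_frequency(f_ref_hz, n, m):
    """Ideal locked output frequency of a PLL whose feedback path divides
    by N and whose reference path divides by M: f_out = f_ref * N / M."""
    return f_ref_hz * n / m

# Hypothetical example: a 19.2 MHz crystal reference, N = 1250, M = 12
print(pll_output_frequency(19.2e6, 1250, 12) / 1e6, "MHz")  # -> 2000.0 MHz
```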
Frequency multiplication can also be attained by locking the VCO output to the N th harmonic of the reference signal. Instead of a simple phase detector, the design uses a harmonic mixer (sampling mixer). The harmonic mixer turns the reference signal into an impulse train that is rich in harmonics. [ b ] The VCO output is coarse tuned to be close to one of those harmonics. Consequently, the desired harmonic mixer output (representing the difference between the N harmonic and the VCO output) falls within the loop filter passband.
The feedback path is not limited to a frequency divider. It can contain other elements, such as a frequency multiplier or a mixer. A multiplier makes the VCO output a sub-multiple (rather than a multiple) of the reference frequency. A mixer can translate the VCO frequency by a fixed offset. The feedback may also be a combination of these. For example, a divider following a mixer allows the divider to operate at a much lower frequency than the VCO without a loss in loop gain.
The equations governing a phase-locked loop with an analog multiplier as the phase detector and linear filter may be derived as follows. Let the input to the phase detector be f 1 ( θ 1 ( t ) ) {\displaystyle f_{1}(\theta _{1}(t))} and the output of the VCO be f 2 ( θ 2 ( t ) ) {\displaystyle f_{2}(\theta _{2}(t))} with phases θ 1 ( t ) {\displaystyle \theta _{1}(t)} and θ 2 ( t ) {\displaystyle \theta _{2}(t)} . The functions f 1 ( θ ) {\displaystyle f_{1}(\theta )} and f 2 ( θ ) {\displaystyle f_{2}(\theta )} describe the waveforms of the signals. Then the output of the phase detector φ ( t ) {\displaystyle \varphi (t)} is given by
φ ( t ) = f 1 ( θ 1 ( t ) ) f 2 ( θ 2 ( t ) ) {\displaystyle \varphi (t)=f_{1}(\theta _{1}(t))\,f_{2}(\theta _{2}(t))}
The VCO frequency is usually taken as a function of the VCO input g ( t ) {\displaystyle g(t)} as
θ ˙ 2 ( t ) = ω free + g v g ( t ) {\displaystyle {\dot {\theta }}_{2}(t)=\omega _{\text{free}}+g_{v}\,g(t)}
where g v {\displaystyle g_{v}} is the sensitivity of the VCO and is expressed in Hz / V; ω free {\displaystyle \omega _{\text{free}}} is the free-running frequency of the VCO.
The loop filter can be described by a system of linear differential equations
x ˙ = A x + b φ ( t ) , g ( t ) = c ∗ x , x ( 0 ) = x 0 {\displaystyle {\dot {x}}=Ax+b\varphi (t),\qquad g(t)=c^{*}x,\qquad x(0)=x_{0}}
where φ ( t ) {\displaystyle \varphi (t)} is the input of the filter, g ( t ) {\displaystyle g(t)} is the output of the filter, A {\displaystyle A} is an n {\displaystyle n} -by- n {\displaystyle n} matrix, x ∈ C n , b ∈ R n , c ∈ C n {\displaystyle x\in \mathbb {C} ^{n},\quad b\in \mathbb {R} ^{n},\quad c\in \mathbb {C} ^{n}} , and x 0 ∈ C n {\displaystyle x_{0}\in \mathbb {C} ^{n}} represents the initial state of the filter. The star symbol denotes the conjugate transpose .
Hence the following system of equations describes the PLL
x ˙ = A x + b f 1 ( θ 1 ( t ) ) f 2 ( θ 2 ( t ) ) , θ ˙ 2 = ω free + g v c ∗ x , x ( 0 ) = x 0 , θ 2 ( 0 ) = θ 0 {\displaystyle {\dot {x}}=Ax+bf_{1}(\theta _{1}(t))f_{2}(\theta _{2}(t)),\qquad {\dot {\theta }}_{2}=\omega _{\text{free}}+g_{v}c^{*}x,\qquad x(0)=x_{0},\quad \theta _{2}(0)=\theta _{0}}
where θ 0 {\displaystyle \theta _{0}} is an initial phase shift.
Consider the case where the input of the PLL f 1 ( θ 1 ( t ) ) {\displaystyle f_{1}(\theta _{1}(t))} and the VCO output f 2 ( θ 2 ( t ) ) {\displaystyle f_{2}(\theta _{2}(t))} are high-frequency signals. Then for any piecewise differentiable 2 π {\displaystyle 2\pi } -periodic functions f 1 ( θ ) {\displaystyle f_{1}(\theta )} and f 2 ( θ ) {\displaystyle f_{2}(\theta )} there is a function φ ( θ ) {\displaystyle \varphi (\theta )} such that the output G ( t ) {\displaystyle G(t)} of the filter in the phase-domain model is asymptotically equal (the difference G ( t ) − g ( t ) {\displaystyle G(t)-g(t)} is small with respect to the frequencies) to the output of the filter in the time-domain model. [ 19 ] [ 20 ] Here the function φ ( θ ) {\displaystyle \varphi (\theta )} is called the phase detector characteristic .
Denote by θ Δ ( t ) {\displaystyle \theta _{\Delta }(t)} the phase difference
θ Δ ( t ) = θ 1 ( t ) − θ 2 ( t ) {\displaystyle \theta _{\Delta }(t)=\theta _{1}(t)-\theta _{2}(t)}
Then the following dynamical system describes PLL behavior
Here ω Δ = ω 1 − ω free {\displaystyle \omega _{\Delta }=\omega _{1}-\omega _{\text{free}}} ; ω 1 {\displaystyle \omega _{1}} is the frequency of a reference oscillator (we assume that ω free {\displaystyle \omega _{\text{free}}} is constant).
Consider sinusoidal signals
and a simple one-pole RC circuit as a filter. The time-domain model takes the form
The PD characteristic for these signals is equal [ 21 ] to
Hence the phase domain model takes the form
This system of equations is equivalent to the equation of a mathematical pendulum
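The qualitative behaviour can be seen numerically. The Python sketch below integrates a further simplified first-order phase model (a sinusoidal detector characteristic with the loop filter omitted); the gain and frequency offset are illustrative values, not taken from the article. For an offset within the loop's range, the phase error settles to a constant value, arcsin(omega_delta / K).

```python
import numpy as np

# Simplified first-order loop:  d(theta_delta)/dt = omega_delta - K*sin(theta_delta)
# (illustrative values; lock requires |omega_delta| <= K)
K = 2 * np.pi * 100.0            # loop gain, rad/s
omega_delta = 2 * np.pi * 30.0   # frequency offset, rad/s
theta = 1.0                      # initial phase error, rad
dt = 1e-5
for _ in range(200_000):         # integrate for 2 s with forward Euler
    theta += dt * (omega_delta - K * np.sin(theta))

print("steady-state phase error:", theta % (2 * np.pi))
print("predicted arcsin(omega_delta/K):", np.arcsin(omega_delta / K))
```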
Phase-locked loops can also be analyzed as control systems by applying the Laplace transform . The loop response can be written as
θ o / θ i = K p K v F ( s ) / ( s + K p K v F ( s ) ) {\displaystyle {\frac {\theta _{o}}{\theta _{i}}}={\frac {K_{p}K_{v}F(s)}{s+K_{p}K_{v}F(s)}}}
where θ o {\displaystyle \theta _{o}} is the output phase, θ i {\displaystyle \theta _{i}} is the input phase, K p {\displaystyle K_{p}} is the phase detector gain (volts per radian), K v {\displaystyle K_{v}} is the VCO gain (radians per volt-second), and F ( s ) {\displaystyle F(s)} is the loop filter transfer function (dimensionless).
The loop characteristics can be controlled by inserting different types of loop filters. The simplest filter is a one-pole RC circuit . The loop transfer function in this case is
F ( s ) = 1 / ( 1 + s R C ) {\displaystyle F(s)={\frac {1}{1+sRC}}}
The loop response becomes:
θ o / θ i = ( K p K v / ( R C ) ) / ( s 2 + s / ( R C ) + K p K v / ( R C ) ) {\displaystyle {\frac {\theta _{o}}{\theta _{i}}}={\frac {K_{p}K_{v}/(RC)}{s^{2}+s/(RC)+K_{p}K_{v}/(RC)}}}
This is the form of a classic harmonic oscillator . The denominator can be related to that of a second order system:
s 2 + 2 ζ ω n s + ω n 2 {\displaystyle s^{2}+2\zeta \omega _{n}s+\omega _{n}^{2}}
where ζ {\displaystyle \zeta } is the damping factor and ω n {\displaystyle \omega _{n}} is the natural frequency of the loop.
For the one-pole RC filter,
ω n = K p K v / ( R C ) {\displaystyle \omega _{n}={\sqrt {K_{p}K_{v}/(RC)}}}
ζ = 1 / ( 2 K p K v R C ) {\displaystyle \zeta ={\frac {1}{2{\sqrt {K_{p}K_{v}RC}}}}}
The loop natural frequency is a measure of the response time of the loop, and the damping factor is a measure of the overshoot and ringing. Ideally, the natural frequency should be high and the damping factor should be near 0.707 (a value sometimes loosely called critical damping, although strictly critical damping corresponds to ζ = 1 {\displaystyle \zeta =1} ). With a single pole filter, it is not possible to control the loop frequency and damping factor independently. For the case of ζ = 0.707 {\displaystyle \zeta =0.707} ,
R C = 1 / ( 2 K p K v ) {\displaystyle RC={\frac {1}{2K_{p}K_{v}}}}
ω n = 2 K p K v {\displaystyle \omega _{n}={\sqrt {2}}\,K_{p}K_{v}}
A slightly more effective filter, the lag-lead filter , includes one pole and one zero. This can be realized with two resistors and one capacitor. The transfer function for this filter is
F ( s ) = ( 1 + s C R 2 ) / ( 1 + s C ( R 1 + R 2 ) ) {\displaystyle F(s)={\frac {1+sCR_{2}}{1+sC(R_{1}+R_{2})}}}
This filter has two time constants
τ 1 = C ( R 1 + R 2 ) , τ 2 = C R 2 {\displaystyle \tau _{1}=C(R_{1}+R_{2}),\qquad \tau _{2}=CR_{2}}
Substituting above yields the following natural frequency and damping factor
ω n = K p K v / τ 1 {\displaystyle \omega _{n}={\sqrt {K_{p}K_{v}/\tau _{1}}}}
ζ = ( ω n / 2 ) ( τ 2 + 1 / ( K p K v ) ) {\displaystyle \zeta ={\frac {\omega _{n}}{2}}\left(\tau _{2}+{\frac {1}{K_{p}K_{v}}}\right)}
The loop filter components can be calculated independently for a given natural frequency and damping factor
τ 1 = K p K v / ω n 2 {\displaystyle \tau _{1}=K_{p}K_{v}/\omega _{n}^{2}}
τ 2 = 2 ζ / ω n − 1 / ( K p K v ) {\displaystyle \tau _{2}={\frac {2\zeta }{\omega _{n}}}-{\frac {1}{K_{p}K_{v}}}}
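A small Python sketch of this calculation, assuming the lag-lead approximation above with combined gain K = K_p K_v; the gains, target bandwidth, and capacitor value below are placeholders, not recommendations.

```python
import math

def lag_lead_components(omega_n, zeta, k_p, k_v, c_farads):
    """Time constants and resistor values for the lag-lead loop filter, from
    omega_n = sqrt(K/tau1) and zeta = (omega_n/2)*(tau2 + 1/K), with K = k_p*k_v."""
    k = k_p * k_v
    tau1 = k / omega_n ** 2
    tau2 = 2.0 * zeta / omega_n - 1.0 / k
    r2 = tau2 / c_farads
    r1 = tau1 / c_farads - r2
    return tau1, tau2, r1, r2

# Placeholder numbers: K_p = 0.8 V/rad, K_v = 2*pi*1e6 rad/(s*V), C = 100 nF,
# target natural frequency 5 kHz and damping factor 0.707
tau1, tau2, r1, r2 = lag_lead_components(
    omega_n=2 * math.pi * 5e3, zeta=0.707,
    k_p=0.8, k_v=2 * math.pi * 1e6, c_farads=100e-9)
print(f"tau1 = {tau1:.3e} s, tau2 = {tau2:.3e} s, R1 = {r1:.0f} ohm, R2 = {r2:.0f} ohm")
```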
Real-world loop filter design can be much more complex, for example using higher-order filters to reduce various types or sources of phase noise (see the D. Banerjee reference below).
Digital phase locked loops can be implemented in hardware, using integrated circuits such as a CMOS 4046. However, with microcontrollers becoming faster, it may make sense to implement a phase locked loop in software for applications that do not require locking onto signals in the MHz range or faster, such as precisely controlling motor speeds. Software implementation has several advantages, including easy customization of the feedback loop, such as changing the multiplication or division ratio between the signal being tracked and the output oscillator. Furthermore, a software implementation is useful to understand and experiment with. An example of a phase-locked loop implemented using a phase-frequency detector is presented in MATLAB, as this type of phase detector is robust and easy to implement.
In this example, an array tracksig is assumed to contain a reference signal to be tracked. The oscillator is implemented by a counter, with the most significant bit of the counter indicating the on/off status of the oscillator. This code simulates the two D-type flip-flops that comprise a phase-frequency comparator. When either the reference or signal has a positive edge, the corresponding flip-flop switches high. Once both reference and signal are high, both flip-flops are reset. Which flip-flop is high determines at that instant whether the reference or signal leads the other. The error signal is the difference between these two flip-flop values. The pole-zero filter is implemented by adding the error signal and its derivative to the filtered error signal. This in turn is integrated to find the oscillator frequency.
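The article's original example code is written in MATLAB and is not reproduced here. The following is a rough Python transcription of the algorithm as described in the preceding paragraph; the function name, counter width, filter constants, and sign conventions are illustrative assumptions and would need tuning against a real tracksig array.

```python
import numpy as np

def software_pll(tracksig, prop=1 / 128, deriv=64, nbits=16):
    """Software PLL as described above: a counter-based oscillator whose MSB is
    the local reference, a two-flip-flop phase-frequency detector, and a
    pole-zero filter whose output is integrated into the oscillator frequency.
    `tracksig` is a 0/1 array holding the signal to be tracked."""
    mod = 2 ** nbits
    phase, freq = 0, 0.0            # counter state and fixed-point frequency
    q_ref = q_sig = False           # the two PFD "flip-flops"
    last_ref = last_sig = False
    last_err = 0
    vco_freq = np.zeros(len(tracksig))
    for i, sig in enumerate(np.asarray(tracksig, dtype=bool)):
        # Counter-based oscillator; its most significant bit is the output.
        phase = (phase + int(freq // mod)) % mod
        ref = phase < mod // 2
        # Phase-frequency detector: each flip-flop is set on a positive edge
        # of its input; both are cleared once both are set.
        reset = not (q_ref and q_sig)
        q_ref = (q_ref or (ref and not last_ref)) and reset
        q_sig = (q_sig or (sig and not last_sig)) and reset
        last_ref, last_sig = ref, sig
        err = int(q_ref) - int(q_sig)           # which input currently leads
        # Pole-zero filter: error plus a scaled derivative of the error.
        filtered = err + (err - last_err) * deriv
        last_err = err
        # Integrate the filtered error into the oscillator frequency.
        freq -= mod * filtered * prop
        vco_freq[i] = freq / mod
    return vco_freq
```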
In practice, one would likely insert other operations into the feedback of this phase-locked loop. For example, if the phase locked loop were to implement a frequency multiplier, the oscillator signal could be divided in frequency before it is compared to the reference signal. | https://en.wikipedia.org/wiki/Phase-locked_loop |
The terms hold-in range , pull-in range (acquisition range), and lock-in range are widely used by engineers for the concepts of frequency deviation ranges within which phase-locked loop -based circuits can achieve lock under various additional conditions.
The classic books on phase-locked loops , [ 1 ] [ 2 ] published in 1966, introduced such concepts as the hold-in, pull-in, and lock-in ranges, along with other frequency ranges for which a PLL can achieve lock. These concepts are widely used nowadays (see, e.g., contemporary engineering literature [ 3 ] [ 4 ] and other publications). Usually, only non-strict definitions are given for these concepts in the engineering literature.
Many years of using definitions based on the above concepts have led to the advice given in a handbook on synchronization and communications, namely to check the definitions carefully before using them. [ 5 ] Later, some rigorous mathematical definitions were given in [ 6 ] [ 7 ] .
In the 1st edition of his well-known work, Phaselock Techniques , Floyd M. Gardner introduced a lock-in concept: [ 8 ] "If, for some reason, the frequency difference between input and VCO is less than the loop bandwidth, the loop will lock up almost instantaneously without slipping cycles. The maximum frequency difference for which this fast acquisition is possible is called the lock-in frequency ." His notion of the lock-in frequency and the corresponding definition of the lock-in range have become popular and are nowadays given in various engineering publications. However, since even for zero frequency difference there may exist initial states of the loop such that cycle slipping takes place during the acquisition process, consideration of the initial state of the loop is of utmost importance for cycle slip analysis; therefore, Gardner's concept of lock-in frequency lacked rigor and required clarification.
In the 2nd edition of his book, Gardner stated: "there is no natural way to define exactly any unique lock-in frequency", and he wrote that "despite its vague reality, lock-in range is a useful concept". [ 9 ] [ 10 ]
Note that in general ω Δ free ≠ ω Δ ( 0 ) {\displaystyle \omega _{\Delta }^{\text{free}}\neq \omega _{\Delta }(0)} , because ω Δ ( 0 ) {\displaystyle \omega _{\Delta }(0)} also depends on the initial input of the VCO.
Definition of locked state
In a locked state: 1) the phase error fluctuations are small and the frequency error is small; 2) the PLL approaches the same locked state after small perturbations of the phases and the filter state.
Definition of hold-in range.
A largest interval of frequency deviations 0 ≤ | ω Δ free | ≤ ω h {\displaystyle 0\leq \left|\omega _{\Delta }^{\text{free}}\right|\leq \omega _{h}} for which a locked state exists is called a hold-in range , and ω h {\displaystyle \omega _{h}} is called hold-in frequency. [ 6 ] [ 7 ]
A value of frequency deviation belongs to the hold-in range if the loop re-achieves a locked state after small perturbations of the filter's state and of the phases and frequencies of the VCO and the input signals. This effect is also called steady-state stability . In addition, for a frequency deviation within the hold-in range, after small changes in the input frequency the loop re-achieves a new locked state (tracking process).
Also called acquisition range, capture range. [ 11 ]
Assume that the loop power supply is initially switched off and then at t = 0 {\displaystyle t=0} the power is switched on, and assume that the initial frequency difference is sufficiently large. The loop may not lock within one beat note, but the VCO frequency will be slowly tuned toward the reference frequency (acquisition process). This effect is also called transient stability. The pull-in range is used to name such frequency deviations that make the acquisition process possible (see, for example, explanations in Gardner (1966 , p. 40) and Best (2007 , p. 61)).
Definition of pull-in range.
Pull-in range is a largest interval of frequency deviations 0 ≤ | ω Δ free | ≤ ω p {\displaystyle 0\leq \left|\omega _{\Delta }^{\text{free}}\right|\leq \omega _{p}} such that PLL acquires lock for arbitrary initial phase, initial frequency, and filter state. Here ω p {\displaystyle \omega _{p}} is called pull-in frequency. [ 6 ] [ 7 ] [ 12 ]
The difficulties of reliable numerical analysis of the pull-in range may be caused by the presence of hidden attractors in dynamical model of the circuit. [ 13 ] [ 14 ] [ 15 ]
Assume that the PLL is initially locked. Then the reference frequency ω 1 {\displaystyle \omega _{1}} is suddenly changed in an abrupt manner (step change). The pull-in range guarantees that the PLL will eventually synchronize; however, this process may take a long time. Such a long acquisition process is called cycle slipping.
If the difference between the initial and final phase deviation is larger than 2 π {\displaystyle 2\pi } , we say that cycle slipping takes place.
Here, sometimes, the limit of the difference or the maximum of the difference is considered. [ 16 ]
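Cycle slipping can be illustrated with a crude numerical experiment. The Python sketch below (illustrative only, not from the article) integrates a simplified first-order phase model after an abrupt frequency step and counts how many whole 2π cycles the phase error slips; in this first-order model the loop either locks without slipping or slips indefinitely, whereas the higher-order loops discussed above can slip a finite number of cycles before eventually locking.

```python
import numpy as np

def cycle_slips_after_step(k_loop, omega_step, t_end=0.5, dt=1e-6):
    """Integrate d(theta)/dt = omega_step - k_loop*sin(theta) from theta = 0
    and report how many whole 2*pi cycles the phase error has slipped."""
    theta = 0.0
    for _ in range(int(t_end / dt)):
        theta += dt * (omega_step - k_loop * np.sin(theta))
    return int(abs(theta) // (2 * np.pi))

k_loop = 2 * np.pi * 100.0                  # illustrative loop gain, rad/s
for step_hz in (50.0, 99.0, 150.0):
    slips = cycle_slips_after_step(k_loop, 2 * np.pi * step_hz)
    print(f"frequency step of {step_hz:5.1f} Hz -> {slips} cycle slip(s)")
```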
Definition of lock-in range.
If the loop is in a locked state, then after an abrupt change of ω Δ free {\displaystyle \omega _{\Delta }^{\text{free}}} within a lock-in range | ω Δ free | ≤ ω ℓ {\displaystyle \left|\omega _{\Delta }^{\text{free}}\right|\leq \omega _{\ell }} , the PLL acquires lock without cycle slipping. Here ω ℓ {\displaystyle \omega _{\ell }} is called lock-in frequency. [ 6 ] [ 7 ] [ 17 ] | https://en.wikipedia.org/wiki/Phase-locked_loop_range
In the late 20th and early 21st century, there has been a global movement towards the phase-out of polystyrene foam as a single use plastic (SUP). Early bans of polystyrene foam were intended to eliminate ozone-depleting chlorofluorocarbons (CFCs), formerly a major component.
Expanded polystyrene, often termed Styrofoam , is a contributor of microplastics from both land and maritime activities. Polystyrene is not biodegradable but is susceptible to photo-oxidation , and degrades slowly in the ocean as microplastic marine debris . Animals do not recognize polystyrene foam as an artificial material, may mistake it for food, and show toxic effects after substantial exposure.
Full or partial bans of expanded and polystyrene foam commonly target disposable food packaging . Such bans have been enacted through national legislation globally, and also at sub-national or local levels in many countries.
China banned expanded polystyrene takeout/takeaway containers and tableware in 1999, but later revoked the policy in 2013 amidst industry lobbying. [ 1 ] Haiti banned foam food containers in 2012 to reduce waste in canals and roadside drains. In 2019, the European Parliament voted 560 to 35 to ban all food and beverage containers made from expanded polystyrene throughout the European Union member states. [ 2 ] Canada amended its 'Canadian Environmental Protection Act, 1999' in 2022 to prohibit foodservice ware made of expanded or extruded polystyrene, and also polyvinyl chloride, black colored plastics, or oxo-degraded plastics. [ 3 ]
In Australia , over 97% of the population live in an area that bans expanded polystyrene. Between 2021 and 2023, the Australian Capital Territory, New South Wales, Queensland, South Australia, Victoria, and Western Australia enacted bans. [ 59 ] [ 60 ] [ 61 ] [ 62 ] [ 63 ] [ 64 ]
Nigeria's states of Lagos and Abia introduced bans in January 2024, with an initial transition period of three weeks. [ 65 ] The state of Oyo introduced a ban in March 2024. [ 66 ]
Municipal bans in the Philippines are in effect in Bailen, [ 67 ] Boracay, [ 68 ] Caloocan, [ 69 ] Cordova, [ 70 ] El Nido, [ 71 ] Las Piñas, [ 72 ] Makati, [ 73 ] Mandaluyong City, [ 74 ] Muntinlupa, [ 75 ] Quezon City, [ 76 ] and Tacloban. [ 77 ]
In the United Arab Emirates , the municipal government of Dubai announced a ban affecting polystyrene in 2025, and all single-use plastic food containers in 2026. [ 78 ]
As of February 2025, 11 U.S. states and two territories have passed statewide legislation to explicitly ban polystyrene foam.
In Hawaii , a de facto ban is in effect after every county enacted polystyrene bans except state-administered Kalawao County . Bans in Hawaii County took effect July 2019, followed by Kauai County , Maui County , and Honolulu County in 2022. [ 91 ] [ 92 ] [ 93 ] Maui separately banned polystyrene foam coolers, and the sale or rental of disposable bodyboards in 2022. [ 94 ] [ 95 ]
In California , polystyrene is de jure banned as of January 2025, resulting from the state's legislature passing SB54 in June 2022 as the Plastic Pollution Prevention and Packaging Producer Responsibility Act. [ 96 ] The law codifies extended producer responsibility (EPR) requirements for plastics, including a requirement that polystyrene be banned if recycling rates do not reach 25% by 2025. Recycling rates averaged 6% at passage, leading some to call the law a 'de facto ban', anticipating an inability to comply. [ 97 ] [ 98 ] As of February 2025, CalRecycle, the state agency regulating SB 54, had not announced enforcement mechanisms to begin implementation of the law. [ 99 ]
Local bans have been enacted elsewhere, including in many large and small cities within the US.
As of February 2025, proposed legislation banning polystyrene has previously passed at least one legislative chamber in two states and one territory.
In Connecticut , SB 118 passed the state Senate in April 2022, but died when the session ended. [ 157 ] In Illinois , the state House passed HB2376 on March 21, 2023, which also died when the session ended. [ 158 ] The territory of the Northern Mariana Islands passed HB21-89 in its House of Representatives in 2020. [ 159 ]
In September 2021, Florida introduced a proposed phaseout of polystyrene foam food packaging . [ 160 ] Commissioner of Agriculture Nikki Fried , whose Florida Department of Agriculture and Consumer Services oversees food safety in Florida, proposed a rule to phase out polystyrene in 40,000 grocery stores, food markets, convenience stores, and gas stations that the agency regulates in Florida. The Florida Legislature will consider the proposed rule in 2022. [ 161 ] | https://en.wikipedia.org/wiki/Phase-out_of_polystyrene_foam |
Phase-space representation of quantum state vectors is a formulation of quantum mechanics elaborating the phase-space formulation with a Hilbert space. It "is obtained within the framework of the relative-state formulation. For this purpose, the Hilbert space of a quantum system is enlarged by introducing an auxiliary quantum system. Relative-position state and relative-momentum state are defined in the extended Hilbert space of the composite quantum system and expressions of basic operators such as canonical position and momentum operators, acting on these states, are obtained." [ 1 ] Thus, it is possible to assign a meaning to the wave function in phase space, ψ ( x , p , t ) {\displaystyle \psi (x,p,t)} , as a quasiamplitude, associated with a quasiprobability distribution .
The first wave-function approach to quantum mechanics in phase space was introduced by Torres-Vega and Frederick in 1990 [ 2 ] (also see [ 3 ] [ 4 ] [ 5 ] ). It is based on a generalised Husimi distribution .
In 2004 Oliveira et al. developed a new wave-function formalism in phase space where the wave-function is associated with the Wigner quasiprobability distribution by means of the Moyal product . [ 6 ] An advantage might be that off-diagonal Wigner functions used in superpositions are treated in an intuitive way, ψ 1 ⋆ ψ 2 {\displaystyle \psi _{1}\star \psi _{2}} ; gauge theories are also treated in an operator form. [ 7 ] [ 8 ]
Instead of thinking in terms of multiplication of functions using the star product, we can think in terms of operators acting on functions in phase space.
For the Torres-Vega and Frederick approach, the phase-space operators are
with
and
For Oliveira's approach, the phase-space operators are
with
In the general case [ 9 ] [ 1 ]
and
with γ β − α δ = 1 {\displaystyle \gamma \beta -\alpha \delta =1} , where α {\displaystyle \alpha } , β {\displaystyle \beta } , γ {\displaystyle \gamma } and δ {\displaystyle \delta } are constants.
These operators satisfy the uncertainty principle :
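The underlying commutation relation can be checked symbolically. The sketch below uses the Wigner-type operators quoted later in this article, x̂ = x + (iℏ/2)∂_p and p̂ = p − (iℏ/2)∂_x, and verifies that their commutator acting on an arbitrary phase-space function returns iℏ times that function; presumably the constraint γβ − αδ = 1 on the general operators serves to preserve the same relation.

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar')
f = sp.Function('f')(x, p)          # arbitrary test function on phase space

# Wigner-type phase-space operators (the form quoted later in the article)
X = lambda g: x * g + sp.I * hbar / 2 * sp.diff(g, p)
P = lambda g: p * g - sp.I * hbar / 2 * sp.diff(g, x)

commutator = sp.simplify(X(P(f)) - P(X(f)))
print(commutator)                    # expected output: I*hbar*f(x, p)
```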
To associate the Hilbert space, H {\displaystyle {\mathcal {H}}} , with the phase space Γ {\displaystyle \Gamma } , we consider the set of square-integrable complex functions ψ ( x , p ) {\displaystyle \psi (x,p)} in Γ {\displaystyle \Gamma } , such that
Then we can write ψ ( x , p ) = ⟨ x , p | ψ ⟩ {\displaystyle \psi (x,p)=\langle x,p|\psi \rangle } , with
where ⟨ ψ | {\displaystyle \langle \psi |} is the dual vector of | ψ ⟩ {\displaystyle |\psi \rangle } . This symplectic Hilbert space is denoted by H ( Γ ) {\displaystyle {\mathcal {H}}(\Gamma )} .
An association with the Schrödinger wavefunction can be made by
ψ ( q , p ) = e − i x p / 2 ℏ ∫ g ( x ′ ) ϕ ( x + x ′ ) e − ( i / ℏ ) p x ′ d x ′ {\displaystyle \psi (q,p)=e^{-ixp/2\hbar }\int g(x')\phi (x+x')e^{-(i/\hbar )px'}dx'} ,
letting g ( x ′ ) = ϕ ∗ ( − z 2 ) {\displaystyle g(x')=\phi ^{*}(-{\frac {z}{2}})} , we have
ψ ( q , p ) = ∫ ϕ ( x − z 2 ) ϕ ( x + z 2 ) e − ( i / ℏ ) p z d z {\displaystyle \psi (q,p)=\int \phi (x-{\frac {z}{2}})\phi (x+{\frac {z}{2}})e^{-(i/\hbar )pz}dz} .
Then ψ ( x , p ) ∝ W ( q , p ) {\displaystyle \psi (x,p)\propto W(q,p)} . [ 10 ]
With the operators of position and momentum a Schrödinger picture is developed in phase space
The Torres-Vega–Frederick distribution is
Thus, with the aid of the star product, it is now possible to construct a Schrödinger picture in phase space for ψ ( x , p ) {\displaystyle \psi (x,p)}
Differentiating both sides with respect to t {\displaystyle t} , we have
therefore, the above equation plays the same role as the Schrödinger equation in usual quantum mechanics .
To show that W ( x , p , t ) = ψ ( x , p , t ) ⋆ ψ † ( x , p , t ) {\displaystyle W(x,p,t)=\psi (x,p,t)\star \psi ^{\dagger }(x,p,t)} , we take the 'Schrödinger equation' in phase space and star-multiply on the right by ψ † ( x , p , t ) {\displaystyle \psi ^{\dagger }(x,p,t)}
where H {\displaystyle H} is the classical Hamiltonian of the system. Taking the complex conjugate gives
Subtracting the two equations, we get
which is the time evolution of the Wigner function; for this reason ψ {\displaystyle \psi } is sometimes called the quasiamplitude of probability. The ⋆ {\displaystyle \star } -genvalue is given by the time-independent equation
Star-multiplying by ψ † ( x , p , t ) {\displaystyle \psi ^{\dagger }(x,p,t)} on the right, we obtain
Therefore, the static Wigner distribution function is a ⋆ {\displaystyle \star } -genfunction of the ⋆ {\displaystyle \star } -genvalue equation, a result well known in the usual phase-space formulation of quantum mechanics. [ 11 ] [ 12 ]
In the case where ψ ( q , p ) ∝ W ( q , p ) {\displaystyle \psi (q,p)\propto W(q,p)} , worked out at the beginning of the section, the Oliveira approach and the phase-space formulation are indistinguishable, at least for pure states. [ 10 ]
As stated before, the first wave-function formulation of quantum mechanics in phase space was developed by Torres-Vega and Frederick; [ 2 ] its phase-space operators are given by
and
These operators are obtained by transforming the operators x ¯ TV = x + i ℏ ∂ ∂ p {\displaystyle {\bar {x}}_{{}_{\text{TV}}}=x+i\hbar {\frac {\partial }{\partial p}}} and p ¯ TV = − i ℏ ∂ ∂ q {\displaystyle {\bar {p}}_{{}_{\text{TV}}}=-i\hbar {\frac {\partial }{\partial q}}} (developed in the same article) as
and
where U = exp ( i x p 2 ℏ ) {\displaystyle U=\exp(i{\frac {x\,p}{2\hbar }})} .
This representation is sometimes associated with the Husimi distribution [ 2 ] [ 13 ] and it was shown to coincide with the totality of coherent-state representations for the Heisenberg–Weyl group. [ 14 ]
The Wigner quasiamplitude, ψ {\displaystyle \psi } , and Torres-Vega–Frederick wave-function, ψ TV {\displaystyle \psi _{{}_{\text{TV}}}} , are related by
where x ^ w = x + i ℏ 2 ∂ p {\displaystyle {\widehat {x}}_{w}=x+{\frac {i\hbar }{2}}\partial _{p}} and p ^ w = p − i ℏ 2 ∂ x {\displaystyle {\widehat {p}}_{w}=p-{\frac {i\hbar }{2}}\partial _{x}} . [ 13 ] | https://en.wikipedia.org/wiki/Phase-space_wavefunctions |
In the physical sciences , a phase is a region of material that is chemically uniform, physically distinct, and (often) mechanically separable. In a system consisting of ice and water in a glass jar, the ice cubes are one phase, the water is a second phase, and the humid air is a third phase over the ice and water. The glass of the jar is a different material, in its own separate phase. (See state of matter § Glass .)
More precisely, a phase is a region of space (a thermodynamic system ), throughout which all physical properties of a material are essentially uniform. [ 1 ] [ 2 ] : 86 [ 3 ] : 3 Examples of physical properties include density , index of refraction , magnetization and chemical composition.
The term phase is sometimes used as a synonym for state of matter , but there can be several immiscible phases of the same state of matter (as where oil and water separate into distinct phases, both in the liquid state). It is also sometimes used to refer to the equilibrium states shown on a phase diagram , described in terms of state variables such as pressure and temperature and demarcated by phase boundaries . (Phase boundaries relate to changes in the organization of matter, including for example a subtle change within the solid state from one crystal structure to another, as well as state-changes such as between solid and liquid.) These two usages are not commensurate with the formal definition given above and the intended meaning must be determined in part from the context in which the term is used.
Distinct phases may be described as different states of matter such as gas , liquid , solid , plasma or Bose–Einstein condensate . Useful mesophases between solid and liquid form other states of matter.
Distinct phases may also exist within a given state of matter. As shown in the diagram for iron alloys, several phases exist for both the solid and liquid states. Phases may also be differentiated based on solubility as in polar (hydrophilic) or non-polar (hydrophobic). A mixture of water (a polar liquid) and oil (a non-polar liquid) will spontaneously separate into two phases. Water has a very low solubility (is insoluble) in oil, and oil has a low solubility in water. Solubility is the maximum amount of a solute that can dissolve in a solvent before the solute ceases to dissolve and remains in a separate phase. A mixture can separate into more than two liquid phases and the concept of phase separation extends to solids, i.e., solids can form solid solutions or crystallize into distinct crystal phases. Metal pairs that are mutually soluble can form alloys , whereas metal pairs that are mutually insoluble cannot.
As many as eight immiscible liquid phases have been observed. [ a ] Mutually immiscible liquid phases are formed from water (aqueous phase), hydrophobic organic solvents, perfluorocarbons ( fluorous phase ), silicones, several different metals, and also from molten phosphorus. Not all organic solvents are completely miscible, e.g. a mixture of ethylene glycol and toluene may separate into two distinct organic phases. [ b ]
Phases do not need to macroscopically separate spontaneously. Emulsions and colloids are examples of immiscible phase pair combinations that do not physically separate.
Left to equilibration, many compositions will form a uniform single phase, but depending on the temperature and pressure even a single substance may separate into two or more distinct phases. Within each phase, the properties are uniform but between the two phases properties differ.
Water in a closed jar with an air space over it forms a two-phase system. Most of the water is in the liquid phase, where it is held by the mutual attraction of water molecules. Even at equilibrium molecules are constantly in motion and, once in a while, a molecule in the liquid phase gains enough kinetic energy to break away from the liquid phase and enter the gas phase. Likewise, every once in a while a vapor molecule collides with the liquid surface and condenses into the liquid. At equilibrium, evaporation and condensation processes exactly balance and there is no net change in the volume of either phase.
At room temperature and pressure, the water jar reaches equilibrium when the air over the water has a humidity of about 3%. This percentage increases as the temperature goes up. At 100 °C and atmospheric pressure, equilibrium is not reached until the air is 100% water. If the liquid is heated a little over 100 °C, the transition from liquid to gas will occur not only at the surface but throughout the liquid volume: the water boils.
For a given composition, only certain phases are possible at a given temperature and pressure. The number and type of phases that will form is hard to predict and is usually determined by experiment. The results of such experiments can be plotted in phase diagrams .
The phase diagram shown here is for a single component system. In this simple system, phases that are possible, depend only on pressure and temperature . The markings show points where two or more phases can co-exist in equilibrium. At temperatures and pressures away from the markings, there will be only one phase at equilibrium.
In the diagram, the blue line marking the boundary between liquid and gas does not continue indefinitely, but terminates at a point called the critical point . As the temperature and pressure approach the critical point, the properties of the liquid and gas become progressively more similar. At the critical point, the liquid and gas become indistinguishable. Above the critical point, there are no longer separate liquid and gas phases: there is only a generic fluid phase referred to as a supercritical fluid . In water, the critical point occurs at around 647 K (374 °C or 705 °F) and 22.064 MPa .
An unusual feature of the water phase diagram is that the solid–liquid phase line (illustrated by the dotted green line) has a negative slope. For most substances, the slope is positive as exemplified by the dark green line. This unusual feature of water is related to ice having a lower density than liquid water. Increasing the pressure drives the water into the higher density phase, which causes melting.
Another interesting though not unusual feature of the phase diagram is the point where the solid–liquid phase line meets the liquid–gas phase line. The intersection is referred to as the triple point . At the triple point, all three phases can coexist.
Experimentally, phase lines are relatively easy to map due to the interdependence of temperature and pressure that develops when multiple phases form. Gibbs' phase rule suggests that different phases are completely determined by these variables. Consider a test apparatus consisting of a closed and well-insulated cylinder equipped with a piston. By controlling the temperature and the pressure, the system can be brought to any point on the phase diagram. From a point in the solid stability region (left side of the diagram), increasing the temperature of the system would bring it into the region where a liquid or a gas is the equilibrium phase (depending on the pressure). If the piston is slowly lowered, the system will trace a curve of increasing temperature and pressure within the gas region of the phase diagram. At the point where gas begins to condense to liquid, the direction of the temperature and pressure curve will abruptly change to trace along the phase line until all of the water has condensed.
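The interdependence mentioned above is what Gibbs' phase rule quantifies: F = C − P + 2, where C is the number of components, P the number of coexisting phases, and F the number of intensive variables (such as temperature and pressure) that can still be varied independently. A minimal sketch:

```python
def gibbs_degrees_of_freedom(components, phases):
    """Gibbs' phase rule: F = C - P + 2."""
    return components - phases + 2

# Pure water (one component):
print(gibbs_degrees_of_freedom(1, 2))  # 1 -> along a phase line, fixing T fixes p
print(gibbs_degrees_of_freedom(1, 3))  # 0 -> the triple point is a single (T, p) pair
```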
Between two phases in equilibrium there is a narrow region where the properties are not that of either phase. Although this region may be very thin, it can have significant and easily observable effects, such as causing a liquid to exhibit surface tension . In mixtures, some components may preferentially move toward the interface . In terms of modeling, describing, or understanding the behavior of a particular system, it may be efficacious to treat the interfacial region as a separate phase.
A single material may have several distinct solid states capable of forming separate phases. Water is a well-known example of such a material. For example, water ice is ordinarily found in the hexagonal form ice I h , but can also exist as the cubic ice I c , the rhombohedral ice II , and many other forms. Polymorphism is the ability of a solid to exist in more than one crystal form. For pure chemical elements, polymorphism is known as allotropy . For example, diamond , graphite , and fullerenes are different allotropes of carbon .
When a substance undergoes a phase transition (changes from one state of matter to another) it usually either takes up or releases energy. For example, when water evaporates, the increase in kinetic energy as the evaporating molecules escape the attractive forces of the liquid is reflected in a decrease in temperature. The energy required to induce the phase transition is taken from the internal thermal energy of the water, which cools the liquid to a lower temperature; hence evaporation is useful for cooling. See Enthalpy of vaporization . The reverse process, condensation, releases heat. The heat energy, or enthalpy, associated with a solid to liquid transition is the enthalpy of fusion and that associated with a solid to gas transition is the enthalpy of sublimation .
While phases of matter are traditionally defined for systems in thermal equilibrium, work on quantum many-body localized (MBL) systems has provided a framework for defining phases out of equilibrium. MBL phases never reach thermal equilibrium, and can allow for new forms of order disallowed in equilibrium via a phenomenon known as localization protected quantum order. The transitions between different MBL phases and between MBL and thermalizing phases are novel dynamical phase transitions whose properties are active areas of research. | https://en.wikipedia.org/wiki/Phase_(matter) |
In physics and mathematics , the phase (symbol φ or ϕ) of a wave or other periodic function F {\displaystyle F} of some real variable t {\displaystyle t} (such as time) is an angle -like quantity representing the fraction of the cycle covered up to t {\displaystyle t} . It is expressed in such a scale that it varies by one full turn as the variable t {\displaystyle t} goes through each period (and F ( t ) {\displaystyle F(t)} goes through each complete cycle). It may be measured in any angular unit such as degrees or radians , thus increasing by 360° or 2 π {\displaystyle 2\pi } as the variable t {\displaystyle t} completes a full period. [ 1 ]
This convention is especially appropriate for a sinusoidal function, since its value at any argument t {\displaystyle t} can then be expressed as the sine of the phase φ ( t ) {\displaystyle \varphi (t)} , multiplied by some factor (the amplitude of the sinusoid). (The cosine may be used instead of sine, depending on where one considers each period to start.)
Usually, whole turns are ignored when expressing the phase; so that φ ( t ) {\displaystyle \varphi (t)} is also a periodic function, with the same period as F {\displaystyle F} , that repeatedly scans the same range of angles as t {\displaystyle t} goes through each period. Then, F {\displaystyle F} is said to be "at the same phase" at two argument values t 1 {\displaystyle t_{1}} and t 2 {\displaystyle t_{2}} (that is, φ ( t 1 ) = φ ( t 2 ) {\displaystyle \varphi (t_{1})=\varphi (t_{2})} ) if the difference between them is a whole number of periods.
The numeric value of the phase φ ( t ) {\displaystyle \varphi (t)} depends on the arbitrary choice of the start of each period, and on the interval of angles that each period is to be mapped to.
The term "phase" is also used when comparing a periodic function F {\displaystyle F} with a shifted version G {\displaystyle G} of it. If the shift in t {\displaystyle t} is expressed as a fraction of the period, and then scaled to an angle φ {\displaystyle \varphi } spanning a whole turn, one gets the phase shift , phase offset , or phase difference of G {\displaystyle G} relative to F {\displaystyle F} . If F {\displaystyle F} is a "canonical" function for a class of signals, like sin ( t ) {\displaystyle \sin(t)} is for all sinusoidal signals, then φ {\displaystyle \varphi } is called the initial phase of G {\displaystyle G} .
Let the signal F {\displaystyle F} be a periodic function of one real variable, and T {\displaystyle T} be its period (that is, the smallest positive real number such that F ( t + T ) = F ( t ) {\displaystyle F(t+T)=F(t)} for all t {\displaystyle t} ). Then the phase of F {\displaystyle F} at any argument t {\displaystyle t} is φ ( t ) = 2 π [ [ t − t 0 T ] ] {\displaystyle \varphi (t)=2\pi \left[\!\!\left[{\frac {t-t_{0}}{T}}\right]\!\!\right]}
Here [ [ ⋅ ] ] {\displaystyle [\![\,\cdot \,]\!]\!\,} denotes the fractional part of a real number, discarding its integer part; that is, [ [ x ] ] = x − ⌊ x ⌋ {\displaystyle [\![x]\!]=x-\left\lfloor x\right\rfloor \!\,} ; and t 0 {\displaystyle t_{0}} is an arbitrary "origin" value of the argument, that one considers to be the beginning of a cycle.
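A direct transcription of this formula into Python (the function and variable names are illustrative):

```python
import math

def phase(t, t0, T):
    """Phase in radians, in [0, 2*pi), following phi(t) = 2*pi*frac((t - t0)/T)."""
    frac = (t - t0) / T - math.floor((t - t0) / T)
    return 2 * math.pi * frac

# Example: period T = 1 s, origin t0 = 0; three quarters of the way through a cycle
print(phase(0.75, 0.0, 1.0))   # ~4.712 rad (270 degrees)
print(phase(2.75, 0.0, 1.0))   # same phase two full periods later
```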
This concept can be visualized by imagining a clock with a hand that turns at constant speed, making a full turn every T {\displaystyle T} seconds, and is pointing straight up at time t 0 {\displaystyle t_{0}} . The phase φ ( t ) {\displaystyle \varphi (t)} is then the angle from the 12:00 position to the current position of the hand, at time t {\displaystyle t} , measured clockwise .
The phase concept is most useful when the origin t 0 {\displaystyle t_{0}} is chosen based on features of F {\displaystyle F} . For example, for a sinusoid, a convenient choice is any t {\displaystyle t} where the function's value changes from zero to positive.
The formula above gives the phase as an angle in radians between 0 and 2 π {\displaystyle 2\pi } . To get the phase as an angle between − π {\displaystyle -\pi } and + π {\displaystyle +\pi } , one uses instead φ ( t ) = 2 π ( [ [ t − t 0 T + 1 2 ] ] − 1 2 ) {\displaystyle \varphi (t)=2\pi \left(\left[\!\!\left[{\frac {t-t_{0}}{T}}+{\frac {1}{2}}\right]\!\!\right]-{\frac {1}{2}}\right)}
The phase expressed in degrees (from 0° to 360°, or from −180° to +180°) is defined the same way, except with "360°" in place of "2π".
With any of the above definitions, the phase φ ( t ) {\displaystyle \varphi (t)} of a periodic signal is periodic too, with the same period T {\displaystyle T} : φ ( t + T ) = φ ( t ) for all t . {\displaystyle \varphi (t+T)=\varphi (t)\quad \quad {\text{ for all }}t.}
The phase is zero at the start of each period; that is φ ( t 0 + k T ) = 0 for any integer k . {\displaystyle \varphi (t_{0}+kT)=0\quad \quad {\text{ for any integer }}k.}
Moreover, for any given choice of the origin t 0 {\displaystyle t_{0}} , the value of the signal F {\displaystyle F} for any argument t {\displaystyle t} depends only on its phase at t {\displaystyle t} . Namely, one can write F ( t ) = f ( φ ( t ) ) {\displaystyle F(t)=f(\varphi (t))} , where f {\displaystyle f} is a function of an angle, defined only for a single full turn, that describes the variation of F {\displaystyle F} as t {\displaystyle t} ranges over a single period.
In fact, every periodic signal F {\displaystyle F} with a specific waveform can be expressed as F ( t ) = A w ( φ ( t ) ) {\displaystyle F(t)=A\,w(\varphi (t))} where w {\displaystyle w} is a "canonical" function of a phase angle in 0 to 2π, that describes just one cycle of that waveform; and A {\displaystyle A} is a scaling factor for the amplitude. (This claim assumes that the starting time t 0 {\displaystyle t_{0}} chosen to compute the phase of F {\displaystyle F} corresponds to argument 0 of w {\displaystyle w} .)
Since phases are angles, any whole full turns should usually be ignored when performing arithmetic operations on them. That is, the sum and difference of two phases (in degrees) should be computed by the formulas 360 [ [ α + β 360 ] ] and 360 [ [ α − β 360 ] ] {\displaystyle 360\,\left[\!\!\left[{\frac {\alpha +\beta }{360}}\right]\!\!\right]\quad \quad {\text{ and }}\quad \quad 360\,\left[\!\!\left[{\frac {\alpha -\beta }{360}}\right]\!\!\right]} respectively. Thus, for example, the sum of phase angles 190° + 200° is 30° ( 190 + 200 = 390 , minus one full turn), and subtracting 50° from 30° gives a phase of 340° ( 30 − 50 = −20 , plus one full turn).
Similar formulas hold for radians, with 2 π {\displaystyle 2\pi } instead of 360.
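The same wrap-around arithmetic expressed as a small Python sketch, reproducing the worked examples above (the modulo operation discards whole turns):

```python
def wrap_deg(angle):
    """Reduce an angle in degrees to the range [0, 360)."""
    return angle % 360.0

def phase_sum_deg(a, b):
    return wrap_deg(a + b)

def phase_diff_deg(a, b):
    return wrap_deg(a - b)

print(phase_sum_deg(190, 200))   # 30.0   (390 minus one full turn)
print(phase_diff_deg(30, 50))    # 340.0  (-20 plus one full turn)
```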
The difference φ ( t ) = φ G ( t ) − φ F ( t ) {\displaystyle \varphi (t)=\varphi _{G}(t)-\varphi _{F}(t)} between the phases of two periodic signals F {\displaystyle F} and G {\displaystyle G} is called the phase difference or phase shift of G {\displaystyle G} relative to F {\displaystyle F} . [ 1 ] At values of t {\displaystyle t} when the difference is zero, the two signals are said to be in phase; otherwise, they are out of phase with each other.
In the clock analogy, each signal is represented by a hand (or pointer) of the same clock, both turning at constant but possibly different speeds. The phase difference is then the angle between the two hands, measured clockwise.
The phase difference is particularly important when two signals are added together by a physical process, such as two periodic sound waves emitted by two sources and recorded together by a microphone. This is usually the case in linear systems, when the superposition principle holds.
For arguments t {\displaystyle t} when the phase difference is zero, the two signals will have the same sign and will be reinforcing each other. One says that constructive interference is occurring. At arguments t {\displaystyle t} when the phases are different, the value of the sum depends on the waveform.
For sinusoidal signals, when the phase difference φ ( t ) {\displaystyle \varphi (t)} is 180° ( π {\displaystyle \pi } radians), one says that the phases are opposite , and that the signals are in antiphase . Then the signals have opposite signs, and destructive interference occurs. Conversely, a phase reversal or phase inversion implies a 180-degree phase shift. [ 2 ]
When the phase difference φ ( t ) {\displaystyle \varphi (t)} is a quarter of turn (a right angle, +90° = π/2 or −90° = 270° = −π/2 = 3π/2 ), sinusoidal signals are sometimes said to be in quadrature , e.g., in-phase and quadrature components of a composite signal or even different signals (e.g., voltage and current).
If the frequencies are different, the phase difference φ ( t ) {\displaystyle \varphi (t)} increases linearly with the argument t {\displaystyle t} . The periodic changes from reinforcement and opposition cause a phenomenon called beating .
The phase difference is especially important when comparing a periodic signal F {\displaystyle F} with a shifted and possibly scaled version G {\displaystyle G} of it. That is, suppose that G ( t ) = α F ( t + τ ) {\displaystyle G(t)=\alpha \,F(t+\tau )} for some constants α , τ {\displaystyle \alpha ,\tau } and all t {\displaystyle t} . Suppose also that the origin for computing the phase of G {\displaystyle G} has been shifted too. In that case, the phase difference φ {\displaystyle \varphi } is a constant (independent of t {\displaystyle t} ), called the 'phase shift' or 'phase offset' of G {\displaystyle G} relative to F {\displaystyle F} . In the clock analogy, this situation corresponds to the two hands turning at the same speed, so that the angle between them is constant.
In this case, the phase shift is simply the argument shift τ {\displaystyle \tau } , expressed as a fraction of the common period T {\displaystyle T} (in terms of the modulo operation ) of the two signals and then scaled to a full turn: φ = 2 π [ [ τ T ] ] . {\displaystyle \varphi =2\pi \left[\!\!\left[{\frac {\tau }{T}}\right]\!\!\right].}
If F {\displaystyle F} is a "canonical" representative for a class of signals, like sin ( t ) {\displaystyle \sin(t)} is for all sinusoidal signals, then the phase shift φ {\displaystyle \varphi } is called simply the initial phase of G {\displaystyle G} .
Therefore, when two periodic signals have the same frequency, they are always in phase, or always out of phase. Physically, this situation commonly occurs, for many reasons. For example, the two signals may be a periodic soundwave recorded by two microphones at separate locations. Or, conversely, they may be periodic soundwaves created by two separate speakers from the same electrical signal, and recorded by a single microphone. They may be a radio signal that reaches the receiving antenna in a straight line, and a copy of it that was reflected off a large building nearby.
A well-known example of phase difference is the length of shadows seen at different points of Earth. To a first approximation, if F ( t ) {\displaystyle F(t)} is the length seen at time t {\displaystyle t} at one spot, and G {\displaystyle G} is the length seen at the same time at a longitude 30° west of that point, then the phase difference between the two signals will be 30° (assuming that, in each signal, each period starts when the shadow is shortest).
For sinusoidal signals (and a few other waveforms, like square or symmetric triangular), a phase shift of 180° is equivalent to a phase shift of 0° with negation of the amplitude. When two signals with these waveforms, same period, and opposite phases are added together, the sum F + G {\displaystyle F+G} is either identically zero, or is a sinusoidal signal with the same period and phase, whose amplitude is the difference of the original amplitudes.
The phase shift of the cosine function relative to the sine function is +90°. It follows that, for two sinusoidal signals F {\displaystyle F} and G {\displaystyle G} with the same frequency and amplitudes A {\displaystyle A} and B {\displaystyle B} , where G {\displaystyle G} has a phase shift of +90° relative to F {\displaystyle F} , the sum F + G {\displaystyle F+G} is a sinusoidal signal with the same frequency, with amplitude C {\displaystyle C} and phase shift − 90 ∘ < φ < + 90 ∘ {\displaystyle -90^{\circ }<\varphi <+90^{\circ }} from F {\displaystyle F} , such that C = A 2 + B 2 and sin ( φ ) = B / C . {\displaystyle C={\sqrt {A^{2}+B^{2}}}\quad \quad {\text{ and }}\quad \quad \sin(\varphi )=B/C.}
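A quick numerical check of this relation (the amplitudes and frequency below are arbitrary illustrative values):

```python
import numpy as np

A, B, f = 3.0, 4.0, 50.0                      # illustrative amplitudes and frequency
t = np.linspace(0.0, 1.0 / f, 2000, endpoint=False)
F = A * np.sin(2 * np.pi * f * t)             # reference sinusoid
G = B * np.cos(2 * np.pi * f * t)             # +90 degrees relative to F
S = F + G

C = np.sqrt(A**2 + B**2)                      # predicted amplitude of the sum
phi = np.arcsin(B / C)                        # predicted phase shift of the sum
print(round(S.max(), 3), C)                                   # ~5.0 and 5.0
print(np.allclose(S, C * np.sin(2 * np.pi * f * t + phi)))    # True
```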
A real-world example of a sonic phase difference occurs in the warble of a Native American flute . The amplitudes of different harmonic components of the same long-held note on the flute come into dominance at different points in the phase cycle. The phase difference between the different harmonics can be observed on a spectrogram of the sound of a warbling flute. [ 4 ]
Phase comparison is a comparison of the phase of two waveforms, usually of the same nominal frequency. In time and frequency, the purpose of a phase comparison is generally to determine the frequency offset (difference between signal cycles) with respect to a reference. [ 3 ]
A phase comparison can be made by connecting two signals to a two-channel oscilloscope . The oscilloscope will display two sine signals, as shown in the graphic to the right. In the adjacent image, the top sine signal is the test frequency , and the bottom sine signal represents a signal from the reference.
If the two frequencies were exactly the same, their phase relationship would not change and both would appear to be stationary on the oscilloscope display. Since the two frequencies are not exactly the same, the reference appears to be stationary and the test signal moves. By measuring the rate of motion of the test signal, the offset between frequencies can be determined.
Vertical lines have been drawn through the points where each sine signal passes through zero. The bottom of the figure shows bars whose width represents the phase difference between the signals. In this case the phase difference is increasing, indicating that the test signal is lower in frequency than the reference. [ 3 ]
The phase of a simple harmonic oscillation or sinusoidal signal is the value of φ {\textstyle \varphi } in the following functions: x ( t ) = A cos ( 2 π f t + φ ) y ( t ) = A sin ( 2 π f t + φ ) = A cos ( 2 π f t + φ − π 2 ) {\displaystyle {\begin{aligned}x(t)&=A\cos(2\pi ft+\varphi )\\y(t)&=A\sin(2\pi ft+\varphi )=A\cos \left(2\pi ft+\varphi -{\tfrac {\pi }{2}}\right)\end{aligned}}} where A {\textstyle A} , f {\textstyle f} , and φ {\textstyle \varphi } are constant parameters called the amplitude , frequency , and phase of the sinusoid. These signals are periodic with period T = 1 f {\textstyle T={\frac {1}{f}}} , and they are identical except for a displacement of T 4 {\textstyle {\frac {T}{4}}} along the t {\textstyle t} axis. The term phase can refer to several different things: | https://en.wikipedia.org/wiki/Phase_(waves) |
In the United States, an environmental site assessment is a report prepared for a real estate holding that identifies potential or existing environmental contamination liabilities . The analysis, often called an ESA , typically addresses both the underlying land as well as physical improvements to the property. A proportion of contaminated sites are " brownfield sites ." In severe cases, brownfield sites may be added to the National Priorities List where they will be subject to the U.S. Environmental Protection Agency's Superfund program.
The actual sampling of soil, air, groundwater and/or building materials is typically not conducted during a Phase I ESA. The Phase I ESA is generally considered the first step in the process of environmental due diligence . Standards for performing a Phase I site assessment have been promulgated by the US EPA [ 1 ] and are based in part on ASTM Standard E1527-13. [ 2 ]
If a site is considered contaminated, a Phase II environmental site assessment may be conducted, ASTM test E1903, a more detailed investigation involving chemical analysis for hazardous substances and/or petroleum hydrocarbons. [ 3 ]
As early as the 1970s specific property purchasers in the United States undertook studies resembling current Phase I ESAs, to assess risks of ownership of commercial properties which had a high degree of risk from prior toxic chemical use or disposal. Many times these studies were preparatory to understanding the nature of cleanup costs if the property was being considered for redevelopment or change of land use .
In the United States of America demand increased dramatically for this type of study in the 1980s following judicial decisions related to liability of property owners to effect site cleanup. Interpreting the Comprehensive Environmental Response, Compensation and Liability Act of 1980 (CERCLA), the U.S. courts have held that a buyer, lessor, or lender may be held responsible for remediation of hazardous substance residues, even if a prior owner caused the contamination; performance of a Phase I Environmental Site Assessment, according to the courts' reasoning, creates a safe harbor , known as the 'Innocent Landowner Defense'. The original standard under CERCLA for establishing an innocent landowner defense was based upon the requirement to perform an "all appropriate inquiry" prior to ownership transfer. At such time, engineering firms started performing professional engineering reports under a variety of monikers including "Environmental Audits", "Property Transfer Screens", "Environmental Due-Diligence Reports" and "Environmental Site Assessments". In 1991, Impact Environmental coined the industry term "Environmental Site Assessment" to replace the commonly used "Environmental Audit" for property transfer studies. A 1990 Court decision, No. 89-8094 (11th Cir. May 23, 1990), United States v. Fleet Factors Corp. found that a secured creditor can be liable for property contamination under the strict, joint and several liability scheme outlined in CERCLA. As a result of this decision, banks elevated their demands for pre-transfer all appropriate inquiries to hedge against financial risk. Starting in the New York market among banks and regional environmental consulting engineers, the term-of-choice evolved to be the Phase I Environmental Site Assessment.
In 1998 the necessity of performing a Phase I ESA was underscored by congressional action in passing the Superfund Cleanup Acceleration Act of 1998 . [ 4 ] This act requires purchasers of commercial property to perform a Phase I study meeting the specific standard of ASTM E1527: Standard Practice for Environmental Site Assessments: Phase I Environmental Site Assessment Process.
The most recent standard is "Standards and Practices for All Appropriate Inquiries" 40 Code of Federal Regulations, Section 312 [ 1 ] which drew heavily from ASTM E1527-13, which is the ASTM Standard for conducting 'All Appropriate Inquiry' (AAI) for the environmental assessment of a real property. Previous guidances regarding the ASTM E1527 standard were ASTM E1527-97, ASTM E1527-00, and ASTM E1527-05.
Residential property purchasers are only required to conduct a site inspection and chain of title survey.
A variety of reasons for a Phase I study to be performed exist, the most common being: [ 5 ]
Scrutiny of the land includes examination of potential soil contamination , groundwater quality, surface water quality, vapor intrusion, and sometimes issues related to hazardous substance uptake by biota . The examination of a site may include: definition of any chemical residues within structures; identification of possible asbestos containing building materials ; inventory of hazardous substances stored or used on site; assessment of mold and mildew ; and evaluation of other indoor air quality parameters. [ 6 ]
Depending upon precise protocols utilized, there are a number of variations in the scope of a Phase I study. The tasks listed here are common to almost all Phase I ESAs: [ 7 ]
In most cases, the public file searches, historical research and chain-of-title examinations are outsourced to information services that specialize in such activities. Non-Scope Items in a Phase I Environmental Site Assessment can include visual inspections or records review searches for:
Observations of Non-scope Items can be reported as "findings" if requested by the report user, however, these items do not constitute recognized environmental conditions.
Often a multi-disciplinary approach is taken in compiling all the components of a Phase I study, since skills in chemistry , atmospheric physics , geology , microbiology and even botany are frequently required. Many of the preparers are environmental scientists who have been trained to integrate these diverse disciplines. Many states have professional registrations which are applicable to the preparers of Phase I ESAs; for example, the state of California had a registration entitled "California Registered Environmental Assessor Class I or Class II" until July 2012, when it removed this REA certification program due to budget cuts. [ 8 ]
Under ASTM E 1527-13 parameters were set forth as to who is qualified to perform Phase I ESAs. An Environmental Professional is someone with: [ 9 ]
A person not meeting one or more of those qualifications may assist in the conduct of a Phase I ESA if the individual is under the supervision or responsible charge of a person meeting the definition of an Environmental Professional when conducting such activities.
Most site assessments are conducted by private companies independent of the owner or potential purchaser of the land.
While there are myriad sites that have been analyzed to date within the United States, the following list will serve as examples of the subject matter:
In Japan, with the passage of the 2003 Soil Contamination Countermeasures Law , there is a strong movement to conduct Phase I studies more routinely. At least one jurisdiction in Canada ( Ontario ) now requires the completion of a Phase I prior to the transfer of some types of industrial properties. Some parts of Europe began to conduct Phase I studies on selected properties in the 1990s, but still lack the comprehensive attention given to virtually all major real estate transactions in the USA.
In the United Kingdom contaminated land regulation is outlined in the Environment Act 1995 . The Environment Agency of England and Wales have produced a set of guidance; CLEA a standardized approach to the assessment of land contamination. A Phase 1 Desktop Study is often required in support of a planning application. [ 10 ] These reports must be assembled by a "competent person".
There are several other report types that have some resemblance in name or degree of detail to the Phase I Environmental Site Assessment:
Phase II Environmental Site Assessment is an "intrusive" investigation which collects original samples of soil, groundwater or building materials to analyze for quantitative values of various contaminants. [ 11 ] This investigation is normally undertaken when a Phase I ESA determines a likelihood of site contamination. The most frequent substances tested are petroleum hydrocarbons , heavy metals , pesticides , solvents , asbestos and mold.
Phase III Environmental Site Assessment is an investigation involving remediation of a site. Phase III investigations aim to delineate the physical extent of contamination based on recommendations made in Phase II assessments. Phase III investigations may involve intensive testing, sampling, and monitoring, "fate and transport" studies and other modeling, and the design of feasibility studies for remediation and remedial plans. This study normally involves assessment of alternative cleanup methods, costs and logistics. The associated reportage details the steps taken to perform site cleanup and the follow-up monitoring for residual contaminants.
Limited Phase I Environmental Site Assessment is a truncated Phase I ESA, normally omitting one or more work segments such as the site visit or certain of the file searches. When the field visit component is deleted the study is sometimes called a Transaction Screen .
Environmental Assessment has little to do with the subject of hazardous substance liability, but rather is a study preliminary to an Environmental Impact Statement , which identifies environmental impacts of a land development action and analyzes a broad set of parameters including biodiversity , environmental noise , water pollution , air pollution , traffic , geotechnical risks, visual impacts, public safety issues and also hazardous substance issues.
SBA Phase I Environmental Site Assessment means all properties purchased through the United States Small Business Administration 's 504 Fixed Asset Financing Program require specific and often higher due diligence requirements than regular Real Estate transactions. Due diligence requirements are determined according to the NAICS codes associated with the prior business use of the property. There are 58 specific NAICS codes that require Phase I Investigations. These include, but are not limited to: Funeral Homes, Dry Cleaners, and Gas Stations . The SBA also requires Phase II Environmental Site Assessment to be performed on any Gas Station that has been in operation for more than 5 years. The additional cost to perform this assessment cannot be included in the amount requested in the loan and adds significant costs to the borrower.
Freddie Mac/Fannie Mae Phase I Environmental Site Assessments are two specialized types of Phase I ESAs that are required when a loan is financed through Freddie Mac or Fannie Mae. The scopes of work are based on the ASTM E1527-05 Standard but have specific requirements including the following: the percent and scope of the property inspection; requirements for radon testing; asbestos and lead-based paint testing and operations-and-maintenance (O&M) plans to manage the hazards in place; lead in drinking water; and mold inspection. For condominiums, Fannie Mae requires a Phase I ESA anytime the initial underwriting analysis indicates environmental concerns.
HUD Phase I Environmental Site Assessment : The U.S. Department of Housing and Urban Development also requires a Phase I ESA for any condominium under construction that wishes to offer an FHA insured loan to potential buyers. | https://en.wikipedia.org/wiki/Phase_I_environmental_site_assessment
Phase Transitions and Critical Phenomena is a 20-volume series of books, comprising review articles on phase transitions and critical phenomena , published during 1972-2001. It is "considered the most authoritative series on the topic". [ 1 ]
Volumes 1-6 were edited by Cyril Domb and Melville S. Green , and after Green's death, volumes 7-20 were edited by Domb and Joel Lebowitz . [ 1 ] [ 2 ]
Volume 4 was never published. Volume 5 was published in two parts, as 5A and 5B.
The first volume was praised for its coherent approach. [ 3 ] While its theoretical approach was sound, the first volume remained far from being able to explain experimental results in areas such as structural phase transitions. [ 4 ]
The second volume was praised for being well written, and was suggested as a standard reference. [ 5 ] The third volume was also suggested as an index for researchers. [ 6 ] | https://en.wikipedia.org/wiki/Phase_Transitions_and_Critical_Phenomena |
In observational astronomy , phase angle is the angle between the light incident onto an observed object and the light reflected from the object. In the context of astronomical observations, this is usually the angle Sun -object-observer.
For terrestrial observations, "Sun–object–Earth" is often nearly the same thing as "Sun–object–observer", since the difference depends on the parallax , which in the case of observations of the Moon can be as much as 1°, or two full Moon diameters. With the development of space travel , as well as in hypothetical observations from other points in space, the notion of phase angle became independent of Sun and Earth.
The etymology of the term is related to the notion of planetary phases , since the brightness of an object and its appearance as a "phase" are functions of the phase angle.
The phase angle varies from 0° to 180°. The value of 0° corresponds to the position where the illuminator, the observer, and the object are collinear (all lying along the same line), with the illuminator and the observer on the same side of the object. The value of 180° is the position where the object is between the illuminator and the observer, known as inferior conjunction . Values less than 90° represent backscattering ; values greater than 90° represent forward scattering .
For some objects, such as the Moon (see lunar phases ), Venus and Mercury the phase angle (as seen from the Earth) covers the full 0–180° range. The superior planets cover shorter ranges. For example, for Mars the maximum phase angle is about 45°. For Jupiter, the maximum is 11.1° and for Saturn 6°. [ 1 ]
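As an illustrative aside, the Sun–object–observer angle can be computed directly from position vectors. The following minimal Python sketch assumes heliocentric coordinates (the Sun at the origin); the Mars-like numbers are illustrative choices, not data from this article.

```python
import numpy as np

def phase_angle_deg(r_object, r_observer):
    """Sun-object-observer angle in degrees.

    r_object and r_observer are heliocentric position vectors (any consistent
    unit), so the Sun sits at the origin. The phase angle is the angle at the
    object between the direction to the Sun and the direction to the observer.
    """
    r_object = np.asarray(r_object, dtype=float)
    r_observer = np.asarray(r_observer, dtype=float)
    to_sun = -r_object                      # object -> Sun
    to_observer = r_observer - r_object     # object -> observer
    cos_alpha = np.dot(to_sun, to_observer) / (
        np.linalg.norm(to_sun) * np.linalg.norm(to_observer))
    return np.degrees(np.arccos(np.clip(cos_alpha, -1.0, 1.0)))

# Illustrative geometry: an outer planet at 1.524 AU seen at quadrature from Earth at 1 AU.
print(phase_angle_deg(r_object=[1.0, 1.15, 0.0], r_observer=[1.0, 0.0, 0.0]))
# ~41 degrees, of the same order as the ~45 degree maximum quoted above for Mars.
```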
The brightness of an object is a function of the phase angle. This function is generally smooth, except for the so-called opposition spike near 0°, which does not affect gas giants or bodies with pronounced atmospheres , and for the fading of the object as the angle approaches 180°. This relationship is referred to as the phase curve . | https://en.wikipedia.org/wiki/Phase_angle_(astronomy)
In thermal equilibrium , each phase (i.e. liquid , solid etc.) of physical matter comes to an end at a transitional point, or spatial interface , called a phase boundary , due to the immiscibility of the matter with the matter on the other side of the boundary. This immiscibility is due to at least one difference between the two substances' corresponding physical properties. The behavior of phase boundaries has been a developing subject of interest and an active interdisciplinary research field, called interface science , for almost two centuries, due partly to phase boundaries naturally arising in many physical processes, such as the capillarity effect , the growth of grain boundaries , the physics of binary alloys , and the formation of snow flakes .
One of the oldest problems in the area dates back to Lamé and Clapeyron [ 1 ] who studied the freezing of the ground. Their goal was to determine the thickness of solid crust generated by the cooling of a liquid at constant temperature filling the half-space . In 1889, Stefan, while working on the freezing of the ground developed these ideas further and formulated the two-phase model which came to be known as the Stefan Problem . [ 2 ]
The proof for the existence and uniqueness of a solution to the Stefan problem was developed in many stages. The general existence and uniqueness of solutions in the case d = 3 was proved by Shoshana Kamin .
| https://en.wikipedia.org/wiki/Phase_boundary
A phase converter is a device that converts electric power provided as single phase to multiple phase or vice versa. The majority of phase converters are used to produce three-phase electric power from a single-phase source, thus allowing the operation of three-phase equipment at a site that only has single-phase electrical service. Phase converters are used where three-phase service is not available from the utility provider or is too costly to install. A utility provider will generally charge a higher fee for a three-phase service because of the extra equipment, including transformers, metering, and distribution wire required to complete a functional installation.
Three-phase induction motors may operate adequately on an unbalanced supply if not heavily loaded. This allows various imperfect techniques to be used. A single-phase motor can drive a three-phase generator, which will produce a high-quality three-phase source but at a high cost to the longevity of the system. While there are multiple phase conversion systems in place, the most common types are:
A rotary phase converter is a common way to create three-phase power in an area where three-phase utility power is not available or cannot be brought in. A rotary phase converter uses a control panel with a start circuit and run circuit to create power without excessive voltage. A three-phase motor uses a rotating magnet surrounded by three sets of coils to produce the third leg of power within the idler motor. Some rotary phase converters are digitally controlled, enabling them to produce power that can run voltage-sensitive loads such as a CNC machine, welder, or any other computer-controlled load.
A rotary phase converter does not change the voltage, but it can be paired with a transformer to step the voltage up or down depending on what is needed.
A Digital Phase Converter creates a three-phase power supply from a single-phase supply. A Digital Signal Processor (DSP) is used to control power electronic devices to generate a third leg of voltage, which along with the standard, single-phase voltage from the supply creates a balanced three-phase power supply .
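To make the idea of a balanced three-phase set concrete, here is a minimal Python sketch. The 240 V RMS / 60 Hz figures and the variable names are illustrative assumptions: the single-phase supply is taken as one leg, and a converter must synthesize the other two legs displaced by ±120°.

```python
import numpy as np

# A balanced three-phase set: three sinusoids of equal amplitude spaced 120 degrees apart.
# Leg A is taken to be the single-phase utility supply; a converter must synthesize legs
# B and C. The 240 V RMS / 60 Hz figures are illustrative assumptions.
f = 60.0                        # supply frequency, Hz
V_peak = 240.0 * np.sqrt(2)     # peak value of a 240 V RMS supply
t = np.linspace(0.0, 2.0 / f, 1200, endpoint=False)

leg_a = V_peak * np.sin(2 * np.pi * f * t)                   # from the utility
leg_b = V_peak * np.sin(2 * np.pi * f * t - 2 * np.pi / 3)   # synthesized, -120 degrees
leg_c = V_peak * np.sin(2 * np.pi * f * t + 2 * np.pi / 3)   # synthesized, +120 degrees

# For a balanced set the instantaneous sum is zero, and the line-to-line voltage is
# sqrt(3) times the line-to-neutral voltage.
print(np.allclose(leg_a + leg_b + leg_c, 0.0, atol=1e-9))    # True
print(np.std(leg_a - leg_b) / np.std(leg_a))                 # ~1.732 = sqrt(3)
```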
AC power from the utility is converted to DC , then back to AC using insulated-gate bipolar transistors (IGBTs). [ 2 ] This conversion process allows for the generation of the third leg from the existing power supply.
In one type of digital phase converter, the input rectifier consists of IGBTs being used alongside inductors to create the third leg of power. The IGBTs are controlled by software in the DSP to draw current from the single-phase line in a sinusoidal fashion, charging capacitors on a constant-voltage DC bus. Because the incoming current is sinusoidal, there are no significant harmonics generated back onto the line as there are with the rectifiers found in most VFDs . The controlled rectifier input allows power factor correction to take place.
The second half of the digital phase converter consists of IGBTs that draw on the power previously stored in the DC bus to create an AC voltage that is not sinusoidal. It is a pulse-width modulated (PWM) waveform very high in harmonic distortion. This voltage is then passed through an inductor/capacitor filter system that produces a sine-wave voltage with less than 3% total harmonic distortion (standards for computer grade power allow up to 5% THD). By contrast, VFDs generate a PWM voltage that limits their versatility and makes them unsuitable for many applications. Software in the DSP continually monitors and adjusts this generated voltage to produce a balanced three-phase output at all times. It also provides protective functions by shutting down in case of utility over-voltage and under-voltage or a fault. With the ability to adjust to changing conditions and maintain voltage balance, a digital phase converter can safely and efficiently operate virtually any type of three-phase equipment or any number of multiple loads.
Since Digital Phase Converters are solid-state designs, there are few to no moving parts apart from cooling fans. In turn, this allows digital phase converters to fit into small packages and operate at between 95% and 98% efficiency. These converters also do not draw power when idling, reducing overall costs and increasing longevity.
In Europe , electricity is normally generated as three-phase AC at 50 hertz . Five European countries: Germany , Austria , Switzerland , Norway and Sweden have standardized on single-phase AC at 15 kV 16⅔ Hz for railway electrification . Phase converters are, therefore, used to change both the phase and the frequency . | https://en.wikipedia.org/wiki/Phase_converter
A phase diagram in physical chemistry , engineering , mineralogy , and materials science is a type of chart used to show conditions (pressure, temperature, etc.) at which thermodynamically distinct phases (such as solid, liquid or gaseous states) occur and coexist at equilibrium .
Common components of a phase diagram are lines of equilibrium or phase boundaries , which refer to lines that mark conditions under which multiple phases can coexist at equilibrium. Phase transitions occur along lines of equilibrium. Metastable phases are not shown in phase diagrams as, despite their common occurrence, they are not equilibrium phases.
Triple points are points on phase diagrams where lines of equilibrium intersect. Triple points mark conditions at which three different phases can coexist. For example, the water phase diagram has a triple point corresponding to the single temperature and pressure at which solid, liquid, and gaseous water can coexist in a stable equilibrium ( 273.16 K and a partial vapor pressure of 611.657 Pa ). The pressure on a pressure-temperature diagram (such as the water phase diagram shown) is the partial pressure of the substance in question. [ 1 ]
The solidus is the temperature below which the substance is stable in the solid state. The liquidus is the temperature above which the substance is stable in a liquid state. There may be a gap between the solidus and liquidus; within the gap, the substance consists of a mixture of crystals and liquid (like a " slurry "). [ 2 ]
Working fluids are often categorized on the basis of the shape of their phase diagram.
The simplest phase diagrams are pressure–temperature diagrams of a single simple substance, such as water . The axes correspond to the pressure and temperature . The phase diagram shows, in pressure–temperature space, the lines of equilibrium or phase boundaries between the three phases of solid , liquid , and gas .
The curves on the phase diagram show the points where the free energy (and other derived properties) becomes non-analytic: their derivatives with respect to the coordinates (temperature and pressure in this example) change discontinuously (abruptly). For example, the heat capacity of a container filled with ice will change abruptly as the container is heated past the melting point. The open spaces, where the free energy is analytic , correspond to single phase regions. Single phase regions are separated by lines of non-analytical behavior, where phase transitions occur, which are called phase boundaries .
In the diagram on the right, the phase boundary between liquid and gas does not continue indefinitely. Instead, it terminates at a point on the phase diagram called the critical point . This reflects the fact that, at extremely high temperatures and pressures, the liquid and gaseous phases become indistinguishable, [ 3 ] in what is known as a supercritical fluid . In water, the critical point occurs at around T c = 647.096 K (373.946 °C), p c = 22.064 MPa (217.75 atm) and ρ c = 356 kg/m 3 . [ 4 ]
The existence of the liquid–gas critical point reveals a slight ambiguity in labelling the single phase regions. When going from the liquid to the gaseous phase, one usually crosses the phase boundary, but it is possible to choose a path that never crosses the boundary by going to the right of the critical point. Thus, the liquid and gaseous phases can blend continuously into each other. The solid–liquid phase boundary can only end in a critical point if the solid and liquid phases have the same symmetry group . [ 5 ]
For most substances, the solid–liquid phase boundary (or fusion curve) in the phase diagram has a positive slope so that the melting point increases with pressure. This is true whenever the solid phase is denser than the liquid phase. [ 6 ] The greater the pressure on a given substance, the closer together the molecules of the substance are brought to each other, which increases the effect of the substance's intermolecular forces . Thus, the substance requires a higher temperature for its molecules to have enough energy to break out of the fixed pattern of the solid phase and enter the liquid phase. A similar concept applies to liquid–gas phase changes. [ 7 ]
Water is an exception which has a solid-liquid boundary with negative slope so that the melting point decreases with pressure. This occurs because ice (solid water) is less dense than liquid water, as shown by the fact that ice floats on water. At a molecular level, ice is less dense because it has a more extensive network of hydrogen bonding which requires a greater separation of water molecules. [ 6 ] Other exceptions include antimony and bismuth . [ 8 ] [ 9 ]
At very high pressures above 50 GPa (500 000 atm), liquid nitrogen undergoes a liquid-liquid phase transition to a polymeric form and becomes denser than solid nitrogen at the same pressure. Under these conditions therefore, solid nitrogen also floats in its liquid. [ 10 ]
The value of the slope d P /d T is given by the Clausius–Clapeyron equation for fusion (melting): [ 11 ] d P /d T = Δ H_fus / ( T Δ V_fus ),
where Δ H fus is the heat of fusion which is always positive, and Δ V fus is the volume change for fusion. For most substances Δ V fus is positive so that the slope is positive. However for water and other exceptions, Δ V fus is negative so that the slope is negative.
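As a worked example of the fusion slope, the following Python sketch evaluates dP/dT = ΔH_fus / (T ΔV_fus) for the melting of ice at 0 °C. The property values are typical textbook figures assumed for illustration, not data taken from this article.

```python
# Clausius-Clapeyron slope dP/dT = dH_fus / (T * dV_fus), evaluated for ice melting at
# 0 degC. The property values below are typical textbook figures, assumed for illustration.
dH_fus = 6.01e3                          # J/mol, enthalpy of fusion of ice
T = 273.15                               # K, normal melting point
M = 18.015e-3                            # kg/mol, molar mass of water
rho_ice, rho_liq = 917.0, 999.8          # kg/m^3
dV_fus = M / rho_liq - M / rho_ice       # m^3/mol; negative, since liquid water is denser

slope = dH_fus / (T * dV_fus)            # Pa/K
print(f"dP/dT = {slope / 1e6:.1f} MPa/K")  # about -13.5 MPa/K: melting point falls with pressure
```

The negative sign reproduces the water anomaly discussed above: increasing the pressure lowers the melting point.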
In addition to temperature and pressure, other thermodynamic properties may be graphed in phase diagrams. Examples of such thermodynamic properties include specific volume , specific enthalpy , or specific entropy . For example, single-component graphs of temperature vs. specific entropy ( T vs. s ) for water/ steam or for a refrigerant are commonly used to illustrate thermodynamic cycles such as a Carnot cycle , Rankine cycle , or vapor-compression refrigeration cycle.
Any two thermodynamic quantities may be shown on the horizontal and vertical axes of a two-dimensional diagram. Additional thermodynamic quantities may each be illustrated in increments as a series of lines—curved, straight, or a combination of curved and straight. Each of these iso- lines represents the thermodynamic quantity at a certain constant value.
It is possible to envision three-dimensional (3D) graphs showing three thermodynamic quantities. [ 12 ] [ 13 ] For example, for a single component, a 3D Cartesian coordinate type graph can show temperature ( T ) on one axis, pressure ( p ) on a second axis, and specific volume ( v ) on a third. Such a 3D graph is sometimes called a p – v – T diagram. The equilibrium conditions are shown as curves on a curved surface in 3D with areas for solid, liquid, and vapor phases and areas where solid and liquid, solid and vapor, or liquid and vapor coexist in equilibrium. A line on the surface called a triple line is where solid, liquid and vapor can all coexist in equilibrium. The critical point remains a point on the surface even on a 3D phase diagram.
An orthographic projection of the 3D p – v – T graph showing pressure and temperature as the vertical and horizontal axes collapses the 3D plot into the standard 2D pressure–temperature diagram. When this is done, the solid–vapor, solid–liquid, and liquid–vapor surfaces collapse into three corresponding curved lines meeting at the triple point, which is the collapsed orthographic projection of the triple line.
Other much more complex types of phase diagrams can be constructed, particularly when more than one pure component is present. In that case, concentration becomes an important variable. Phase diagrams with more than two dimensions can be constructed that show the effect of more than two variables on the phase of a substance. Phase diagrams can use other variables in addition to or in place of temperature, pressure and composition, for example the strength of an applied electrical or magnetic field, and they can also involve substances that take on more than just three states of matter.
One type of phase diagram plots temperature against the relative concentrations of two substances in a binary mixture called a binary phase diagram , as shown at right. Such a mixture can be either a solid solution , eutectic or peritectic , among others. These two types of mixtures result in very different graphs. Another type of binary phase diagram is a boiling-point diagram for a mixture of two components, i. e. chemical compounds . For two particular volatile components at a certain pressure such as atmospheric pressure , a boiling-point diagram shows what vapor (gas) compositions are in equilibrium with given liquid compositions depending on temperature. In a typical binary boiling-point diagram, temperature is plotted on a vertical axis and mixture composition on a horizontal axis.
A two component diagram with components A and B in an "ideal" solution is shown. The construction of a liquid vapor phase diagram assumes an ideal liquid solution obeying Raoult's law and an ideal gas mixture obeying Dalton's law of partial pressure . A tie line from the liquid to the gas at constant pressure would indicate the two compositions of the liquid and gas respectively. [ 14 ]
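A hedged numerical sketch of such a tie-line construction is shown below. It assumes an ideal binary mixture obeying Raoult's and Dalton's laws, with purely illustrative saturation-pressure expressions for two hypothetical components A and B.

```python
import numpy as np

# Ideal binary vapor-liquid equilibrium at fixed total pressure, from Raoult's law
# (p_i = x_i * P_i_sat) and Dalton's law (P = p_A + p_B). The saturation-pressure
# expressions are illustrative placeholders, not data for any real pair of compounds.
P_total = 101325.0                         # Pa

def p_sat_A(T):                            # hypothetical, more volatile component
    return np.exp(21.0 - 3100.0 / T)

def p_sat_B(T):                            # hypothetical, less volatile component
    return np.exp(21.0 - 3600.0 / T)

def tie_line(T):
    """Equilibrium liquid (x_A) and vapor (y_A) mole fractions at temperature T."""
    x_A = (P_total - p_sat_B(T)) / (p_sat_A(T) - p_sat_B(T))  # from P = x_A p_A* + (1 - x_A) p_B*
    y_A = x_A * p_sat_A(T) / P_total                           # Dalton: y_A = p_A / P
    return x_A, y_A

for T in (350.0, 360.0, 370.0):
    x, y = tie_line(T)
    print(f"T = {T:.0f} K: x_A = {x:.2f}, y_A = {y:.2f}")      # the vapor is richer in A
```

Sweeping the temperature between the two pure-component boiling points traces out the bubble-point and dew-point curves of the boiling-point diagram described above.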
A simple example diagram with hypothetical components 1 and 2 in a non- azeotropic mixture is shown at right. The fact that there are two separate curved lines joining the boiling points of the pure components means that the vapor composition is usually not the same as the liquid composition the vapor is in equilibrium with. See Vapor–liquid equilibrium for more information.
In addition to the above-mentioned types of phase diagrams, there are many other possible combinations. Some of the major features of phase diagrams include congruent points, where a solid phase transforms directly into a liquid. There is also the peritectoid , a point where two solid phases combine into one solid phase during cooling. The inverse of this, when one solid phase transforms into two solid phases during cooling, is called the eutectoid .
A complex phase diagram of great technological importance is that of the iron – carbon system for less than 7% carbon (see steel ).
The x-axis of such a diagram represents the concentration variable of the mixture. As the mixtures are typically far from dilute and their density as a function of temperature is usually unknown, the preferred concentration measure is mole fraction . A volume-based measure like molarity would be inadvisable.
A system with three components is called a ternary system. At constant pressure the maximum number of independent variables is three – the temperature and two concentration values. For a representation of ternary equilibria a three-dimensional phase diagram is required. Often such a diagram is drawn with the composition as a horizontal plane and the temperature on an axis perpendicular to this plane. To represent composition in a ternary system an equilateral triangle is used, called Gibbs triangle (see also Ternary plot ).
The temperature scale is plotted on the axis perpendicular to the composition triangle. Thus, the space model of a ternary phase diagram is a right-triangular prism. The prism sides represent corresponding binary systems A-B, B-C, A-C.
However, the most common methods to present phase equilibria in a ternary system are the following:
1) projections on the concentration triangle ABC of the liquidus, solidus, solvus surfaces;
2) isothermal sections;
3) vertical sections. [ 15 ]
Polymorphic and polyamorphic substances have multiple crystal or amorphous phases, which can be graphed in a similar fashion to solid, liquid, and gas phases.
Some organic materials pass through intermediate states between solid and liquid; these states are called mesophases . Attention has been directed to mesophases because they enable display devices and have become commercially important through the so-called liquid-crystal technology. Phase diagrams are used to describe the occurrence of mesophases. [ 17 ] | https://en.wikipedia.org/wiki/Phase_diagram |
For any complex number written in polar form (such as r e^(iθ) ), the phase factor is the complex exponential ( e^(iθ) ), where the variable θ is the phase of a wave or other periodic function. The phase factor is a unit complex number , i.e. a complex number of absolute value 1 . It is commonly used in quantum mechanics and optics . It is a special case of phasors , which may have arbitrary magnitude (i.e. not necessarily on the unit circle in the complex plane ).
Multiplying the equation of a plane wave A e^(i( k · r − ωt)) by a phase factor e^(iθ) shifts the phase of the wave by θ : e^(iθ) A e^(i( k · r − ωt)) = A e^(i( k · r − ωt + θ)) .
In quantum mechanics, a phase factor is a complex coefficient e^(iθ) that multiplies a ket | ψ ⟩ or bra ⟨ ϕ | . It does not, in itself, have any physical meaning, since the introduction of a phase factor does not change the expectation values of a Hermitian operator . That is, the values of ⟨ ϕ | A | ϕ ⟩ and ⟨ ϕ | e^(−iθ) A e^(iθ) | ϕ ⟩ are the same. [ 1 ] However, differences in phase factors between two interacting quantum states can sometimes be measurable (such as in the Berry phase ) and this can have important consequences.
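A quick numerical check of this invariance, using a random state and a random Hermitian operator (the dimension and seed are arbitrary illustrative choices), might look like this in Python:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-dimensional example: a random normalized state and a random Hermitian operator.
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M + M.conj().T) / 2                       # Hermitian operator

theta = 0.7
psi_shifted = np.exp(1j * theta) * psi         # same physical state, extra global phase factor

expectation = lambda state: np.vdot(state, A @ state).real
print(np.isclose(expectation(psi), expectation(psi_shifted)))   # True: unchanged
```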
In optics, the phase factor is an important quantity in the treatment of interference . | https://en.wikipedia.org/wiki/Phase_factor |
Phase inversion or phase separation is a chemical phenomenon exploited in the fabrication of artificial membranes . It is performed by removing the solvent from a liquid-polymer solution, leaving a porous, solid membrane.
Phase inversion is a common method to form filtration membranes, which are typically formed using artificial polymers . The method of phase inversion is highly dependent on the type of polymer used and the solvent used to dissolve the polymer.
Phase inversion can be carried out through one of four typical methods: [ 1 ]
The rate at which phase inversion occurs and the characteristics of the resulting membrane are dependent on several factors, including: [ 2 ]
Phase inversion membranes are typically characterized by their mean pore diameter and pore diameter distribution. This can be measured using a number of established analytical techniques such as the analysis of gas adsorption-desorption isotherms, porosimetry, or more niche approaches such as Evapoporometry . A Scanning electron microscope (SEM) can be used to characterize membranes with larger pore sizes, such as microfiltration and ultrafiltration membranes, while Transmission electron microscopy (TEM) can be used for all membrane types, including small pore membranes such as nanofiltration and reverse osmosis , though such microscopy techniques tend to analyze only a small sample area that may not be representative of the sample as a whole. | https://en.wikipedia.org/wiki/Phase_inversion_(chemistry)
In quantum computing , phase kickback refers to the fact that controlled operations have effects on their controls, in addition to on their targets, and that these effects correspond to phasing operations. [ 1 ] [ 2 ] [ 3 ]
When a controlled operation, such as a Controlled NOT (CNOT) gate , is applied to two qubits, the phase of the second (target) qubit is conditioned on the state of the first (control) qubit.
Because the phase of the second qubit is being “kicked back” to the first qubit, this phenomenon was coined “phase kickback” in 1997 by Richard Cleve , Artur Ekert , Chiara Macchiavello, and Michele Mosca through a paper that solved the Deutsch–Jozsa problem. [ 4 ]
For example, when a controlled NOT gate 's target qubit is in the state (1/√2)( |0⟩ − |1⟩ ), the effect of the controlled NOT gate is equivalent to the effect of applying a Pauli Z gate to the controlled NOT's control qubit.
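This example can be verified directly with a few lines of linear algebra. The sketch below assumes the standard matrix representations of H, Z and CNOT and a control ⊗ target qubit ordering:

```python
import numpy as np

# Kickback check: with the target in |-> = (|0> - |1>)/sqrt(2), a CNOT acts on the control
# exactly like a Pauli Z. Qubit ordering is control (x) target.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.diag([1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

plus = H @ np.array([1.0, 0.0])        # |+> control, so the effect of a Z is visible
minus = H @ np.array([0.0, 1.0])       # |-> target

lhs = CNOT @ np.kron(plus, minus)      # CNOT applied to |+>|->
rhs = np.kron(Z @ plus, minus)         # Z applied to the control, target untouched
print(np.allclose(lhs, rhs))           # True
```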
Phase kickback is one of the key effects that distinguishes quantum computation from classical computation.
Phase kickback also provides a justification for why qubits would be disrupted by measurements: a measurement is an operation that flips a classical bit (the result) with the flip being controlled by a quantum bit (the qubit being measured).
This creates kickback from the bit to the qubit, randomizing the qubit's phase.
Phase kickback occurs because the basis transformations that distinguish targets from controls are available as operations.
For example, surrounding a controlled NOT gate with four Hadamard gates produces a compound operation whose effect is equivalent to a controlled NOT gate, but with the roles of its control qubit and target qubit exchanged.
More abstractly, phase kickback occurs because the eigendecomposition of controlled operations makes no significant distinction between controls and targets.
For example, the controlled Z gate is a symmetric operation that has the same effect if its target and control are switched, and a controlled NOT gate can be decomposed into a Hadamard gate on its target, then a controlled Z gate, then a second Hadamard gate on its target. [ 5 ] This decomposition reveals that, at the core of the apparently-asymmetric controlled-NOT gate, there is a symmetric effect that does not distinguish between control and target.
Phase kickback can be used to measure an operator P whose eigenvalues are +1 and -1.
This is a common technique for measuring operators in quantum error correcting codes , such as the surface code . [ 6 ] The procedure is as follows.
Initialize a control qubit c in the |0⟩ state, then apply a Hadamard gate H to c , then apply P controlled by c , then apply another Hadamard gate H to c , then measure c in the computational basis.
Phase kickback results in the +1 eigenstates of P having no effect on c , while -1 eigenstates apply a Pauli Z to c .
The surrounding Hadamard gates turn the Pauli Z (a phase flip) into a Pauli X (a bit flip).
So c gets flipped from |0⟩ to |1⟩ when the state is in the -1 eigenstate of P .
The measurement operation reveals whether c is |0⟩ or |1⟩ , which reveals whether the state was in the +1 or -1 eigenspace of P .
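A minimal numerical sketch of this procedure is given below. It uses a single data qubit and takes P to be the Pauli Z operator as a stand-in for a general ±1-eigenvalue operator; the helper names are hypothetical.

```python
import numpy as np

# Sketch of the ancilla-based measurement described above: H on the control, controlled-P,
# H again, then read the control. Here P is a single-qubit Pauli Z (eigenvalues +1 and -1)
# standing in for a more general stabilizer; the helper names are hypothetical.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
P = np.diag([1.0, -1.0])

def controlled(U):
    """Control on the first qubit, U on the second."""
    return np.block([[I2, np.zeros((2, 2))],
                     [np.zeros((2, 2)), U]])

def prob_control_one(data_state):
    """Probability of reading the control as |1>, with the control initialized to |0>."""
    state = np.kron(np.array([1.0, 0.0]), data_state)          # |0>_c (x) |psi>
    circuit = np.kron(H, I2) @ controlled(P) @ np.kron(H, I2)
    state = circuit @ state
    return float(np.sum(np.abs(state[2:]) ** 2))               # amplitudes with control = 1

print(prob_control_one(np.array([1.0, 0.0])))   # +1 eigenstate of Z -> 0.0 (control stays |0>)
print(prob_control_one(np.array([0.0, 1.0])))   # -1 eigenstate of Z -> 1.0 (control flips to |1>)
```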
Phase kickback requires the following conditions to be met: [ 7 ]
|1⟩|ψ⟩ → (Controlled- U ) → |1⟩ U |ψ⟩ = e^(iϕ) |1⟩|ψ⟩ ≅ |1⟩|ψ⟩ . This shows that if the control qubit is not in a superposition, phase kickback will not occur: the acquired phase e^(iϕ) is only an unobservable global phase, so the output of the controlled operation is equivalent to the input.
Quantum Fourier transform is the quantum analogue of the classical discrete Fourier transform (DFT), as it takes quantum states represented as superpositions of basis states, and utilizes phase kickback to transform them into frequency-domain representation.
The phase kickback phenomenon occurs in the QFT algorithm when a controlled phase rotation gate is applied to a qubit in superposition – the Fourier transform will take the output of the phase kickback state back to the initial control qubit. [ 8 ]
Quantum phase estimation (QPE) is a quantum algorithm that exploits phase kickback to efficiently estimate the eigenvalues of unitary operators. It is a crucial part of many quantum algorithms, including Shor's algorithm for integer factorization .
To estimate the phase angle corresponding to the eigenvalue of a unitary operator U on an eigenstate |ψ⟩ , the algorithm must:
Phase kickback allows a quantum setup to estimate eigenvalues exponentially quicker than classical algorithms. This is essential for quantum algorithms such as Shor’s algorithm , where quantum phase estimation is used to factor large integers efficiently. [ 8 ]
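The following Python sketch illustrates the kickback mechanism behind phase estimation with a single control qubit: a controlled power of U imprints a multiple of the eigenphase on the control, whose X and Y expectation values then reveal the phase. The operator, eigenstate and phase below are assumed illustrative values, and this is a simplified numerical illustration rather than the full multi-register QPE circuit.

```python
import numpy as np

# A stripped-down, single-control-qubit illustration of phase estimation via kickback.
# A controlled-U^k imprints the eigenphase k*phi of U onto the control qubit, and reading
# out the control in the X and Y bases recovers phi. This is a numerical sketch, not the
# full multi-qubit QPE circuit used in Shor's algorithm.
phi_true = 0.3125                          # eigenphase of U, as a fraction of a full turn
U = np.diag([1.0, np.exp(2j * np.pi * phi_true)])
psi = np.array([0.0, 1.0])                 # eigenstate of U with eigenvalue exp(2*pi*i*phi)

def control_coherence(k):
    """Off-diagonal term of the control qubit after H on the control and controlled-U^k."""
    branch0 = psi / np.sqrt(2)                                    # control |0>: |psi> untouched
    branch1 = (np.linalg.matrix_power(U, k) @ psi) / np.sqrt(2)   # control |1>: kicked-back phase
    return np.vdot(branch0, branch1)        # equals exp(2*pi*i*k*phi) / 2 for an eigenstate

coh = control_coherence(k=1)
x_expect, y_expect = 2 * coh.real, 2 * coh.imag                   # <X> and <Y> of the control
phi_est = (np.arctan2(y_expect, x_expect) / (2 * np.pi)) % 1.0
print(phi_est)                              # ~0.3125, matching the eigenphase of U
```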
The Deutsch–Jozsa algorithm , and by association the Bernstein-Vazirani algorithm , determines whether an inputted function is constant (same value for all inputs) or balanced (half 0s and half 1s) using as few queries to the black box function as possible. Phase kickback is critical; when the oracle is applied to the superposition state, it introduces phase kickback depending on whether the function is constant or balanced. If the function is constant, the oracle flips the sign of the amplitude of all input states, leading to constructive interference among all states. This allows a high probability of measuring the all-zero state. The flipping of signs of the input states requires phase kickback. On the other hand, when the function is balanced, the oracle does not introduce any phase kickback and the interference pattern among the states already cancels out as it is. This leads to an equal probability of measuring any of the input states. [ 9 ]
Grover’s algorithm is a quantum algorithm for unstructured search that finds the unique input to a black box function given its output. Phase kickback occurs in Grover's algorithm during the application of the oracle, which is typically a controlled operator that flips the sign of the target qubit's state. When this controlled operation is applied to the target qubit, the sign is flipped, and the phase of the target qubit is transferred backwards to the control qubit. In other words, the oracle can highlight certain target states by modifying the phase of the corresponding control qubit. [ 10 ] This has impactful applications as a problem-solving tool, demonstration of performance advantages in quantum computing, and quantum cryptography .
As seen, phase kickback is a crucial step in many famous, powerful quantum algorithms and applications. Its ability to transfer states backwards also enables other concepts such as quantum error correction and quantum teleportation . | https://en.wikipedia.org/wiki/Phase_kickback |
In mathematics , a phase portrait is a geometric representation of the orbits of a dynamical system in the phase plane . Each set of initial conditions is represented by a different point or curve .
Phase portraits are an invaluable tool in studying dynamical systems. They consist of a plot of typical trajectories in the phase space . This reveals information such as whether an attractor , a repellor or limit cycle is present for the chosen parameter value. The concept of topological equivalence is important in classifying the behaviour of systems by specifying when two different phase portraits represent the same qualitative dynamic behavior. An attractor is a stable point which is also called a "sink". The repeller is considered as an unstable point, which is also known as a "source".
A phase portrait graph of a dynamical system depicts the system's trajectories (with arrows) and stable steady states (with dots) and unstable steady states (with circles) in a phase space. The axes are of state variables .
A phase portrait represents the directional behavior of a system of ordinary differential equations (ODEs). The phase portrait can indicate the stability of the system. [ 1 ]
The phase portrait behavior of a system of ODEs can be determined by the eigenvalues or by the trace and determinant (trace = λ_1 + λ_2 , determinant = λ_1 × λ_2 ) of the system. [ 1 ]
Determinant < 0: the eigenvalues are real with opposite signs, so the fixed point is a saddle.
0 < determinant < (trace² / 4): the eigenvalues are real with the same sign, so the fixed point is a node (stable for negative trace, unstable for positive trace).
(trace² / 4) < determinant: the eigenvalues are complex, so the fixed point is a spiral (stable for negative trace, unstable for positive trace, and a center for zero trace). | https://en.wikipedia.org/wiki/Phase_portrait
In physics, the phase problem is the problem of loss of information concerning the phase that can occur when making a physical measurement. The name comes from the field of X-ray crystallography , where the phase problem has to be solved for the determination of a structure from diffraction data. [ 1 ] The phase problem is also met in the fields of imaging and signal processing . [ 2 ] Various approaches of phase retrieval have been developed over the years.
Light detectors, such as photographic plates or CCDs , measure only the intensity of the light that hits them. This measurement is incomplete (even when neglecting other degrees of freedom such as polarization and angle of incidence ) because a light wave has not only an amplitude (related to the intensity), but also a phase (related to the direction) and a polarization, which are systematically lost in a measurement. [ 2 ] In diffraction or microscopy experiments, the phase part of the wave often contains valuable information on the studied specimen. The phase problem constitutes a fundamental limitation ultimately related to the nature of measurement in quantum mechanics .
In X-ray crystallography , the diffraction data when properly assembled gives the amplitude of the 3D Fourier transform of the molecule's electron density in the unit cell . [ 1 ] If the phases are known, the electron density can be simply obtained by Fourier synthesis . This Fourier transform relation also holds for two-dimensional far-field diffraction patterns (also called Fraunhofer diffraction ) giving rise to a similar type of phase problem.
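The loss of phase information can be illustrated with a one-dimensional toy model: keeping the measured Fourier amplitudes but discarding the phases destroys the density. The Gaussian "atoms" and grid below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy 1D "electron density": a few Gaussian "atoms" in a periodic unit cell.
x = np.linspace(0.0, 1.0, 256, endpoint=False)
density = sum(np.exp(-((x - c) ** 2) / (2 * 0.01 ** 2)) for c in (0.2, 0.45, 0.7))

F = np.fft.fft(density)          # "structure factors": amplitudes AND phases
amplitudes = np.abs(F)           # what a diffraction experiment actually measures

# Correct phases reproduce the density; random phases with identical amplitudes do not.
recovered = np.fft.ifft(amplitudes * np.exp(1j * np.angle(F))).real
scrambled = np.fft.ifft(amplitudes * np.exp(2j * np.pi * rng.random(F.shape))).real

print(np.allclose(recovered, density))                 # True
print(np.max(np.abs(scrambled - density)) > 0.1)       # True: far from the original density
```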
There are several ways to retrieve the lost phases. The phase problem must be solved in x-ray crystallography , [ 1 ] neutron crystallography , [ 3 ] and electron crystallography . [ 4 ] [ 5 ] [ 6 ]
Not all of the methods of phase retrieval work with every wavelength (x-ray, neutron, and electron) used in crystallography.
If the crystal diffracts to high resolution (<1.2 Å), the initial phases can be estimated using direct methods. [ 1 ] Direct methods can be used in x-ray crystallography , [ 1 ] neutron crystallography , [ 7 ] and electron crystallography . [ 4 ] [ 5 ]
A number of initial phases are tested and selected by this method. The other is the Patterson method, which directly determines the positions of heavy atoms. The Patterson function gives a large value in a position which corresponds to interatomic vectors. This method can be applied only when the crystal contains heavy atoms or when a significant fraction of the structure is already known.
For molecules whose crystals provide reflections in the sub-Ångström range, it is possible to determine phases by brute force methods, testing a series of phase values until spherical structures are observed in the resultant electron density map. This works because atoms have a characteristic structure when viewed in the sub-Ångström range. The technique is limited by processing power and data quality. For practical purposes, it is limited to "small molecules" and peptides because they consistently provide high-quality diffraction with very few reflections.
Phases can also be inferred by using a process called molecular replacement , where a similar molecule's already-known phases are grafted onto the intensities of the molecule at hand, which are observationally determined. These phases can be obtained experimentally from a homologous molecule or if the phases are known for the same molecule but in a different crystal, by simulating the molecule's packing in the crystal and obtaining theoretical phases. Generally, these techniques are less desirable since they can severely bias the solution of the structure. They are useful, however, for ligand binding studies, or between molecules with small differences and relatively rigid structures (for example derivatizing a small molecule).
Multiple isomorphous replacement (MIR) , where heavy atoms are inserted into the structure (usually by synthesizing proteins with analogs or by soaking)
A powerful solution is the multi-wavelength anomalous dispersion (MAD) method. In this technique, atoms' inner electrons absorb X-rays of particular wavelengths, and reemit the X-rays after a delay, inducing a phase shift in all of the reflections, known as the anomalous dispersion effect . Analysis of this phase shift (which may be different for individual reflections) results in a solution for the phases. Since X-ray fluorescence techniques (like this one) require excitation at very specific wavelengths, it is necessary to use synchrotron radiation when using the MAD method.
In many cases, an initial set of phases is determined, and the electron density map for the diffraction pattern is calculated. Then the map is used to determine portions of the structure, which portions are used to simulate a new set of phases. This new set of phases is known as a refinement . These phases are reapplied to the original amplitudes, and an improved electron density map is derived, from which the structure is corrected. This process is repeated until an error term (usually R_free ) has stabilized to a satisfactory value. Because of the phenomenon of phase bias , it is possible for an incorrect initial assignment to propagate through successive refinements, so satisfactory conditions for a structure assignment are still a matter of debate. Indeed, some spectacular incorrect assignments have been reported, including a protein where the entire sequence was threaded backwards. [ 8 ] | https://en.wikipedia.org/wiki/Phase_problem
In quantum computing , and more specifically in superconducting quantum computing , the phase qubit is a superconducting device based on the superconductor–insulator–superconductor (SIS) Josephson junction , [ 1 ] designed to operate as a quantum bit , or qubit. [ 2 ]
The phase qubit is closely related, yet distinct from, the flux qubit and the charge qubit , which are also quantum bits implemented by superconducting devices. The major distinction among the three is the ratio of Josephson energy vs charging energy [ 3 ] (the necessary energy for one Cooper pair to charge the total capacitance in the circuit):
A phase qubit is a current-biased Josephson junction, operated in the zero voltage state with a non-zero current bias.
A Josephson junction is a tunnel junction , [ 6 ] made of two pieces of superconducting metal separated by a very thin insulating barrier, about 1 nm in thickness. The barrier is thin enough that electrons, or in the superconducting state, Cooper-paired electrons, can tunnel through the barrier at an appreciable rate. Each of the superconductors that make up the Josephson junction is described by a macroscopic wavefunction , as described by the Ginzburg–Landau theory for superconductors. [ 7 ] The difference in the complex phases of the two superconducting wavefunctions is the most important dynamic variable for the Josephson junction, and is called the phase difference δ , or simply "phase".
The Josephson equation [ 1 ] relates the superconducting current (usually called the supercurrent) I through the tunnel junction to the phase difference δ : I = I_0 sin δ .
Here I_0 is the critical current of the tunnel junction, determined by the area and thickness of the tunnel barrier in the junction, and by the properties of the superconductors on either side of the barrier. For a junction with identical superconductors on either side of the barrier, the critical current is related to the superconducting gap Δ and the normal state resistance R_n of the tunnel junction by the Ambegaokar–Baratoff formula, [ 6 ] which at temperatures well below the superconducting transition gives I_0 = πΔ/(2 e R_n ).
The Gor'kov phase evolution equation [ 1 ] gives the rate of change of the phase (the "velocity" of the phase) as a linear function of the voltage V as dδ/d t = 2 e V / ℏ .
This equation is a generalization of the Schrödinger equation for the phase of the BCS wavefunction . The generalization was carried out by Gor'kov in 1958. [ 8 ]
The alternative and direct current Josephson relations control the behavior of the Josephson junction itself. The geometry of the Josephson junction (two plates of superconducting metal separated by a thin tunnel barrier) is that of a parallel plate capacitor, so in addition to the Josephson element the device includes a parallel capacitance C . The external circuit is usually simply modeled as a resistor R in parallel with the Josephson element. The set of three parallel circuit elements is biased by an external current source I , thus the current-biased Josephson junction. [ 9 ] Solving the circuit equations yields a single dynamic equation for the phase, C ( ℏ /2 e )² d²δ/d t ² + (1/ R )( ℏ /2 e )² dδ/d t = ( ℏ /2 e )( I − I_0 sin δ ).
The terms on the left side are identical to those of a particle with coordinate (location) δ , with mass proportional to the capacitance C , and with friction inversely proportional to the resistance R . The particle moves in a conservative force field given by the term on the right, which corresponds to the particle interacting with a potential energy U (δ) given by U (δ) = −( ℏ /2 e )( I δ + I_0 cos δ ).
This is the "washboard potential", [ 9 ] so-called because it has an overall linear dependence − I δ {\displaystyle -I\,\delta } , modulated by the washboard modulation − I 0 cos δ {\displaystyle -I_{0}\,\cos \delta } .
The zero voltage state describes one of the two distinct dynamic behaviors displayed by the phase particle, and corresponds to when the particle is trapped in one of the local minima in the washboard potential. These minima exist for bias currents | I | < I_0 , i.e. for currents below the critical current. With the phase particle trapped in a minimum, it has zero average velocity and therefore zero average voltage. A Josephson junction will allow currents up to I_0 to pass through without any voltage; this corresponds to the superconducting branch of the Josephson junction's current–voltage characteristic .
The voltage state is the other dynamic behavior displayed by a Josephson junction, and corresponds to the phase particle free-running down the slope of the potential, with a non-zero average velocity and therefore non-zero voltage. This behavior always occurs for currents I above the critical current, i.e. for | I | > I_0 , and for large resistances R also occurs for currents somewhat below the critical current. This state corresponds to the voltage branch of the Josephson junction current–voltage characteristic. For large resistance junctions the zero-voltage and voltage branches overlap for some range of currents below the critical current, so the device behavior is hysteretic .
Another way to understand the behavior of a Josephson junction in the zero-voltage state is to consider the SIS tunnel junction as a nonlinear inductor. [ 10 ] When the phase is trapped in one of the minima, the phase value is limited to a small range about the phase value at the potential minimum, which we will call δ_0 . The current through the junction is related to this phase value by I = I_0 sin δ_0 .
If we consider small variations Δδ in the phase about the minimum δ_0 (small enough to maintain the junction in the zero voltage state), then the current will vary by Δ I = I_0 cos δ_0 · Δδ .
These variations in the phase give rise to a voltage through the ac Josephson relation , V = ( ℏ /2 e ) d(Δδ)/d t = [ ℏ /(2 e I_0 cos δ_0 )] d(Δ I )/d t .
This last relation is the defining equation for an inductor with inductance L = ℏ /(2 e I_0 cos δ_0 ).
This inductance depends on the value of phase δ_0 at the minimum in the washboard potential, so the inductance value can be controlled by changing the bias current I . For zero bias current, the inductance reaches its minimum value, L_0 = ℏ /(2 e I_0 ).
As the bias current increases, the inductance increases. When the bias current is very close to (but less than) the critical current I_0 , the value of the phase δ_0 is very close to π/2 , as seen by the dc Josephson relation , above. This means that the inductance value L becomes very large, diverging as I reaches the critical current I_0 .
The nonlinear inductor represents the response of the Josephson junction to changes in bias current. When the parallel capacitance from the device geometry is included, in parallel with the inductor, this forms a nonlinear LC resonator, with resonance frequency ω_p = 1/√( L C ),
which is known as the plasma frequency of the junction. This corresponds to the oscillation frequency of the phase particle in the bottom of one of the minima of the washboard potential.
For bias currents very near the critical current, the phase value in the washboard minimum is δ_0 = arcsin( I / I_0 ) ≈ π/2,
and the plasma frequency is then ω_p = ω_p0 [1 − ( I / I_0 )²]^(1/4) , where ω_p0 = √(2 e I_0 /( ℏ C )) is the zero-bias plasma frequency,
clearly showing that the plasma frequency approaches zero as the bias current approaches the critical current.
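As a rough numerical illustration of this tunability, the sketch below evaluates the nonlinear inductance and the plasma frequency as functions of bias current, using the expressions given above; the junction's critical current and capacitance are assumed, illustrative values rather than parameters from the article.

```python
import numpy as np

hbar = 1.054571817e-34      # J*s
e = 1.602176634e-19         # C

# Illustrative junction parameters (assumptions, not values from the article).
I0 = 2.0e-6                 # A, critical current
C = 1.0e-12                 # F, junction capacitance

def josephson_inductance(I):
    """Nonlinear inductance L = hbar / (2 e I0 cos(delta0)) in the zero-voltage state."""
    delta0 = np.arcsin(I / I0)              # phase at the washboard minimum
    return hbar / (2 * e * I0 * np.cos(delta0))

def plasma_frequency(I):
    """Small-oscillation (plasma) angular frequency of the junction LC resonator."""
    return 1.0 / np.sqrt(josephson_inductance(I) * C)

for frac in (0.0, 0.5, 0.9, 0.99):
    w = plasma_frequency(frac * I0)
    print(f"I/I0 = {frac:.2f}: f_p = {w / (2 * np.pi) / 1e9:.2f} GHz")
# The plasma frequency falls toward zero as the bias current approaches the critical current.
```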
The simple tunability of the current-biased Josephson junction in its zero voltage state is one of the key advantages the phase qubit has over some other qubit implementations, although it also limits the performance of this device, as fluctuations in current generate fluctuations in the plasma frequency, which causes dephasing of the quantum states.
The phase qubit is operated in the zero-voltage state, with | I | < I_0 . At very low temperatures, much less than 1 K (achievable using a cryogenic system known as a dilution refrigerator ), with a sufficiently high resistance and small capacitance Josephson junction, quantum energy levels [ 11 ] become detectable in the local minima of the washboard potential. These were first detected using microwave spectroscopy , where a weak microwave signal is added to the current I biasing the junction. Transitions from the zero voltage state to the voltage state were measured by monitoring the voltage across the junction. Clear resonances at certain frequencies were observed, which corresponded well with the quantum transition energies obtained by solving the Schrödinger equation [ 12 ] for the local minimum in the washboard potential. Classically only a single resonance is expected, centered at the plasma frequency ω_p . Quantum mechanically, the potential minimum in the washboard potential can accommodate several quantized energy levels, with the lowest (ground to first excited state) transition at an energy E_01 ≈ ℏ ω_p , but the higher energy transitions (first to second excited state, second to third excited state) shifted somewhat below this due to the non-harmonic nature of the trapping potential minimum, whose resonance frequency falls as the energy increases in the minimum. Observing multiple, discrete levels in this fashion is extremely strong evidence that the superconducting device is behaving quantum mechanically, rather than classically.
The phase qubit uses the lowest two energy levels in the local minimum; the ground state | g ⟩ is the "zero state" of the qubit, and the first excited state | e ⟩ is the "one state". The slope in the washboard potential is set by the bias current I , and changes in this current change the washboard potential, changing the shape of the local minimum (equivalently, changing the value of the nonlinear inductance, as discussed above). This changes the energy difference between the ground and first excited states. Hence the phase qubit has a tunable energy splitting. | https://en.wikipedia.org/wiki/Phase_qubit
Phase reduction is a method used to reduce a multi-dimensional dynamical equation describing a nonlinear limit cycle oscillator into a one-dimensional phase equation. [ 1 ] [ 2 ] Many phenomena in our world such as chemical reactions, electric circuits, mechanical vibrations, cardiac cells, and spiking neurons are examples of rhythmic phenomena, and can be considered as nonlinear limit cycle oscillators. [ 2 ]
The theory of the phase reduction method was first introduced in the 1950s, when the existence of periodic solutions to nonlinear oscillators under perturbation was discussed by Malkin. [ 3 ] In the 1960s, Winfree illustrated the importance of the notion of phase and formulated the phase model for a population of nonlinear oscillators in his studies on biological synchronization. [ 4 ] Since then, many researchers have discovered different rhythmic phenomena related to phase reduction theory.
Consider the dynamical system of the form d x /d t = f ( x ),
where x ∈ ℝ^N is the oscillator state variable and f ( x ) is the baseline vector field. Let φ : ℝ^N × ℝ → ℝ^N be the flow induced by the system, that is, φ( x_0 , t ) is the solution of the system for the initial condition x (0) = x_0 . This system of differential equations can describe a conductance-based neuron model with x = ( V , n ) ∈ ℝ^N , where V represents the voltage difference across the membrane and n represents the ( N − 1)-dimensional vector of gating variables . [ 5 ] When a neuron is perturbed by a stimulus current, the dynamics of the perturbed system will no longer be the same as the dynamics of the baseline neural oscillator.
The target here is to reduce the system by defining a phase for each point in some neighbourhood of the limit cycle. Sufficiently small perturbations (e.g. external forcing or a stimulus applied to the system) may cause a large deviation of the phase, but the amplitude is only slightly perturbed because of the attraction of the limit cycle. [ 6 ] Hence we need to extend the definition of the phase to points in the neighborhood of the cycle by introducing the definition of asymptotic phase (or latent phase). [ 7 ] This helps us to assign a phase to each point in the basin of attraction of a periodic orbit. The set of points in the basin of attraction of γ that share the same asymptotic phase Φ( x ) is called an isochron (e.g. see Figure 1 ); isochrons were first introduced by Winfree. [ 8 ] Isochrons can be shown to exist for such a stable hyperbolic limit cycle γ . [ 9 ] So for every point x in some neighbourhood of the cycle, the evolution of the phase φ = Φ( x ) can be given by the relation dφ/d t = ω , where ω = 2π/ T_0 is the natural frequency of the oscillation. [ 5 ] [ 10 ] By the chain rule we then obtain the equation governing the evolution of the phase of the neuron model, the phase model: dφ/d t = ∇Φ( x ) · f ( x ) = ω ,
where ∇Φ( x ) is the gradient of the phase function Φ( x ) with respect to the neuron's state vector x ; for the derivation of this result, see. [ 2 ] [ 5 ] [ 10 ] This means that the N -dimensional system describing the oscillating neuron dynamics is then reduced to a simple one-dimensional phase equation. One can notice that it is impossible to retrieve the full information of the oscillator x from the phase Φ , because Φ( x ) is not a one-to-one mapping. [ 2 ]
Consider now a weakly perturbed system of the form dx/dt = f(x) + εg(t),
where f(x) is the baseline vector field and εg(t) is a weak periodic external forcing (or stimulus effect) of period T, which can in general differ from T₀, and frequency Ω = 2π/T; the forcing might also depend on the oscillator state x. Assuming that the baseline neural oscillator (that is, when ε = 0) has an exponentially stable limit cycle γ with period T₀ (for example, see Figure 1 ) that is normally hyperbolic , [ 11 ] it can be shown that γ persists under small perturbations. [ 12 ] This implies that, for a small perturbation, the perturbed system remains close to the limit cycle. Hence we assume that such a limit cycle always exists for each neuron.
The evolution of the perturbed system in terms of the isochrons is [ 13 ] dφ/dt = ∇Φ(x) · (f(x) + εg(t)) = ω + ε∇Φ(x) · g(t),
where ∇Φ(x) is the gradient of the phase Φ(x) with respect to the neuron's state vector x, and g(t) is the stimulus effect driving the firing of the neuron as a function of time t. This phase equation is a partial differential equation (PDE).
For a sufficiently small ε > 0, a reduced phase model evaluated on the limit cycle γ of the unperturbed system can be given, up to first order in ε, by dφ/dt = ω + εZ(φ) · g(t),
where the function Z(φ) := ∇Φ(γ(t)) measures the normalized phase shift due to a small perturbation delivered at any point x on the limit cycle γ, and is called the phase sensitivity function or infinitesimal phase response curve . [ 8 ] [ 13 ]
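The phase sensitivity function itself can be approximated by a finite-difference version of the same direct method: deliver a small kick to one state component at a known phase on the cycle and compare the asymptotic phase of the kicked state with that of an unkicked twin integrated for the same time. The sketch below is an illustrative (and deliberately simple, hence slow) approximation of the infinitesimal PRC; the kick size, the perturbed component, and the helper names are assumptions of this sketch, building on the helpers above.

```python
# Hedged sketch: finite-difference estimate of the phase sensitivity function
# Z(phi) for the example oscillator, using gamma_table / asymptotic_phase above.
import numpy as np

def infinitesimal_prc(f, T0, phis, cycle_pts, n_eval=40, eps=0.1, component=0):
    """Approximate Z(phi) at n_eval phases by kicking one state component by eps.
    Slow but simple; intended for illustration only."""
    eval_idx = np.linspace(0, len(phis), n_eval, endpoint=False).astype(int)
    phase_out, Z = phis[eval_idx], np.zeros(n_eval)
    for j, i in enumerate(eval_idx):
        x_kicked = np.array(cycle_pts[i], dtype=float)
        x_kicked[component] += eps                       # small impulsive kick
        phi_kicked = asymptotic_phase(f, x_kicked, T0, phis, cycle_pts)
        phi_ref = asymptotic_phase(f, cycle_pts[i], T0, phis, cycle_pts)
        dphi = (phi_kicked - phi_ref + np.pi) % (2.0 * np.pi) - np.pi
        Z[j] = dphi / eps                                # normalized phase shift
    return phase_out, Z

# Usage, assuming the earlier sketches:
# phis, cycle_pts = gamma_table(f, x_on_cycle, T0)
# eval_phis, Z = infinitesimal_prc(f, T0, phis, cycle_pts)
```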
In order to analyze the reduced phase equation corresponding to the perturbed nonlinear system, we would need to solve a PDE, which is not trivial. So we simplify it into an autonomous phase equation, which can be analyzed more easily. [ 13 ] Assuming that the difference between the frequencies ω and Ω is sufficiently small, so that ω − Ω = εδ with δ of order O(1), we can introduce a new phase function ψ(t) = φ(t) − Ωt. [ 13 ]
By the method of averaging , [ 14 ] assuming that ψ(t) does not vary appreciably within one forcing period T, we obtain the approximated phase equation dψ/dt = Δ_ε + εΓ(ψ),
where Δ_ε = εδ, and Γ(ψ) is a 2π-periodic function representing the effect of the periodic external forcing on the oscillator phase, [ 13 ] defined by the average Γ(ψ) = (1/T) ∫₀^T Z(ψ + Ωt) · g(t) dt.
The graph of the function Γ(ψ) illustrates the dynamics of the approximated phase model; for more illustrations, see [ 2 ] .
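As a hedged numerical sketch of this averaging step, Γ(ψ) can be computed by averaging Z(ψ + Ωt)·g(t) over one forcing period. The sinusoidal choices of Z and g below are assumptions for illustration, not taken from any cited model.

```python
# Hedged sketch: average Z(psi + Omega*t)*g(t) over one forcing period to get
# Gamma(psi). The sinusoidal Z and g are illustrative assumptions.
import numpy as np

def gamma_function(Z, g, T, psi_grid, n_t=2000):
    """Gamma(psi) = (1/T) * integral over [0, T] of Z(psi + Omega*t) * g(t) dt."""
    Omega = 2.0 * np.pi / T
    t = np.linspace(0.0, T, n_t, endpoint=False)
    dt = T / n_t
    return np.array([np.sum(Z(psi + Omega * t) * g(t)) * dt / T for psi in psi_grid])

T = 1.0
Omega = 2.0 * np.pi / T
psi_grid = np.linspace(0.0, 2.0 * np.pi, 200)
Gamma = gamma_function(np.sin, lambda t: np.cos(Omega * t), T, psi_grid)
# For these choices Gamma(psi) = 0.5*sin(psi), so the fixed points of the
# averaged equation d(psi)/dt = Delta_eps + eps*Gamma(psi), i.e. the phase
# locking points, are easy to read off.
```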
For a sufficiently small perturbation of a certain nonlinear oscillator or a network of coupled oscillators, we can compute the corresponding phase sensitivity function or infinitesimal PRC Z(φ). | https://en.wikipedia.org/wiki/Phase_reduction |
In signal processing , phase response is the relationship between the phase of a sinusoidal input and the phase of the output signal of any device that accepts an input and produces an output signal, such as an amplifier or a filter . [ 1 ]
Amplifiers, filters, and other devices are often characterized by their amplitude and/or phase response. The amplitude response is the ratio of output amplitude to input amplitude, usually as a function of frequency. Similarly, the phase response is the phase of the output with the input taken as the reference; the input is defined as zero phase. A phase response is not limited to lying between 0° and 360°, as phase can accumulate without bound.
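As a concrete (assumed) example of amplitude and phase response, the sketch below computes both for a digital Butterworth low-pass filter; the sampling rate, filter order, and cutoff are arbitrary choices for illustration.

```python
# Minimal sketch (assumed example): amplitude and phase response of a digital
# Butterworth low-pass filter; sampling rate, order, and cutoff are arbitrary.
import numpy as np
from scipy.signal import butter, freqz

fs = 48_000.0                                   # sampling rate in Hz
b, a = butter(4, 1_000.0, btype="low", fs=fs)   # 4th-order low-pass at 1 kHz

w, h = freqz(b, a, worN=2048, fs=fs)            # complex frequency response
amplitude_db = 20.0 * np.log10(np.abs(h))       # amplitude response in dB
phase_deg = np.degrees(np.unwrap(np.angle(h)))  # phase response, unwrapped

# The unwrapped phase keeps accumulating with frequency, which is why a phase
# response need not stay between 0° and 360°.
print(f"phase at 1 kHz ≈ {np.interp(1_000.0, w, phase_deg):.1f} degrees")
```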
| https://en.wikipedia.org/wiki/Phase_response |
A phase response curve ( PRC ) illustrates the transient change ( phase response ) in the cycle period of an oscillation induced by a perturbation as a function of the phase at which it is received. PRCs are used in various fields; examples of biological oscillations are the heartbeat, circadian rhythms , and the regular, repetitive firing observed in some neurons in the absence of noise. [ 1 ] [ better source needed ]
In humans and animals, there is a regulatory system that governs the phase relationship of an organism's internal circadian clock to a regular periodicity in the external environment (usually governed by the solar day). In most organisms, a stable phase relationship is desired, though in some cases the desired phase will vary by season, especially among mammals with seasonal mating habits.
In circadian rhythm research, a PRC illustrates the relationship between a chronobiotic 's time of administration (relative to the internal circadian clock) and the magnitude of the treatment's effect on circadian phase. Specifically, a PRC is a graph showing, by convention, time of the subject's endogenous day along the x -axis and the amount of the phase shift (in hours) along the y -axis. Each curve has one peak and one trough in each 24-hour cycle. Relative circadian time is plotted against phase-shift magnitude. The treatment is usually narrowly specified as a set intensity and colour and duration of light exposure to the retina and skin, or a set dose and formulation of melatonin.
These curves are often consulted in the therapeutic setting. Normally, the body's various physiological rhythms will be synchronized within an individual organism (human or animal), usually with respect to a master biological clock. Of particular importance is the sleep–wake cycle. Various sleep disorders and external stresses (such as jet lag ) can interfere with this. Humans with non-24-hour sleep–wake disorder often experience an inability to maintain a consistent internal clock. Extreme chronotypes usually maintain a consistent clock, but find that their natural clock does not align with the expectations of their social environment. PRCs provide a starting point for therapeutic intervention. The two common treatments used to shift the timing of sleep are light therapy , directed at the eyes, and administration of the hormone melatonin , usually taken orally. Either or both can be used daily. The phase adjustment is generally cumulative with consecutive daily administrations, and, at least partially, additive with concurrent administrations of distinct treatments. If the underlying disturbance is stable in nature, ongoing daily intervention is usually required. For jet lag, the intervention serves mainly to accelerate natural alignment, and ceases once the desired alignment is achieved.
Note that phase response curves from the experimental setting are usually aggregates over the test population, and there can be mild or significant variation within that population; individuals with sleep disorders often respond atypically. The formulation of the chronobiotic may also be specific to the experimental setting and not generally available in clinical practice (e.g. for melatonin, one sustained-release formulation might differ in its release rate as compared to another). Also, while the magnitude of the effect is dose-dependent, [ 2 ] not all PRC graphs cover a range of doses. The discussions below are restricted to the PRCs for light and melatonin in humans.
Starting about two hours before an individual's regular bedtime, exposure of the eyes to light will delay the circadian phase, causing later wake-up time and later sleep onset. The delaying effect gets stronger as evening progresses; it is also dependent on the wavelength and illuminance ("brightness") of the light. The effect is small if indoor lighting is dim (< 3 lux).
About five hours after usual bedtime, coinciding with the body temperature trough (the lowest point of the core body temperature during sleep), the PRC peaks and the effect changes abruptly from phase delay to phase advance. Immediately after this peak, light exposure has its greatest phase-advancing effect, causing earlier wake-up and sleep onset. Again, illuminance greatly affects results; indoor light may be less than 500 lux, while light therapy uses up to 10,000 lux. The effect diminishes until about two hours after spontaneous wake-up time, when it reaches approximately zero.
During the period between two hours after usual wake-up time and two hours before usual bedtime, light exposure has little or no effect on circadian phase (slight effects generally cancelling each other out).
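As a rough, purely qualitative illustration of the shape just described, the toy function below maps the time of light exposure relative to habitual bedtime to an approximate phase shift. The breakpoints and magnitudes are assumptions of this sketch, not clinical values.

```python
# Toy sketch only: a piecewise, qualitative stand-in for the human light PRC
# described above. Breakpoints and magnitudes are assumptions, not clinical data.
def light_prc_hours(t_rel_bedtime: float) -> float:
    """Approximate phase shift in hours (negative = delay) for light exposure
    delivered t_rel_bedtime hours after habitual bedtime."""
    if -2.0 <= t_rel_bedtime < 5.0:
        # Evening through ~5 h after bedtime: delay that strengthens with time.
        return -2.0 * (t_rel_bedtime + 2.0) / 7.0
    if 5.0 <= t_rel_bedtime < 10.0:
        # Just after the temperature trough: strong advance fading to zero by
        # about two hours after spontaneous wake-up (~10 h after bedtime here).
        return 2.5 * (10.0 - t_rel_bedtime) / 5.0
    # Mid-day "dead zone": little or no effect.
    return 0.0
```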
Another image of the PRC for light, together with its explanatory text, is available elsewhere (Figure 1). [ 3 ]
Light therapy, typically with a light box producing 10,000 lux at a prescribed distance, can be used in the evening to delay or in the morning to advance an individual's sleep timing. Because losing sleep to obtain bright light exposure is considered undesirable by most people, and because it is very difficult to estimate exactly when the greatest effect (the PRC peak) will occur in an individual, the treatment is usually applied daily just prior to bedtime (to achieve phase delay), or just after spontaneous awakening (to achieve phase advance).
In addition to its use in the adjustment of circadian rhythms, light therapy is used as treatment for several affective disorders including seasonal affective disorder (SAD). [ 5 ]
In 2002 Brown University researchers led by David Berson announced the discovery of special cells in the human eye , ipRGCs ( intrinsically photosensitive retinal ganglion cells ), [ 6 ] which, many researchers now believe, control the light entrainment effect of the phase response curve. In the human eye, the ipRGCs have the greatest response to light in the 460–480 nm (blue) range. In one experiment, 400 lux of blue light produced the same effects as 10,000 lux of white light from a fluorescent source. [ 7 ] A theory of spectral opponency, in which the addition of other spectral colors renders blue light less effective for circadian phototransduction, was supported by research reported in 2005. [ 8 ]
The phase response curve for melatonin is roughly twelve hours out of phase with the phase response curve for light. [ 9 ] At spontaneous wake-up time, exogenous (externally administered) melatonin has a slight phase-delaying effect. The amount of phase-delay increases until about eight hours after wake-up time, when the effect swings abruptly from strong phase delay to strong phase advance. The phase-advance effect diminishes as the day goes on until it reaches zero about bedtime. From usual bedtime until wake-up time, exogenous melatonin has no effect on circadian phase. [ 10 ] [ 11 ]
The human body produces its own ( endogenous ) melatonin starting about two hours before bedtime, provided the lighting is dim. This is known as dim-light melatonin onset , DLMO. [ 12 ] This stimulates the phase-advance portion of the PRC and helps keep the body on a regular sleep-wake schedule. It also helps prepare the body for sleep.
Administration of melatonin at any time may have a mild hypnotic (sleep-inducing) effect. The expected effect on sleep phase timing, if any, is predicted by the PRC.
In a 2006 study, Victoria L. Revell et al. showed that a combination of morning bright light and afternoon melatonin, both timed to produce a phase advance according to the respective PRCs, produced a larger phase-advance shift than bright light alone, for a total of up to 2 1⁄2 hours over the course of 3 days. All times are approximate and vary from one individual to another. In particular, there is no convenient way to accurately determine the times of the peaks and zero-crossings of these curves in an individual. Administration of light or melatonin close to the time at which the effect is expected to change direction abruptly may, if the changeover time is not accurately known, produce an effect opposite to that desired. [ 13 ]
In a 2019 study, Shawn D. Youngstedt et al. showed that in humans "Exercise elicits circadian phase‐shifting effects, but additional information is needed. [...] Significant phase–response curves were established for aMT6 (melatonin derivative) onset and acrophase with large phase delays from 7:00 pm to 10:00 pm and large phase advances at both 7:00 am and from 1:00 pm to 4:00 pm". [ 14 ]
The first published usage of the term "phase response curve" was in 1960 by Patricia DeCoursey . The "daily" activity rhythms of her flying squirrels , kept in constant darkness, responded to pulses of light exposure. The response varied according to the time of day – that is, the animals' subjective "day" – when light was administered. When DeCoursey plotted all her data relating the quantity and direction (advance or delay) of phase-shift on a single curve, she created the PRC. It has since been a standard tool in the study of biological rhythms. [ 15 ]
Phase response curve analysis can be used to understand the intrinsic properties and oscillatory behavior of regular-spiking neurons . [ 16 ] The neuronal PRCs can be classified as being purely positive (PRC type I) or as having negative parts (PRC type II). Importantly, the PRC type exhibited by a neuron is indicative of its input–output function (excitability) as well as its synchronization behavior: networks of PRC type II neurons can synchronize their activity via mutual excitatory connections, but those of PRC type I cannot. [ 17 ]
Experimental estimation of the PRC in living, regular-spiking neurons involves measuring the changes in the inter-spike interval in response to a small perturbation, such as a transient pulse of current; a toy numerical illustration is sketched below. Notably, the PRC of a neuron is not fixed but may change when the firing frequency [ 18 ] or the neuromodulatory state of the neuron [ 19 ] is changed. | https://en.wikipedia.org/wiki/Phase_response_curve |
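The measurement procedure just described can be mimicked in a toy simulation. In the hedged sketch below, a leaky integrate-and-fire neuron (an illustrative stand-in, not a model from the article or its references) fires regularly under constant drive; a brief current pulse is delivered at different phases of one inter-spike interval, and the normalized change in that interval's length serves as a crude PRC estimate.

```python
# Toy sketch (assumed model): estimate a PRC from inter-spike intervals of a
# leaky integrate-and-fire neuron perturbed by a brief current pulse.
import numpy as np

def lif_spike_times(T_sim, dt=1e-4, I=1.5, tau=0.02, v_th=1.0, v_reset=0.0,
                    pulse_time=None, pulse_amp=0.0):
    """Simulate tau*dv/dt = -v + I (plus a one-step current pulse); return spike times."""
    v, spikes = 0.0, []
    for k in range(int(T_sim / dt)):
        t = k * dt
        drive = I
        if pulse_time is not None and 0.0 <= t - pulse_time < dt:
            drive += pulse_amp                      # brief depolarizing pulse
        v += dt * (-v + drive) / tau
        if v >= v_th:
            spikes.append(t)
            v = v_reset
    return np.array(spikes)

# Unperturbed period T0 from the free-running inter-spike interval.
free = lif_spike_times(2.0)
T0 = float(np.mean(np.diff(free)))

# Probe one cycle: deliver the pulse at several phases of the interval that
# starts at a reference spike, and measure how much that interval shortens.
ref = free[10]
phases = np.linspace(0.05, 0.95, 10)
prc = []
for ph in phases:
    spikes = lif_spike_times(2.0, pulse_time=ref + ph * T0, pulse_amp=20.0)
    cycle = spikes[spikes >= ref - 1e-9][:2]        # bounding spikes of the probed cycle
    prc.append((T0 - (cycle[1] - cycle[0])) / T0)   # > 0 means phase advance
print(list(zip(np.round(phases, 2), np.round(prc, 3))))
```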