In mathematics, a polydivisible number (or magic number) is a number in a given number base with digits abcde... that has the following properties: [ 1 ] its first digit a is not 0; the number formed by its first two digits ab is a multiple of 2; the number formed by its first three digits abc is a multiple of 3; and, in general, the number formed by its first i digits is a multiple of i.
Let n be a positive integer, and let k = ⌊log_b n⌋ + 1 be the number of digits in n written in base b. The number n is a polydivisible number if, for all 1 ≤ i ≤ k, the number formed by the first i digits of n, namely ⌊n / b^(k−i)⌋, is divisible by i.
For example, 10801 is a seven-digit polydivisible number in base 4 (written 2220301 in base 4), as 2 is divisible by 1, 22₄ = 10 is divisible by 2, 222₄ = 42 is divisible by 3, 2220₄ = 168 is divisible by 4, 22203₄ = 675 is divisible by 5, 222030₄ = 2700 is divisible by 6, and 2220301₄ = 10801 is divisible by 7.
For any given base b, there are only a finite number of polydivisible numbers.
The following table lists maximum polydivisible numbers for some bases b , where A−Z represent digit values 10 to 35.
Let n be the number of digits. The function F_b(n) gives the number of polydivisible numbers that have n digits in base b, and Σ(b) is the total number of polydivisible numbers in base b.
If k is a polydivisible number in base b with n − 1 digits, then it can be extended to create a polydivisible number with n digits if there is a number between bk and b(k + 1) − 1 that is divisible by n. If n is less than or equal to b, then it is always possible to extend an (n − 1)-digit polydivisible number to an n-digit polydivisible number in this way, and indeed there may be more than one possible extension. If n is greater than b, it is not always possible to extend a polydivisible number in this way, and as n becomes larger, the chances of being able to extend a given polydivisible number become smaller. On average, each polydivisible number with n − 1 digits can be extended to a polydivisible number with n digits in b/n different ways. This leads to the following estimate for F_b(n): F_b(n) ≈ (b − 1) b^(n−1) / n!.
Summing over all values of n, this estimate suggests that the total number of polydivisible numbers will be approximately Σ(b) ≈ (b − 1)(e^b − 1) / b.
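For example, evaluating this estimate in base 10 (purely as an order-of-magnitude check) gives

```latex
\Sigma(10) \;\approx\; \frac{9}{10}\left(e^{10}-1\right) \;\approx\; 1.98\times 10^{4},
```

which is of the right order of magnitude: the actual count of polydivisible numbers in base 10 is 20,456.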
All numbers are represented in base b {\displaystyle b} , using A−Z to represent digit values 10 to 35.
The polydivisible numbers in base 5 are
The smallest base 5 polydivisible numbers with n digits are
The largest base 5 polydivisible numbers with n digits are
The number of base 5 polydivisible numbers with n digits are
The polydivisible numbers in base 10 are
The smallest base 10 polydivisible numbers with n digits are
The largest base 10 polydivisible numbers with n digits are
The number of base 10 polydivisible numbers with n digits are
The example below searches for polydivisible numbers in Python .
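A minimal sketch of such a search is shown below (the function and variable names are illustrative, not taken from any particular implementation). It builds polydivisible numbers breadth-first, starting from the one-digit numbers 1 to b − 1 and keeping only the single-digit extensions that are divisible by the new length:

```python
def polydivisible_numbers(base=10):
    """Return lists of polydivisible numbers in the given base, grouped by digit count."""
    # Every one-digit number 1 .. base-1 is divisible by 1, so all of them qualify.
    current = list(range(1, base))
    levels = [current]
    length = 2
    while current:
        extended = []
        for number in current:
            # Append each possible digit; keep extensions divisible by the new digit count.
            for digit in range(base):
                candidate = number * base + digit
                if candidate % length == 0:
                    extended.append(candidate)
        if extended:
            levels.append(extended)
        current = extended
        length += 1
    return levels

if __name__ == "__main__":
    levels = polydivisible_numbers(10)
    print("total:", sum(len(level) for level in levels))
    print("longest has", len(levels), "digits")
```

Running it for base 10 enumerates every polydivisible number, from which the counts and the smallest and largest values discussed above can be read off.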
Polydivisible numbers represent a generalization of the following well-known [ 2 ] problem in recreational mathematics :
The solution to the problem is a nine-digit polydivisible number with the additional condition that it contains the digits 1 to 9 exactly once each. There are 2,492 nine-digit polydivisible numbers, but the only one that satisfies the additional condition is 381654729.
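This is easy to check by brute force. The short sketch below (names are illustrative) tests all 9! orderings of the digits 1 to 9 for the prefix-divisibility property:

```python
from itertools import permutations

def is_polydivisible(digits):
    # Each prefix of length i must be divisible by i.
    value = 0
    for i, d in enumerate(digits, start=1):
        value = value * 10 + d
        if value % i:
            return False
    return True

solutions = [d for d in permutations(range(1, 10)) if is_polydivisible(d)]
print(solutions)  # a single permutation, corresponding to 381654729
```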
Other problems involving polydivisible numbers include: | https://en.wikipedia.org/wiki/Polydivisible_number |
Polydnaviriformidae ( /pɒˈlɪdnəvɪrəˌfɔːmɪdɛ/ ; PDV) [ 1 ] is a family of insect viriforms; members are known as polydnaviruses. There are two genera in the family: Bracoviriform and Ichnoviriform. Polydnaviruses form a symbiotic relationship with parasitoid wasps . Ichnoviriforms (IV) occur in Ichneumonid wasps and Bracoviriforms (BV) in Braconid wasps . The larvae of wasps in both of those groups are themselves parasitic on Lepidoptera (moths and butterflies), and the polydnaviruses are important in circumventing the immune response of their parasitized hosts. [ 2 ] [ 3 ] Little or no sequence homology exists between BV and IV, suggesting that the two genera have been evolving independently for a long time.
Bracoviriform
Ichnoviriform
Viruses in Polydnaviridae are enveloped , with prolate ellipsoid and cylindrical geometries. Genomes are circular and segmented, composed of multiple segments of double-stranded, superhelical DNA packaged in capsid proteins . They are around 2.0–31 kb in length. [ 2 ]
Viral replication is nuclear. DNA-templated transcription is the method of transcription. The virus exits the host cell by nuclear pore export .
Parasitoid wasps serve as hosts for the virus, and Lepidoptera serve as hosts for these wasps. The female wasp injects one or more eggs into its host along with a quantity of virus. The virus and wasp are in a mutualistic symbiotic relationship: expression of viral genes prevents the wasp's host's immune system from killing the wasp's injected egg and causes other physiological alterations that ultimately cause the parasitized host to die. Transmission routes are parental. [ 2 ]
These viruses are part of a unique biological system consisting of an endoparasitic wasp ( parasitoid ), a host (usually lepidopteran ) larva, and the virus. The full genome of the virus is endogenous , dispersed among the genome of the wasp. The virus only replicates in a particular part of the ovary, called the calyx , of pupal and adult female wasps. The virus is injected along with the wasp egg into the body cavity of a lepidopteran host caterpillar and infects cells of the caterpillar. The infection does not lead to replication of new viruses; rather, it affects the caterpillar's immune system , as the virion carries virulence genes instead of viral replication genes. [ 5 ] It can be considered a type of viral vector . [ 6 ]
Without the virus infection, phagocytic hemocytes (blood cells) will encapsulate and kill the wasp egg and larvae, but the immune suppression caused by the virus allows survival of the wasp egg and larvae, leading to hatching and complete development of the immature wasp in the caterpillar. Additionally, genes expressed from the polydnavirus in the parasitised host alter host development and metabolism to be beneficial for the growth and survival of the parasitoid larva. [ 4 ] [ 7 ]
Both genera of PDV share certain characteristics:
The morphologies of the two genera are different when observed by electron microscopy. Ichnoviruses tend to be ovoid while bracoviruses are short rods. The virions of Bracoviruses are released by cell lysis ; the virions of Ichnoviruses are released by budding.
Nucleic acid analysis suggests a very long association of the viruses with the wasps (estimated 73.7 million years ± 10 million). [ 11 ]
Two proposals have been advanced for how the wasp/virus association developed. The first suggests that the virus is derived from wasp genes. Many parasitoids that do not use PDVs inject proteins that provide many of the same functions, that is, a suppression of the immune response to the parasite egg. In this model, the braconid and ichneumonid wasps packaged genes for these functions into the viruses—essentially creating a gene-transfer system that results in the caterpillar producing the immune-suppressing factors. In this scenario, the PDV structural proteins (capsids) were probably "borrowed" from existing viruses. [ 12 ]
The alternative proposal suggests that ancestral wasps developed a beneficial association with an existing virus that eventually led to the integration of the virus into the wasp's genome. Following integration, the genes responsible for virus replication and the capsids were (eventually) no longer included in the PDV genome. This hypothesis is supported by the distinct morphology differences between IV and BV, suggesting different ancestral viruses for the two genera. BV has likely evolved from a nudivirus , specifically a betanudivirus, [ 13 ] ~ 100 million years ago . [ 14 ] IV has a less clear origin: although earlier reports found a protein p44/p53 with structural similarities to ascovirus , the link was not confirmed in later studies. [ 15 ] As a result, the current opinion is that IV originated from a yet-unidentified novel viral family, [ 13 ] with a weak link to the NCLDVs . [ 16 ] In either case, both genera were formed through a single integration event in their respective wasp lineages. [ 5 ]
The two groups of viruses in the family are not in fact phylogenetically related, suggesting that this taxon may need revision. [ 17 ]
In the host, several mechanisms of the insect immune system can be triggered when the wasp lays its eggs and while the parasitic wasp larva is developing. When a large body (a wasp egg, or a small particle used experimentally) is introduced into an insect's body, the classic immune reaction is encapsulation by hemocytes. An encapsulated body can also be melanised in order to asphyxiate it, thanks to another type of hemocyte, which uses the phenoloxidase pathway to produce melanin. Small particles can be phagocytosed, and the macrophage cells can then also be melanised in a nodule. Finally, insects can also respond with the production of antiviral peptides . [ 18 ]
Polydnaviruses protect the hymenopteran larvae from the host immune system, acting at different levels.
Another strategy used by parasitoid Hymenoptera to protect their offspring is the production of virus-like particles (VLPs). VLPs are similar to viruses in their structure, but they do not carry any nucleic acid. For example, Venturia canescens ( Ichneumonidae ) and Leptopilina sp. ( Figitidae ) produce VLPs.
VLPs can be compared to polydnaviruses because they are secreted in the same way, and both act to protect the larvae against the host's immune system. V. canescens VLPs (VcVLP1, VcVLP2, VcNEP ...) are produced in the calyx cells before they move to the oviducts. Work in 2006 did not link them to any known virus and assumed a cellular origin. [ 12 ] More recent comparison links them to highly reshuffled, domesticated nudivirus sequences. This link produces the name Venturia canescens endogenous nudivirus (VcENV), an alphanudivirus closely related to NlENV found in Nilaparvata lugens . [ 25 ]
VLPs protect the hymenopteran larvae locally, whereas polydnaviruses can have a more global effect. VLPs allow the larvae to escape the immune system: either the larva is not recognised as harmful by its host, or the immune cells cannot interact with it, thanks to the VLPs. [ 12 ] Venturia canescens uses these instead of polydnaviruses because its ichnovirus has been deactivated. [ 25 ]
The wasp Leptopilina heterotoma secretes VLPs that are able to penetrate the lamellocytes , thanks to specific receptors, and then modify the shape and surface properties of the lamellocytes so that they become ineffective and the larvae are safe from encapsulation. [ 26 ] The Leptopilina VLPs, or mixed-strategy extracellular vesicles (MSEVs), contain some secretion systems. Their evolutionary picture is less clear, [ 27 ] but a recently reported virus, the L. boulardi filamentous virus (LbFV), shows significant similarities. [ 28 ]
MicroRNAs are small RNA fragments produced in host cells by a specific enzymatic mechanism. They promote the destruction of viral RNA: a microRNA attaches to a viral RNA sequence to which it is complementary, and the resulting complex is then recognised by an enzyme that destroys it. This phenomenon is known as PTGS ( post-transcriptional gene silencing ) [ 29 ] or RNAi ( RNA interference ).
It is interesting to consider the microRNA phenomenon in the polydnavirus context. Many hypotheses can be formulated: | https://en.wikipedia.org/wiki/Polydnaviriformidae
Polyelectrolytes are polymers whose repeating units bear an electrolyte group. Polycations and polyanions are polyelectrolytes. These groups dissociate in aqueous solutions (water), making the polymers charged . Polyelectrolytes thus have properties similar to both electrolytes ( salts ) and polymers (high-molecular-weight compounds), and they are sometimes called polysalts . Like salts, their solutions are electrically conductive. Like polymers, their solutions are often viscous . Charged molecular chains, commonly present in soft matter systems, play a fundamental role in determining structure, stability and the interactions of various molecular assemblies. Theoretical approaches [ 1 ] [ 2 ] to describing their statistical properties differ profoundly from those for their electrically neutral counterparts, while technological and industrial fields exploit their unique properties. Many biological molecules are polyelectrolytes; for instance, polypeptides , glycosaminoglycans , and DNA are polyelectrolytes. Both natural and synthetic polyelectrolytes are used in a variety of industries.
polyelectrolyte : Polymer composed of macromolecules in which a substantial portion of the constitutional units contains ionic or ionizable groups, or both. (See Gold Book entry for note.) [ 3 ]
Acids are classified as either weak or strong (and bases similarly may be either weak or strong ). Similarly, polyelectrolytes can be divided into "weak" and "strong" types. A "strong" polyelectrolyte dissociates completely in solution for most reasonable pH values. A "weak" polyelectrolyte, by contrast, has a dissociation constant (pKa or pKb) in the range of ~2 to ~10, meaning that it will be partially dissociated at intermediate pH. Thus, weak polyelectrolytes are not fully charged in the solution, and moreover, their fractional charge can be modified by changing the solution pH, counter-ion concentration, or ionic strength.
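As a rough illustration of this pH dependence (treating each acid group as an independent ideal weak acid, an assumption real polyelectrolytes violate because neighbouring charges shift the effective pKa), the ionized fraction α of a weak polyacid follows the Henderson–Hasselbalch relation

```latex
\alpha \;=\; \frac{1}{1 + 10^{\,\mathrm{p}K_{a} - \mathrm{pH}}}
```

so a polyacid with pKa ≈ 5 would be roughly 9% charged at pH 4, 50% at pH 5, and 91% at pH 6.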
The physical properties of polyelectrolyte solutions are usually strongly affected by this degree of ionization. Since the polyelectrolyte dissociation releases counter-ions, this necessarily affects the solution's ionic strength , and therefore the Debye length . This, in turn, affects other properties, such as electrical conductivity .
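The sketch below gives a feel for these magnitudes; it is a standalone illustration (constants are hard-coded, and the 1:1-electrolyte case is the simplest assumption) that evaluates the Debye length in water at 25 °C for a few ionic strengths:

```python
import math

def debye_length_nm(ionic_strength_molar, relative_permittivity=78.5, temperature_k=298.15):
    """Debye screening length (nm) for a 1:1 electrolyte of the given ionic strength (mol/L)."""
    eps0 = 8.854e-12   # vacuum permittivity, F/m
    k_b = 1.381e-23    # Boltzmann constant, J/K
    e = 1.602e-19      # elementary charge, C
    n_a = 6.022e23     # Avogadro constant, 1/mol
    ionic_strength = ionic_strength_molar * 1000.0  # mol/L -> mol/m^3
    lam = math.sqrt(relative_permittivity * eps0 * k_b * temperature_k
                    / (2 * n_a * e ** 2 * ionic_strength))
    return lam * 1e9   # m -> nm

for conc in (0.001, 0.01, 0.1, 1.0):
    print(f"{conc:6.3f} M : {debye_length_nm(conc):5.2f} nm")
# roughly 9.6, 3.0, 0.96 and 0.30 nm: added salt rapidly shortens the screening length
```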
When solutions of two oppositely charged polymers (that is, a solution of polycation and one of polyanion ) are mixed, a bulk complex ( precipitate ) is usually formed. This occurs because the oppositely-charged polymers attract one another and bind together.
The conformation of any polymer is affected by a number of factors, notably the polymer architecture and the solvent affinity. In the case of polyelectrolytes, charge also has an effect. Whereas an uncharged linear polymer chain is usually found in a random conformation in solution (closely approximating a self-avoiding three-dimensional random walk ), the charges on a linear polyelectrolyte chain will repel each other via double layer forces , which causes the chain to adopt a more expanded, rigid-rod-like conformation. The charges will be screened if the solution contains a great deal of added salt. Consequently, the polyelectrolyte chain will collapse to a more conventional conformation (essentially identical to a neutral chain in good solvent ).
Polymer conformation affects many bulk properties (such as viscosity , turbidity , etc.). Although the statistical conformation of polyelectrolytes can be captured using variants of conventional polymer theory, it is, in general, quite computationally intensive to properly model polyelectrolyte chains, owing to the long-range nature of the electrostatic interaction.
Techniques such as static light scattering can be used to study polyelectrolyte conformation and conformational changes.
ampholytic polymer : Polyelectrolyte composed of macromolecules containing both cationic and anionic groups, or corresponding ionizable groups. (See Gold Book entry for note.) [ 4 ]
Polyelectrolytes that bear both cationic and anionic repeat groups are called polyampholytes . The competition between the acid–base equilibria of these groups leads to additional complications in their physical behavior. These polymers usually dissolve only when sufficient added salt screens the interactions between oppositely charged segments. In the case of amphoteric macroporous hydrogels, concentrated salt solution does not dissolve the polyampholyte material, because the macromolecules are covalently cross-linked. Synthetic 3-D macroporous hydrogels show an excellent ability to adsorb heavy-metal ions over a wide pH range from extremely dilute aqueous solutions, and they can subsequently be used as adsorbents for the purification of salty water. [ 5 ] [ 6 ] All proteins are polyampholytes, as some amino acids tend to be acidic, while others are basic.
Polyelectrolytes have many applications, mostly related to modifying flow and stability properties of aqueous solutions and gels . For instance, they can be used to destabilize a colloidal suspension and to initiate flocculation (precipitation). They can also be used to impart a surface charge to neutral particles, enabling them to be dispersed in aqueous solution. They are thus often used as thickeners , emulsifiers , conditioners , clarifying agents , and even drag reducers. They are used in water treatment and for oil recovery . Many soaps , shampoos , and cosmetics incorporate polyelectrolytes. Furthermore, they are added to many foods and to concrete mixtures ( superplasticizer ). Some of the polyelectrolytes that appear on food labels are pectin , carrageenan , alginates , and carboxymethyl cellulose . All but the last are of natural origin. Finally, they are used in various materials, including cement .
Because some of them are water-soluble, they are also investigated for biochemical and medical applications. There is currently much research on using biocompatible polyelectrolytes for implant coatings, controlled drug release, and other applications. Recently, a biocompatible and biodegradable macroporous material composed of a polyelectrolyte complex was described; the material supported excellent proliferation of mammalian cells [ 7 ] and was used to make muscle-like soft actuators.
Polyelectrolytes have been used in the formation of new types of materials known as polyelectrolyte multilayers ( PEMs ). These thin films are constructed using a layer-by-layer ( LbL ) deposition technique. During LbL deposition, a suitable growth substrate (usually charged) is dipped back and forth between dilute baths of positively and negatively charged polyelectrolyte solutions. During each dip, a small amount of polyelectrolyte is adsorbed, and the surface charge is reversed, allowing the gradual and controlled build-up of electrostatically cross-linked films of polycation-polyanion layers. Scientists have demonstrated thickness control of such films down to the single-nanometer scale. LbL films can also be constructed by substituting charged species such as nanoparticles or clay platelets [ 8 ] in place of or in addition to one of the polyelectrolytes. LbL deposition has also been accomplished using hydrogen bonding instead of electrostatics . For more information on multilayer creation, please see polyelectrolyte adsorption .
An LbL formation of PEM (PSS-PAH (poly(allylamine) hydrochloride)) on a gold substrate can be seen in the Figure. The formation is measured using multi-parametric surface plasmon resonance to determine adsorption kinetics, layer thickness, and optical density. [ 9 ]
The main benefits of PEM coatings are the ability to conformably coat objects (that is, the technique is not limited to coating flat objects), the environmental benefits of using water-based processes, reasonable costs, and the utilization of the particular chemical properties of the film for further modification, such as the synthesis of metal or semiconductor nanoparticles, or porosity phase transitions to create anti-reflective coatings , optical shutters , and superhydrophobic coatings.
If polyelectrolyte chains are added to a system of charged macroions (e.g., an array of DNA molecules), an interesting phenomenon called polyelectrolyte bridging might occur. [ 10 ] The term bridging interactions is usually applied to the situation where a single polyelectrolyte chain can adsorb to two (or more) oppositely charged macroions (e.g. DNA molecules), thus establishing molecular bridges and, via its connectivity, mediating attractive interactions between them.
At small macroion separations, the chain is squeezed in between the macroions and electrostatic effects in the system are completely dominated by steric effects – the system is effectively discharged. As we increase the macroion separation, we simultaneously stretch the polyelectrolyte chain adsorbed to them. The stretching of the chain gives rise to the above-mentioned attractive interactions due to the chain's rubber elasticity .
Because of its connectivity, the behavior of the polyelectrolyte chain bears almost no resemblance to that of confined, unconnected ions.
In polymer terminology, a polyacid is a polyelectrolyte composed of macromolecules containing acid groups on a substantial fraction of the constitutional units . Most commonly, the acid groups are –COOH , –SO 3 H , or –PO 3 H 2 . [ 11 ] | https://en.wikipedia.org/wiki/Polyelectrolyte |
Adsorption of polyelectrolytes on solid substrates is a surface phenomenon where long-chained polymer molecules with charged groups (dubbed polyelectrolytes ) bind to a surface that is charged in the opposite polarity. On the molecular level, the polymers do not actually bond to the surface, but tend to "stick" to the surface via intermolecular forces and the charges created by the dissociation of various side groups of the polymer. Because the polymer molecules are so long, they have a large amount of surface area with which to contact the surface and thus do not desorb as small molecules are likely to do. This means that adsorbed layers of polyelectrolytes form a very durable coating. Due to this important characteristic of polyelectrolyte layers they are used extensively in industry as flocculants, for solubilization, as supersorbers, antistatic agents, as oil recovery aids, as gelling aids in nutrition, additives in concrete, or for blood compatibility enhancement to name a few. [ 1 ]
Models for the adsorption behavior of polyelectrolytes in solution to a solid surface are extremely situational. Vastly different behaviors are exhibited based on varying polyelectrolyte character and concentration, ionic strength of the solution, solid surface character, and pH, among several other factors. These complex models are specialized by application for certain parameters in order to create accurate models.
However, the general character of the process can be reasonably well modeled with a polyelectrolyte in solution, and an oppositely charged surface where no covalent interaction between the surface and chain occurs. This model for the adsorbed amount of polyelectrolyte at a charged surface is derived from DLVO theory , which models the interaction of charged particles in solution, and mean field theory , which simplifies systems for analysis. [ 2 ]
Using a modified Poisson–Boltzmann equation and a mean field equation, the concentration profile near a charged surface is solved numerically. The solution of these equations yields a simple relation for the adsorbed amount, Γ, based on the polyelectrolyte charge fraction, ρ, and the bulk salt concentration, c_b.
where y_s is the reduced surface potential, y_s = eψ_s / (k_B T), with ψ_s the electrostatic potential at the surface,
and λ_B is the Bjerrum length , λ_B = e² / (4π ε ε_0 k_B T), which is about 0.7 nm in water at room temperature.
Since charge plays a key role in polyelectrolyte adsorption, the initial rates of polyelectrolyte adsorption to charged surfaces are often rapid, limited only by the rate of mass transport (diffusion) to the surface. This high rate then quickly drops off as charge accumulates at the surface and attractive forces no longer draw more polyelectrolyte chains to the surface. This drop in adsorption rates can be countered by exploiting the tendency for charge overcompensation to occur. [ 3 ] In the case of a negatively charged solid surface, cationic polyelectrolyte chains are adsorbed to the oppositely charged surface. Their large size and high charge densities tend to overcompensate the original negative surface charge, resulting in a net positive charge due to the cationic polyelectrolytes. This solid surface, with its cationic polyelectrolyte film and consequent positive surface charge, can then be exposed to an anionic polyelectrolyte solution, where the process begins again, creating another film with an oppositely charged surface. This process can then be repeated to create several bilayers on the solid surface.
The effectiveness of polyelectrolyte adsorption is greatly affected by the contents of the solution and by the quality of the solvent in which the polyelectrolytes are dissolved. The primary mechanisms by which the solvent affects the adsorption characteristics of the surface-polymer interface are the dielectric effect of the solvent, the steric attraction or repulsion facilitated by the chemical nature of or species in the solvent, and its temperature. Repulsive steric forces are based on entropy and are caused by the reduced configuration entropy of the polymer chains. [ 1 ] It is difficult to model precisely the interaction that any particular polyelectrolyte solution will exhibit because the steric forces are dependent upon the combination of the chemical makeup of both the polymer and the solvent as well as any ionic species present in the solution.
The interactions between a polyelectrolyte and the solvent it is placed in have a large effect on the conformation of the polymer, both in solution and upon deposition onto the substrate. Because of their unique nature, polyelectrolytes dissolve in many solvents in which traditional polymers, such as polyethylene and polystyrene, are not soluble. An excellent example is water: although water is a highly polar solvent, it will still dissolve many polyelectrolytes. The conformation of a polyelectrolyte in solution is determined by a balance between the (usually unfavorable) interactions of the solvent with the polymer and the electrostatic repulsion between the individual repeat units of the polymer. It has been suggested that a polyelectrolyte chain will form an elongated cylindrical globule in order to optimize its energy. Some models go further and postulate that the most efficient configuration is a series of cylindrical globules linking much larger diameter spherical globules in a "necklace" configuration. [ 4 ]
In a good solvent, the electrostatic forces between the repeat units of the polymer and the solvent are favorable. While not entirely intuitive, this causes the polymer to assume a more tightly packed conformation. This is due to the screening the solvent molecules perform between the charged repeat units of the polyelectrolyte, decreasing the electrostatic repulsion the polymer chain experiences. Since the polymer backbone does not repel itself as strongly as it would in a poor solvent, the polymer chain acts more similarly to an uncharged polymer, assuming a compact conformation.
In a poor solvent, the solvent molecules interact poorly or unfavorably with the charged portions of the polyelectrolyte. The inability of the solvent to effectively screen the charges between repeat units causes the polymer to assume a looser conformation due to electrostatic repulsion of its repeat units. These interactions allow for the polymer to be more uniformly deposited onto the substrate.
When an ionic compound is dissolved in the solvent, the ions act to screen the charges on the polyelectrolyte chains. The ionic concentration of the solution will determine the layer formation characteristics of the polyelectrolyte as well as the conformation the polymer assumes in solution.
High salt concentrations cause conditions similar to the interactions experienced by a polymer in a favorable solvent. Polyelectrolytes, while charged, are still mainly non-polar with carbon backbones. While the charges on the polymer backbone exert an electrostatic force that drives the polymer into a more open and loose conformation, if the surrounding solution has a high concentration of salt, then the charge repulsion will be screened. Once this charge is screened the polyelectrolyte will act as any other non-polar polymer would in a high ionic strength solution and begin to minimize interactions with the solvent. This leads to a much more clumped and dense polymer deposited onto the surface.
In a low ionic strength solution, the charges present on the repeat units of the polymer are the dominant force controlling conformation. Since there is very little charge present to screen the repulsive interactions between the repeat units, the polymer assumes a very spread out, loose conformation. This conformation allows for more uniform layering on the substrate, which is helpful in preventing surface defects and non-uniform surface properties.
Polyelectrolytes can be applied to multiple types of surfaces due to the variety of ionic polymers available. They can be applied to solid surfaces in multi-layer form to fulfill a variety of design objectives, they can be used to surround solid particles to enhance the stability of a colloidal system, and they can even be assembled to form an independent structure that can be used to ferry drugs throughout the human body.
Polyelectrolyte multi-layers are a promising area of research in the polymer coating industry because they can be applied in a spray-on fashion at low cost in a water-based solvent. Although the polymers are held to the surface only by electrostatic forces, the multi-layer coatings adhere aggressively under liquid shear. The disadvantage to this coating technology is that the layers have the consistency of a gel and thus are weak against abrasion.
Polyelectrolytes have been used by scientists to coat stainless steel using the layer-by-layer application method in order to inhibit corrosion. The exact mechanism by which corrosion is restricted is unknown because polyelectrolyte multi-layers are water-logged and of a gel-like consistency. One theory is that the layers form a barrier impenetrable to small ions that facilitate corrosion of the steel. Additionally, the water molecules within the multi-layer film are held in a restricted state by the ionic groups of the polyelectrolytes. This decreases the chemical activity of the water at the surface of the steel. [ 10 ]
Many biomedical devices that come into contact with bodily fluids are susceptible to adverse foreign body response, or rejection and thus, failure of the device. The main mechanism of infection is the formation of a biofilm , which is a matrix of sessile bacteria consisting of around 15% bacterial cells by mass and 85% hydrophobic exopolysaccharide fibers. [ 11 ] One way to eliminate this risk is to apply localized treatment to the area in the vicinity of the implant. This can be done by applying a drug-impregnated polyelectrolyte multi-layer to the medical device prior to implantation. The goal with this technology is to create a combination of polyelectrolyte multi-layers where one multi-layer prevents the formation of a biofilm and another releases a small-molecule drug through diffusion. This would be more effective than the current technique of releasing a high dose of drugs into the body and counting on some of it to navigate to the afflicted area. The base layer for an effective coating for an implant is DMLPEI/PAA, or linear N, N-dodecyl,methyl-poly(ethyleneimine) / poly (acrylic acid). [ 7 ]
Another of the major applications of polyelectrolyte adsorption is the stabilization (or destabilization) of solid colloidal suspensions, or sols. Particles in solution tend to have attractive forces similar to van der Waals forces , modeled by Hamaker theory . These forces tend to cause colloidal particles to aggregate or flocculate . The Hamaker attractive effect is balanced by one or both of two repulsive effects of colloids in solution. The first is electrostatic stabilization, in which like charges of the particles repel one another. This effect is due to the zeta potential that exists due to a particle's surface charge in solution. [ 12 ] The second is steric stabilization, due to steric effects . Drawing particles together with adsorbed polymer chains greatly decreases the conformational entropy of the polymer chains at the surface, which is thermodynamically unfavorable, making flocculation and coagulation more difficult.
The adsorption of polyelectrolytes can be used to stabilize suspensions, such as in the case of dyes and paints. It can also be used to destabilize suspensions by adsorbing oppositely charged chains to the particle surface, neutralizing the zeta-potential and causing flocculation or coagulation of contaminants. This is used heavily in waste-water treatment to force suspensions of contaminants to flocculate, allowing them to be filtered. There are a variety of industrial flocculants that are either cationic or anionic in nature for targeting particular species.
An application of the additional stability a polyelectrolyte multi-layer will grant a colloid is the creation of a solid coating for a liquid core. While polyelectrolyte layers are generally adsorbed onto solid substrates, they may also be adsorbed to liquid substrates such as oil in water emulsions or colloids. This process has much potential, but is rife with difficulty. Since colloids are generally stabilized by surfactants , and often ionic surfactants, the adsorption of a multi-layer that is similarly charged to the surfactant causes problems due to the electrostatic repulsion between the polyelectrolyte and the surfactant. This can be circumvented by using non-ionic surfactants; however, the solubility of these non-ionic surfactants in water is greatly decreased compared to ionic surfactants.
These cores, once created, can be used for things such as drug delivery and microreactors . For drug delivery, the polyelectrolyte shell would break down after a certain amount of time, releasing the drug and helping it travel through the digestive tract, which is one of the biggest barriers for the effectiveness of drug delivery. | https://en.wikipedia.org/wiki/Polyelectrolyte_adsorption |
Polyester resins are synthetic resins formed by the reaction of dibasic organic acids and polyhydric alcohols . Maleic anhydride is a commonly used raw material with diacid functionality in unsaturated polyester resins. [ 1 ] Unsaturated polyester resins are used in sheet moulding compound , bulk moulding compound and the toner of laser printers . Wall panels fabricated from polyester resins reinforced with fiberglass —so-called fiberglass reinforced plastic (FRP)—are typically used in restaurants, kitchens, restrooms and other areas that require washable low-maintenance walls. They are also used extensively in cured-in-place pipe applications. Departments of Transportation in the USA also specify them for use as overlays on roads and bridges, an application in which they are known as polyester concrete overlays (PCO). These are usually based on isophthalic acid and cut with styrene at high levels—usually up to 50%. [ 2 ] Polyesters are also used in anchor bolt adhesives, though epoxy-based materials are also used. [ 3 ] Many companies have introduced, and continue to introduce, styrene-free systems, mainly because of odor issues but also over concerns that styrene is a potential carcinogen; styrene-free resins are also preferred for drinking-water applications. Most polyester resins are viscous, pale coloured liquids consisting of a solution of a polyester in a reactive diluent, which is usually styrene [ 4 ] but can also include vinyl toluene and various acrylates . [ 5 ] [ 6 ]
Unsaturated polyesters are condensation polymers formed by the reaction of polyols (also known as polyhydric alcohols ), organic compounds with multiple alcohol or hydroxy functional groups, with unsaturated and in some cases saturated dibasic acids. Typical polyols used are glycols including ethylene glycol , propylene glycol , and diethylene glycol ; typical acids used are phthalic acid , isophthalic acid , terephthalic acid , and maleic anhydride . Water, a condensation by-product of esterification reactions, is continuously removed by distillation, driving the reaction to completion via Le Chatelier's principle .

Unsaturated polyesters are generally sold to parts manufacturers as a solution of resin in reactive diluent; styrene is the most common diluent and the industry standard. The diluent allows control over the viscosity of the resin, and is also a participant in the curing reaction. The initially liquid resin is converted to a solid by cross-linking chains. This is done by creating free radicals at unsaturated bonds, which propagate in a chain reaction to other unsaturated bonds in adjacent molecules, linking them in the process. Unsaturation is generally in the form of maleate and fumarate species along the polymer chain. Maleate/fumarate generally does not self-polymerize via radical reactions, but readily reacts with styrene. Maleic anhydride and styrene are known to form alternating copolymers , and are in fact the textbook case of this phenomenon. This is one reason that styrene has been so hard to displace in the market as the industry standard reactive diluent for unsaturated polyester resins, despite increasing efforts to displace the material such as California's Proposition 65 .

The initial free radicals are induced by adding a compound that easily decomposes into free radicals. This compound is known as the catalyst [ 7 ] within the industry, but initiator is a more appropriate term. Transition metal salts are usually added as a catalyst for the chain-growth crosslinking reaction, and in the industry this type of additive is known as a promoter; the promoter is generally understood to lower the bond dissociation energy of the radical initiator. Cobalt salts are the most common type of promoter used. Common radical initiators used are organic peroxides such as benzoyl peroxide or methyl ethyl ketone peroxide . [ 8 ]
Polyester resins are thermosetting and, as with other resins, cure exothermically. The use of excessive initiator especially with a catalyst present can, therefore, cause charring or even ignition during the curing process. Excessive catalyst may also cause the product to fracture or form a rubbery material.
Unsaturated polyesters (UPR) are utilized in many different industrially relevant markets, but in general are used as the matrix material for various types of composites . Glass fiber-reinforced composites comprise the largest segment into which UPRs are used and can be processed via SMC , BMC , pultrusion , cured-in-place pipe (known as relining in Europe), filament winding , vacuum molding , spray-up molding , resin transfer molding (RTM) . Wind turbine blades also use them [ 9 ] as well as many more processes. UPRs are also used in non-reinforced applications with common examples being gel coats , shirt buttons, mine-bolts , bowling ball cores , polymer concrete , and engineered stone/cultured marble . [ 10 ]
In organic chemistry, an ester is formed as the condensation product of a carboxylic acid and an alcohol , with water formed as the condensate by-product. An ester can also be produced with an acyl halide and an alcohol, in which case the condensate by-product is a hydrogen halide .
Polyesters are a category of polymers in which ester functionality repeats within the main chain. Polyesters are a classic example of step-growth polymer , in which a difunctional (or higher order) acid or acyl halide is reacted with a difunctional (or higher order) alcohol. Polyesters are produced commercially both as saturated and unsaturated resins. The most common and highest volume produced polyester is Polyethylene terephthalate (PET) , which is an example of a saturated polyester and finds utilization in such applications as fibers for clothing and carpet, food and liquid containers (such as a water/soda bottles), as well as films. [ 11 ]
In unsaturated polyester (UPR) chemistry, unsaturation sites are present along the chain, usually by incorporation of maleic anhydride, but maleic acid and fumaric acid are also used. Maleic acid and fumaric acid are isomers where maleic is the cis-isomer and fumaric is the trans-isomer. The ester forms of these two molecules are maleate and fumarate, respectively. When curing a UPR, the fumarate form is known to react more rapidly with the styrene radical, so isomerization catalysts, such as N,N-dimethylacetoacetamide (DMAA), are often employed in the synthesis process which converts the maleates into fumarates; the isomerization can also be encouraged with increased reaction time and temperature.
Within the UPR industry, the classification of the resins is generally based on the primary saturated acid. For example, a resin containing primarily terephthalic acid is known as a Tere resin, a resin containing primarily phthalic anhydride is known as an Ortho resin, and a resin containing primarily isophthalic acid is known as an Iso resin. Dicyclopentadiene (DCPD) is also a common UPR raw material, and can be incorporated in two different ways. In one process, the DCPD is cracked in situ to form cyclopentadiene, which can then be reacted with maleate/fumarate groups along the polymer chain via a Diels–Alder reaction . This type of resin is known as a Nadic resin and is referred to as a poor man's Ortho, due to sharing many similar properties with an Ortho resin along with the extremely low cost of DCPD raw material. In another process, maleic anhydride is first opened with water or another alcohol to form maleic acid and is then reacted with DCPD, where an alcohol from the maleic acid reacts across one of the double bonds of the DCPD. This product is then used to end-cap the UPR resin, which yields a product with unsaturation on the end-groups. This type of resin is referred to as a DCPD resin.
Ortho resins comprise the most common type of UPR, and many are known as general purpose resins. FRP composites utilizing ortho resins are found in such application as boat hulls, bath ware, and bowling ball cores.
Iso resins are generally on the higher end of UPR products, both because of the relatively higher cost of the isophthalic acid as well as the superior properties they possess. Iso resins are the primary type of resin used in gel coat applications, which is similar to a paint, but is sprayed into a mold before the FRP is molded leaving a coating on the part. Gel coat resins must have lower color (almost clear) so as to not impart additional color to the part or so that they can be dyed properly. Gel coats must also have strong resistance to UV-weathering and water blistering.
Tere resins are often used when high modulus and strength are desired, but the low color properties of an Iso resin is not necessary. Terephthalic acid is generally lower cost than isophthalic acid, but both give similar strength characteristics to a UPR product. There exists a special sub-set of Tere resins, known as PET UPR resins, which are produced by catalytically cracking PET resin in the reactor to yield a mixture of terephthalic acid and ethylene glycol. Additional acids and glycols are then added along with maleic anhydride and a new polymer is produced. The end product is functionally the same as a Tere resin, but can often be lower cost to manufacture as scrap PET can be sourced cheaply. If a glycol-modified PET (PET-G) is used, exceptional properties can be imparted to the resin due to some of the exotic materials used in PET-G production. Tere and PET-UPR resins are used in many applications including cured-in-place pipe. [ 12 ]
Lichens have been shown to deteriorate polyester resins, as can be seen in archaeological sites in the Roman city of Baelo Claudia , Spain. [ 13 ]
Polyester resin offers the following advantages:
Polyester resin has the following disadvantages: | https://en.wikipedia.org/wiki/Polyester_resin |
Polyesteramides are a class of synthetic polymers connected by ester and amide bonds. [ 1 ]
Common polyesteramides can be separated into two different types. [ 2 ]
According to Rainer Höfer, nylon-type polyesteramides can be synthesized through the polymerisation of caprolactam or caprolactone , or through polycondensation of synthetic alcohols like 1,4-butanediol . Nylon-type polyesteramides have been investigated for their use in drug delivery systems and smart materials . [ 2 ]
Höfer described oil-based polyesteramides as "products of a fatty acid alkanolamide with a dicarboxylic acid (anhydride) such as terephthalic acid or phthalic acid anhydride". These polyesteramides are often manufactured from regional vegetable oils including neem oil . [ 2 ] | https://en.wikipedia.org/wiki/Polyesteramide
Polyether block amide or PEBA is a thermoplastic elastomer (TPE). It is known under the tradenames PEBAX® ( Arkema ) and VESTAMID® E ( Evonik Industries ). It is a block copolymer obtained by polycondensation of a carboxylic acid-terminated polyamide ( PA6 , PA11 , PA12 ) with an alcohol-terminated polyether ( polytetramethylene glycol (PTMG) or PEG ). The general chemical structure is an alternating sequence of polyamide and polyether blocks linked by ester bonds.
PEBA is a high-performance thermoplastic elastomer. It is used to replace common elastomers—thermoplastic polyurethanes, polyester elastomers, and silicones—because of these characteristics: lower density than other TPEs, superior mechanical and dynamic properties (flexibility, impact resistance , energy return , fatigue resistance ) that are retained at low temperature (below −40 °C ), and good resistance against a wide range of chemicals. It is sensitive to UV degradation , however.
PEBA is found in the sports equipment market: for damping system components and midsoles of high end shoes (running, track & field, football, baseball, basketball, trekking, etc.) where it is appreciated for its low density, damping properties, energy return and flexibility. PEBA is also appreciated by winter sports participants as it enables design of the lightest alpine and Nordic ski boots while providing some resistance to extreme environment (low temperatures, UV exposure, moisture). It is used in various other sports applications such as racquet grommets and golf balls.
PEBA is used in medical products such as catheters for its flexibility, its good mechanical properties at low and high temperatures, and its softness.
It is also widely used in the manufacture of electric and electronic goods such as cables and wire coatings, electronic device casings, components, etc.
PEBA can be used to make textiles as well as breathable film, fresh feeling fibres or non-woven fabrics.
Some hydrophilic grades of PEBA are also used for their antistatic and antidust properties. Since no chemical additives are required to achieve these properties, products can be recycled at end of life.
| https://en.wikipedia.org/wiki/Polyether_block_amide
Polyetherketones (PEK for short) are polymers whose molecular backbone contains alternating ketone (R−CO−R) and ether (R−O−R) functionalities. The most common are polyaryletherketones (PAEK), in which the functional groups are linked by aryl groups substituted in the 1,4-position. The resulting backbone is very rigid, giving the materials very high glass transition and melting temperatures compared with other plastics .
Polyetherketones can be obtained by condensation of 4,4′-difluorobenzophenone and potassium or sodium salt of hydroquinone :
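Written schematically for the potassium salt (condensed formulas only; the by-product is potassium fluoride, and this particular monomer pair gives polyetheretherketone, PEEK):

```latex
n\,\mathrm{F{-}C_{6}H_{4}{-}CO{-}C_{6}H_{4}{-}F}
\;+\; n\,\mathrm{KO{-}C_{6}H_{4}{-}OK}
\;\longrightarrow\;
\mathrm{[{-}O{-}C_{6}H_{4}{-}O{-}C_{6}H_{4}{-}CO{-}C_{6}H_{4}{-}]}_{n}
\;+\; 2n\,\mathrm{KF}
```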
The most common of these high-temperature resistant materials is polyetheretherketone (PEEK).
Other types of polyetherketone are:
Space and aviation : aircraft parts (fins, wing flaps, nose caps, seats). Replacements for metal parts, also in the military field.
Machinery and automotive industry : high-performance molded parts such as bearing cages, gears , sealing rings, valve spring retainers, impellers . Coatings when high resistance to temperatures above 200 °C is required. Coatings made of PEEK or PEK, for example, are suitable for applications up to 230 °C (450 °F). [ 1 ]
Electronics industry : wire and cable sheathing , flexible printed circuit boards , semiconductor production , offshore connectors.
Medical technology : endoscope handles, hip joint prostheses . [ 2 ] Because polyetherketones can be sterilized without damaging them, PEK is often used for surgical applications. [ 3 ]
PEK has a high temperature resistance . It is also characterized by high wear resistance. [ 4 ] In addition, polyetherketones are highly resistant to chemicals: They are resistant to non-oxidizing acids , grease , lubricants , water vapor , hot water, and concentrated alkalis . [ 5 ] | https://en.wikipedia.org/wiki/Polyetherketones |
Polyethylene glycol ( PEG ; /ˌpɒliˈɛθəlˌiːn ˈɡlaɪˌkɒl, -ˈɛθɪl-, -ˌkɔːl/ ) is a polyether compound derived from petroleum with many applications, from industrial manufacturing to medicine . PEG is also known as polyethylene oxide ( PEO ) or polyoxyethylene ( POE ), depending on its molecular weight . The structure of PEG is commonly expressed as H−(O−CH 2 −CH 2 ) n −OH. [ 3 ]
PEOs [ clarification needed ] have "very low single dose oral toxicity", on the order of tens of grams per kilogram of human body weight when ingested by mouth. [ 3 ] Because of its low toxicity, PEO is used in a variety of edible products. [ 41 ] It is also used as a lubricating coating for various surfaces in aqueous and non-aqueous applications. [ 42 ]
The precursor to PEGs is ethylene oxide , which is hazardous. [ 43 ] Ethylene glycol and its ethers are nephrotoxic ( poisonous to the kidneys ) if applied to damaged skin. [ 44 ]
The United States Food and Drug Administration (FDA or US FDA) regards PEG as biologically inert and safe. [ citation needed ]
A 2015 study appears to challenge the FDA's conclusion. In the study, a high-sensitivity ELISA assay detected anti-PEG antibodies in 72% of random blood plasma samples collected from 1990 to 1999. According to the study's authors, this result suggests that anti-PEG antibodies may be present, typically at low levels, in people who were never treated with PEGylated drugs. [ 45 ] [ 46 ] Due to its ubiquity in many products and the large percentage of the population with antibodies to PEG, which indicates an allergic reaction, hypersensitive reactions to PEG are an increasing health concern. [ 47 ] [ 48 ] Allergy to PEG is usually discovered after a person has been diagnosed with an allergy to several seemingly unrelated products—including processed foods, cosmetics, drugs, and other substances—that contain or were manufactured with PEG. [ 47 ]
PEG , PEO , and POE refer to an oligomer or polymer of ethylene oxide . The three names are chemically synonymous, but historically PEG is preferred in the biomedical field, whereas PEO is more prevalent in the field of polymer chemistry. Because different applications require different polymer chain lengths, PEG has tended to refer to oligomers and polymers with a molecular mass below 20,000 g/mol, PEO to polymers with a molecular mass above 20,000 g/mol, and POE to a polymer of any molecular mass. [ 49 ] PEGs are prepared by polymerization of ethylene oxide and are commercially available over a wide range of molecular weights from 300 g/mol to 10,000,000 g/mol. [ 50 ]
PEG and PEO are liquids or low-melting solids, depending on their molecular weights . While PEG and PEO with different molecular weights find use in different applications and have different physical properties (e.g. viscosity ) due to chain length effects, their chemical properties are nearly identical. Different forms of PEG are also available, depending on the initiator used for the polymerization process – the most common initiator is a monofunctional methyl ether PEG, or methoxypoly(ethylene glycol), abbreviated mPEG. Lower-molecular-weight PEGs are also available as purer oligomers, referred to as monodisperse, uniform, or discrete. Very high-purity PEG has recently been shown to be crystalline, allowing the determination of a crystal structure by x-ray crystallography . [ 50 ] Since purification and separation of pure oligomers is difficult, the price for this type of quality is often 10–1000 fold that of polydisperse PEG.
PEGs are also available with different geometries.
The numbers that are often included in the names of PEGs indicate their average molecular weights (e.g. a PEG with n = 9 would have an average molecular weight of approximately 400 daltons , and would be labeled PEG 400 ). Most PEGs include molecules with a distribution of molecular weights (i.e. they are polydisperse). The size distribution can be characterized statistically by its weight average molecular weight ( M w ) and its number average molecular weight ( M n ), the ratio of which is called the polydispersity index ( Đ M ). M w and M n can be measured by mass spectrometry .
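As a rough check of this numbering convention (assuming H−(O−CH₂−CH₂)ₙ−OH chains, a repeat-unit mass of about 44.05 g/mol and water-derived end groups of about 18.02 g/mol):

```latex
M_{n} \;\approx\; 18.02 + 44.05\,n
\qquad\Longrightarrow\qquad
n \;\approx\; \frac{400 - 18.02}{44.05} \;\approx\; 8.7
```

consistent with PEG 400 containing on average roughly nine repeat units.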
PEGylation is the act of covalently coupling a PEG structure to another larger molecule, for example, a therapeutic protein , which is then referred to as a PEGylated protein. PEGylated interferon alfa-2a or alfa-2b are commonly used injectable treatments for hepatitis C infection.
PEG is soluble in water , methanol , ethanol , acetonitrile , benzene , and dichloromethane , and is insoluble in diethyl ether and hexane . It is coupled to hydrophobic molecules to produce non-ionic surfactants . [ 51 ]
PEG and related polymers (PEG phospholipid constructs) are often sonicated when used in biomedical applications. However, as reported by Murali et al., PEG is very sensitive to sonolytic degradation and PEG degradation products can be toxic to mammalian cells. It is, thus, imperative to assess potential PEG degradation to ensure that the final material does not contain undocumented contaminants that can introduce artifacts into experimental results. [ 52 ]
PEGs and methoxypolyethylene glycols are manufactured by Dow Chemical under the trade name Carbowax for industrial use, and Carbowax Sentry for food and pharmaceutical use. They vary in consistency from liquid to solid, depending on the molecular weight, as indicated by a number following the name. They are used commercially in numerous applications, including foods, cosmetics , pharmaceutics, biomedicine , dispersing agents, solvents, ointments , suppository bases, as tablet excipients , and as laxatives . Some specific groups are lauromacrogols , nonoxynols , octoxynols , and poloxamers .
The production of polyethylene glycol was first reported in 1859. Both A. V. Lourenço and Charles Adolphe Wurtz independently isolated products that were polyethylene glycols. [ 53 ] Polyethylene glycol is produced by the interaction of ethylene oxide with water, ethylene glycol , or ethylene glycol oligomers. [ 54 ] The reaction is catalyzed by acidic or basic catalysts. Ethylene glycol and its oligomers are preferable as a starting material instead of water because they allow the creation of polymers with a low polydispersity (narrow molecular weight distribution). Polymer chain length depends on the ratio of reactants.
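Under the simplifying assumptions that every ethylene glycol molecule starts exactly one chain and that all of the ethylene oxide is consumed, the expected number-average chain length follows directly from the feed ratio:

```latex
\bar{n} \;\approx\; \frac{n_{\mathrm{ethylene\ oxide}}}{n_{\mathrm{ethylene\ glycol}}},
\qquad
M_{n} \;\approx\; 62.07 + 44.05\,\bar{n}
```

so, for example, a feed of 20 mol of ethylene oxide per mole of ethylene glycol would be expected to give a polymer of roughly 940 g/mol.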
Depending on the catalyst type, the mechanism of polymerization can be cationic or anionic. The anionic mechanism is preferable because it allows one to obtain PEG with a low polydispersity . Polymerization of ethylene oxide is an exothermic process. Overheating or contaminating ethylene oxide with catalysts, such as alkalis or metal oxides, can lead to runaway polymerization, which can end in an explosion after a few hours.
Polyethylene oxide, or high-molecular-weight polyethylene glycol, is synthesized by suspension polymerization . It is necessary to hold the growing polymer chain in solution in the course of the polymerization process. The reaction is catalyzed by magnesium-, aluminium-, or calcium-organoelement compounds. To prevent coagulation of polymer chains from solution, chelating additives, such as dimethylglyoxime , are used.
Alkaline catalysts, such as sodium hydroxide (NaOH), potassium hydroxide (KOH), or sodium carbonate (Na 2 CO 3 ), are used to prepare low-molecular-weight polyethylene glycol. [ 55 ] | https://en.wikipedia.org/wiki/Polyethylene_glycol |
Polyethylene terephthalate (or poly(ethylene terephthalate) , [ 5 ] PET , PETE , or the obsolete PETP or PET-P ), is the most common thermoplastic polymer resin of the polyester family and is used in fibres for clothing, containers for liquids and foods, and thermoforming for manufacturing, and in combination with glass fibre for engineering resins . [ 6 ]
In 2016, annual production of PET was 56 million tons. [ 7 ] The biggest application is in fibres (in excess of 60%), with bottle production accounting for about 30% of global demand. [ 8 ] In the context of textile applications, PET is referred to by its common name, polyester , whereas the acronym PET is generally used in relation to packaging. PET used in non-fiber applications (i.e. for packaging) makes up about 6% of world polymer production by mass. Accounting for the >60% fraction of polyethylene terephthalate produced for use as polyester fibers, PET is the fourth-most-produced polymer after polyethylene (PE), polypropylene (PP) and polyvinyl chloride (PVC). [ 8 ] [ 5 ]
PET consists of repeating (C 10 H 8 O 4 ) units. PET is commonly recycled , and has the digit 1 (♳) as its resin identification code (RIC). The National Association for PET Container Resources (NAPCOR) defines PET as: "Polyethylene terephthalate items referenced are derived from terephthalic acid (or dimethyl terephthalate ) and mono ethylene glycol , wherein the sum of terephthalic acid (or dimethyl terephthalate) and mono ethylene glycol reacted constitutes at least 90 percent of the mass of monomer reacted to form the polymer, and must exhibit a melting peak temperature between 225 °C and 255 °C, as identified during the second thermal scan in procedure 10.1 in ASTM D3418, when heating the sample at a rate of 10 °C/minute." [ 9 ]
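The quoted NAPCOR definition is essentially a two-part test, which can be expressed as a short, purely illustrative Python check (the function and example values are mine, not an official tool):

```python
# Illustrative encoding of the NAPCOR definition quoted above: terephthalic
# acid (or DMT) plus mono ethylene glycol must account for at least 90% of the
# reacted monomer mass, and the second-scan melting peak must fall between
# 225 °C and 255 °C.

def meets_napcor_pet_definition(tpa_or_dmt_mass_fraction: float,
                                meg_mass_fraction: float,
                                melting_peak_celsius: float) -> bool:
    core_monomer_fraction = tpa_or_dmt_mass_fraction + meg_mass_fraction
    return core_monomer_fraction >= 0.90 and 225.0 <= melting_peak_celsius <= 255.0

# Hypothetical bottle-grade resin with a small amount of comonomer:
print(meets_napcor_pet_definition(0.70, 0.27, 250.0))  # True
print(meets_napcor_pet_definition(0.55, 0.30, 250.0))  # False: too much comonomer
```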
Depending on its processing and thermal history, polyethylene terephthalate may exist both as an amorphous (transparent) and as a semi-crystalline polymer . The semicrystalline material might appear transparent (particle size less than 500 nm ) or opaque and white (particle size up to a few micrometers ) depending on its crystal structure and particle size.
One process for making PET uses bis(2-hydroxyethyl) terephthalate , which can be synthesized by the esterification reaction between terephthalic acid and ethylene glycol with water as a byproduct (this is also known as a condensation reaction), or by transesterification reaction between ethylene glycol and dimethyl terephthalate (DMT) with methanol as a byproduct. It can also be obtained by recycling of PET itself. [ 10 ] Polymerization is through a polycondensation reaction of the monomers (done immediately after esterification/transesterification) with water as the byproduct. [ 6 ]
Polyester fibres are widely used in the textile industry. The invention of the polyester fibre is attributed to J. R. Whinfield. [ 11 ] It was first commercialized in the 1940s by ICI , under the brand 'Terylene'. [ 12 ] Subsequently E. I. DuPont launched the brand 'Dacron'. As of 2022, there are many brands around the world, mostly Asian.
Polyester fibres are used in fashion apparel, often blended with cotton; as heat-insulation layers in thermal wear, sportswear and workwear; and in automotive upholstery.
Plastic bottles made from PET are widely used for soft drinks , both still and sparkling . For beverages that are degraded by oxygen, such as beer, a multilayer structure is used. PET sandwiches an additional polyvinyl alcohol (PVOH) or polyamide (PA) layer to further reduce its oxygen permeability.
Non-oriented PET sheet can be thermoformed to make packaging trays and blister packs . [ 13 ] Both amorphous PET and BoPET are transparent to the naked eye. Color-conferring dyes can easily be formulated into PET sheet.
PET is permeable to oxygen and carbon dioxide, and this imposes shelf-life limitations on contents packaged in PET. [ 14 ] : 104
In the early 2000s, the global PET packaging market grew at a compound annual growth rate of 9% to €17 billion in 2006. [ 15 ]
Biaxially oriented PET (BOPET) film (including brands like "Mylar") can be aluminized by evaporating a thin film of metal onto it to reduce its permeability, and to make it reflective and opaque ( MPET ). These properties are useful in many applications, including flexible food packaging and thermal insulation (such as space blankets ).
BOPET is used in the backsheet of photovoltaic modules . Most backsheets consist of a layer of BOPET laminated to a fluoropolymer or a layer of UV stabilized BOPET. [ 16 ]
PET is also used as a substrate in thin film solar cells.
PET can be compounded with glass fibre and crystallization accelerators, to make thermoplastic resins. These can be injection moulded into parts such as housings, covers, electrical appliance components and elements of the ignition system. [ 17 ]
PET is stoichiometrically a mixture of carbon and H 2 O , and therefore has been used in an experiment involving laser-driven shock compression which created nanodiamonds and superionic water . This could be a possible way of producing nanodiamonds commercially. [ 18 ] [ 19 ]
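The "carbon plus water" description can be checked with a little arithmetic on the repeat unit C 10 H 8 O 4 (a sketch added here for illustration): the 8 hydrogens and 4 oxygens correspond exactly to 4 H 2 O, leaving 10 carbon atoms.

```python
# Stoichiometry check for the PET repeat unit C10H8O4: it decomposes formally
# into 10 C + 4 H2O, consistent with the "mixture of carbon and H2O" claim.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}
repeat_unit = {"C": 10, "H": 8, "O": 4}

water_equivalents = repeat_unit["H"] / 2                  # 4 H2O per repeat unit
oxygen_left_over = repeat_unit["O"] - water_equivalents   # 0 -> all O accounted for

unit_mass = sum(ATOMIC_MASS[el] * n for el, n in repeat_unit.items())
carbon_fraction = ATOMIC_MASS["C"] * repeat_unit["C"] / unit_mass

print(water_equivalents, oxygen_left_over)       # 4.0 0.0
print(f"{carbon_fraction:.1%} carbon by mass")   # ≈ 62.5%
```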
PET was patented in 1941 by John Rex Whinfield , James Tennant Dickson and their employer the Calico Printers' Association of Manchester , England. E. I. DuPont de Nemours in Delaware, United States, first produced Dacron (PET fiber) in 1950 and used the trademark Mylar (boPET film) in June 1951 and received registration of it in 1952. [ 28 ] [ 29 ] It is still the best-known name used for polyester film. The current owner of the trademark is DuPont Teijin Films. [ 30 ]
In the Soviet Union, PET was first manufactured in the laboratories of the Institute of High-Molecular Compounds of the USSR Academy of Sciences in 1949, and its name "Lavsan" is an acronym thereof ( ла боратории Института в ысокомолекулярных с оединений А кадемии н аук СССР). [ 31 ]
The PET bottle was invented in 1973 by Nathaniel Wyeth [ 32 ] and patented by DuPont. [ 33 ]
PET in its most stable state is a colorless, semi-crystalline resin . However it is intrinsically slow to crystallize compared to other semicrystalline polymers . Depending on processing conditions it can be formed into either non-crystalline ( amorphous ) or crystalline articles. Its amenability to drawing in manufacturing makes PET useful in fibre and film applications. Like most aromatic polymers , it has better barrier properties [ clarification needed ] than aliphatic polymers . It is strong and impact-resistant. PET is hygroscopic and absorbs water. [ 34 ]
About 60% crystallization is the upper limit for commercial products, with the exception of polyester fibers. [ clarification needed ] Transparent products can be produced by rapidly cooling molten polymer below the glass transition temperature (T g ) to form a non-crystalline amorphous solid . [ 35 ] Like glass, amorphous PET forms when its molecules are not given enough time to arrange themselves in an orderly, crystalline fashion as the melt is cooled. While at room temperature the molecules are frozen in place, if enough heat energy is put back into them afterward by heating the material above T g , they can begin to move again, allowing crystals to nucleate and grow. This procedure is known as solid-state crystallization. [ citation needed ] Amorphous PET also crystallizes and becomes opaque when exposed to solvents , such as chloroform or toluene . [ 36 ]
A more crystalline product can be produced by allowing the molten polymer to cool slowly. Rather than forming one large single crystal, this material has a number of spherulites (crystallized areas) each containing many small crystallites (grains). Light tends to scatter as it crosses the boundaries between crystallites and the amorphous regions between them, causing the resulting solid to be translucent. [ citation needed ] Orientation also renders polymers more transparent. [ clarification needed ] This is why BOPET film and bottles are both crystalline, to a degree, and transparent. [ citation needed ]
PET has an affinity for hydrophobic flavors, and drinks sometimes need to be formulated with a higher flavor dosage, compared to those going into glass, to offset the flavor taken up by the container. [ 37 ] : 115 Where heavy-gauge PET bottles are returned for re-use, as in some EU countries, the propensity of PET to absorb flavors makes it necessary to conduct a "sniffer test" on returned bottles to avoid cross-contamination of flavors. [ 37 ] : 115
Different applications of PET require different degrees of polymerization, which can be obtained by modifying the process conditions. The molecular weight of PET is measured by solution viscosity. [ clarification needed ] The preferred measure is the intrinsic viscosity (IV) of the polymer. [ 38 ] Intrinsic viscosity, expressed in dℓ/g, is found by extrapolating the reduced viscosity (obtained from the dimensionless relative viscosity) to zero concentration. Shown below are the IV ranges for common applications: [ 39 ]
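As an illustration of how an IV value is obtained in practice (a sketch with made-up dilution-series data, not measurements from the source), the relative viscosity is measured at several concentrations, converted to reduced viscosity, and extrapolated linearly to zero concentration; the intercept is the intrinsic viscosity in dℓ/g.

```python
# Minimal Huggins-plot sketch: extrapolate reduced viscosity (eta_sp / c) to
# zero concentration; the intercept is the intrinsic viscosity in dL/g.

import numpy as np

concentrations = np.array([0.2, 0.4, 0.6, 0.8])            # g/dL (illustrative)
relative_viscosities = np.array([1.17, 1.36, 1.57, 1.80])  # dimensionless (illustrative)

reduced_viscosities = (relative_viscosities - 1.0) / concentrations  # dL/g
slope, intercept = np.polyfit(concentrations, reduced_viscosities, 1)

print(f"intrinsic viscosity ≈ {intercept:.2f} dL/g")  # ≈ 0.80 dL/g here
```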
PET is often copolymerized with other diols or diacids to optimize the properties for particular applications. [ 40 ] [ 41 ]
For example, cyclohexanedimethanol (CHDM) can be added to the polymer backbone in place of ethylene glycol . Since this building block is much larger (six additional carbon atoms) than the ethylene glycol unit it replaces, it does not fit in with the neighboring chains the way an ethylene glycol unit would. This interferes with crystallization and lowers the polymer's melting temperature. In general, such PET is known as PETG or PET-G (polyethylene terephthalate glycol-modified). It is a clear amorphous thermoplastic that can be injection-molded, sheet-extruded or extruded as filament for 3D printing . PETG can be colored during processing.
Another common modifier is isophthalic acid , replacing some of the 1,4-( para- ) linked terephthalate units. The 1,2-( ortho- ) or 1,3-( meta -) linkage produces an angle in the chain, which also disturbs crystallinity.
Such copolymers are advantageous for certain molding applications, such as thermoforming , which is used for example to make tray or blister packaging from co-PET film, or amorphous PET sheet (A-PET/PETA) or PETG sheet. On the other hand, crystallization is important in other applications where mechanical and dimensional stability are important, such as seat belts. For PET bottles, the use of small amounts of isophthalic acid, CHDM, diethylene glycol (DEG) or other comonomers can be useful: if only small amounts of comonomers are used, crystallization is slowed but not prevented entirely. As a result, bottles are obtainable via stretch blow molding ("SBM"), which are both clear and crystalline enough to be an adequate barrier to aromas and even gases, such as carbon dioxide in carbonated beverages.
Polyethylene terephthalate is produced from purified terephthalic acid (PTA) or, to a lesser extent, dimethyl terephthalate (DMT), together with (mono-)ethylene glycol (MEG). [ 42 ] [ 6 ] As of 2022, ethylene glycol is made from ethene derived from natural gas , while terephthalic acid comes from p-xylene made from crude oil . Typically an antimony or titanium compound is used as a catalyst, a phosphite is added as a stabilizer, and a bluing agent such as a cobalt salt is added to mask any yellowing. [ 43 ]
In the dimethyl terephthalate (DMT) process, DMT and excess ethylene glycol (MEG) are transesterified in the melt at 150–200 °C with a basic catalyst . Methanol (CH 3 OH) is removed by distillation to drive the reaction forward. Excess MEG is distilled off at higher temperature with the aid of vacuum. The second transesterification step proceeds at 270–280 °C, with continuous distillation of MEG as well. [ 42 ]
The reactions can be summarized as follows, first the transesterification and then the polycondensation:
C 6 H 4 (CO 2 CH 3 ) 2 + 2 HOCH 2 CH 2 OH → C 6 H 4 (CO 2 CH 2 CH 2 OH) 2 + 2 CH 3 OH
n C 6 H 4 (CO 2 CH 2 CH 2 OH) 2 → [(CO)C 6 H 4 (CO 2 CH 2 CH 2 O)] n + n HOCH 2 CH 2 OH
In the terephthalic acid process, MEG and PTA are esterified directly at moderate pressure (2.7–5.5 bar) and high temperature (220–260 °C). Water is eliminated in the reaction, and it is also continuously removed by distillation : [ 42 ]
n C 6 H 4 (CO 2 H) 2 + n HOCH 2 CH 2 OH → [(CO)C 6 H 4 (CO 2 CH 2 CH 2 O)] n + 2n H 2 O
Bio-PET is the bio-based counterpart of PET. [ 44 ] [ 45 ] Essentially in Bio-PET, the MEG is manufactured from ethylene derived from sugar cane ethanol . A better process based on oxidation of ethanol has been proposed, [ 46 ] and it is also technically possible to make PTA from readily available bio-based furfural . [ 47 ]
There are two basic molding methods for PET bottles, one-step and two-step. In two-step molding, two separate machines are used. The first machine injection molds the preform, which resembles a test tube, with the bottle-cap threads already molded into place. The body of the tube is significantly thicker, as it will be inflated into its final shape in the second step using stretch blow molding .
In the second step, the preforms are heated rapidly and then inflated against a two-part mold to form them into the final shape of the bottle. Preforms (uninflated bottles) are now also used as robust and unique containers themselves; besides novelty candy, some Red Cross chapters distribute them as part of the Vial of Life program to homeowners to store medical history for emergency responders.
The two-step process lends itself to third party production remote from the user site. The preforms can be transported and stored by the thousand in a much smaller space than would finished containers, for the second stage to be carried out on the user site on a 'just in time' basis.
In one-step machines, the entire process from raw material to finished container is conducted within one machine, making it especially suitable for molding non-standard shapes (custom molding), including jars, flat oval, flask shapes, etc. Its greatest merit is the reduction in space, product handling and energy, and far higher visual quality than can be achieved by the two-step system. [ citation needed ]
PET is subject to degradation during processing. If the moisture level is too high, hydrolysis will reduce the molecular weight by chain scission , resulting in brittleness. If the residence time and/or melt temperature are too high, then thermal degradation or thermooxidative degradation will occur, resulting in discoloration and reduced molecular weight, as well as the formation of acetaldehyde and of "gel" or "fish-eye" defects through cross-linking . Mitigation measures include copolymerisation with other monomers like CHDM or isophthalic acid , which lower the melting point and thus the melt temperature of the resin, as well as the addition of polymer stabilisers such as phosphites . [ 48 ]
Acetaldehyde , which can form by degradation of PET after mishandling of the material, is a colorless, volatile substance with a fruity smell. Although it forms naturally in some fruit, it can cause an off-taste in bottled water. As well as high temperatures (PET decomposes above 300 °C or 570 °F) and long barrel residence times, high pressures and high extruder speeds (which cause shear heating, raising the temperature) can also contribute to the production of acetaldehyde. Photo-oxidation can also cause the gradual formation of acetaldehyde over the object's lifespan. This proceeds via a Type II Norrish reaction . [ 49 ]
When acetaldehyde is produced, some of it remains dissolved in the walls of a container and then diffuses into the product stored inside, altering the taste and aroma. This is not such a problem for non-consumables (such as shampoo), for fruit juices (which already contain acetaldehyde), or for strong-tasting drinks like soft drinks. For bottled water, however, low acetaldehyde content is quite important, because if nothing masks the aroma, even extremely low concentrations (10–20 parts per billion in the water) of acetaldehyde can produce an off-taste. [ 50 ]
Commentary published in Environmental Health Perspectives in April 2010 suggested that PET might yield endocrine disruptors under conditions of common use and recommended research on this topic. [ 51 ] Proposed mechanisms include leaching of phthalates as well as leaching of antimony .
An article published in Journal of Environmental Monitoring in April 2012 concludes that antimony concentration in deionized water stored in PET bottles stays within EU's acceptable limit even if stored briefly at temperatures up to 60 °C (140 °F), while bottled contents (water or soft drinks) may occasionally exceed the EU limit after less than a year of storage at room temperature. [ 52 ] [ 53 ]
Antimony (Sb) is a metalloid element that is used as a catalyst in the form of compounds such as antimony trioxide (Sb 2 O 3 ) or antimony triacetate in the production of PET. After manufacturing, a detectable amount of antimony can be found on the surface of the product. This residue can be removed with washing. Antimony also remains in the material itself and can, thus, migrate out into food and drinks. Exposing PET to boiling or microwaving can increase the levels of antimony significantly, possibly above US EPA maximum contamination levels. [ 54 ] The drinking water limit assessed by WHO is 20 parts per billion (WHO, 2003), and the drinking water limit in the United States is 6 parts per billion. [ 55 ] Although antimony trioxide is of low toxicity when taken orally, [ 56 ] its presence is still of concern. The Swiss Federal Office of Public Health investigated the amount of antimony migration, comparing waters bottled in PET and glass: The antimony concentrations of the water in PET bottles were higher, but still well below the allowed maximum concentration. The Swiss Federal Office of Public Health concluded that small amounts of antimony migrate from the PET into bottled water, but that the health risk of the resulting low concentrations is negligible (1% of the " tolerable daily intake " determined by the WHO ). A later (2006) but more widely publicized study found similar amounts of antimony in water in PET bottles. [ 57 ] The WHO has published a risk assessment for antimony in drinking water. [ 56 ]
Fruit juice concentrates (for which no guidelines are established), however, that were produced and bottled in PET in the UK were found to contain up to 44.7 μg/L of antimony, well above the EU limits for tap water of 5 μg/L. [ 58 ]
Clothing sheds microfibres in use and during washing and machine drying. Plastic litter slowly breaks down into small particles. Microplastics present on riverbeds or the seabed can be ingested by small marine life, thus entering the food chain. As PET has a higher density than water, a significant amount of PET microparticles may be precipitated in sewage treatment plants. PET microfibers generated by apparel wear, washing or machine drying can become airborne and be dispersed into fields, where they are ingested by livestock or plants and end up in the human food supply. A study published in the journal Science of The Total Environment found PET accounted for 18% of microplastics in human lung tissue samples, and that there were 0.69 ± 0.84 microplastics per gram of lung tissue. [ 59 ] SAPEA have declared that such particles 'do not pose a widespread risk'. [ 60 ] PET is known to degrade when exposed to sunlight and oxygen. [ 61 ] As of 2016, little information exists regarding the lifetime of synthetic polymers in the environment. [ 62 ]
While most thermoplastics can, in principle, be recycled, PET bottle recycling is more practical than many other plastic applications because of the high value of the resin and the almost exclusive use of PET for widely used water and carbonated soft drink bottling. [ 63 ] [ 64 ] PET bottles lend themselves well to recycling (see below). In many countries PET bottles are recycled to a substantial degree, [ 63 ] for example about 75% in Switzerland. [ 65 ] The term rPET is commonly used to describe the recycled material, though it is also referred to as R-PET or post-consumer PET (POSTC-PET). [ 66 ] [ 67 ]
The prime uses for recycled PET are polyester fiber, strapping, and non-food containers. [ citation needed ] Because of the recyclability of PET and the relative abundance of post-consumer waste in the form of bottles, PET is also rapidly gaining market share as a carpet fiber. [ 68 ] PET, like many plastics, is also an excellent candidate for thermal disposal ( incineration ), as it is composed of carbon, hydrogen, and oxygen, with only trace amounts of catalyst elements (but no sulfur). [ citation needed ] In general, PET can either be chemically recycled into its original raw materials (PTA, DMT, and EG), destroying the polymer structure completely; [ citation needed ] mechanically recycled into a different form, without destroying the polymer; [ citation needed ] or recycled in a process that includes transesterification and the addition of other glycols, polyols, or glycerol to form a new polyol. The polyol from the third method can be used in polyurethane (PU foam) production, [ 69 ] [ 70 ] [ 71 ] [ 72 ] or epoxy-based products, including paints. [ 73 ]
In 2023 a process was announced for using PET as the basis for supercapacitor production. PET, being stoichiometrically carbon and H 2 O , can be turned into a form of carbon containing sheets and nanospheres, with a very high surface area. The process involves holding a mixture of PET, water, nitric acid , and ethanol at a high temperature and pressure for eight hours, followed by centrifugation and drying. [ 74 ] [ 75 ]
Significant investments were announced in 2021 and 2022 for chemical recycling of PET by glycolysis, methanolysis, [ 76 ] [ 77 ] and enzymatic recycling [ 78 ] to recover monomers. Initially these will also use bottles as feedstock but it is expected that fibres will also be recycled this way in future. [ 79 ]
PET is also a desirable fuel for waste-to-energy plants , as it has a high calorific value which helps to reduce the use of primary resources for energy generation. [ 80 ]
At least one species of bacterium in the genus Nocardia can degrade PET with an esterase enzyme. [ 81 ] Esterases are enzymes able to cleave the ester bonds that link the subunits of PET. [ 81 ] The initial degradation of PET can also be achieved by esterases expressed by Bacillus , as well as Nocardia . [ 82 ] Japanese scientists have isolated another bacterium, Ideonella sakaiensis , that possesses two enzymes which can break down PET into smaller pieces digestible by the bacteria. A colony of I. sakaiensis can disintegrate a plastic film in about six weeks. [ 83 ] [ 84 ] French researchers report developing an improved PET hydrolase that can depolymerize (break apart) at least 90 percent of PET in 10 hours, breaking it down into individual monomers . [ 85 ] [ 86 ] [ 87 ] Also, an enzyme based on a natural PET-ase was designed at the University of Texas at Austin with the help of a machine-learning algorithm to tolerate pH and temperature changes. This PET-ase was found to be able to degrade various products, breaking them down in as little as 24 hours. [ 88 ] [ 89 ] | https://en.wikipedia.org/wiki/Polyethylene_terephthalate
Polyethylenimine ( PEI ) or polyaziridine is a polymer with repeating units composed of the amine group and two carbon aliphatic CH 2 CH 2 spacers. Linear polyethyleneimines contain all secondary amines , in contrast to branched PEIs which contain primary, secondary and tertiary amino groups. Totally branched, dendrimeric forms were also reported. [ 1 ] PEI is produced on an industrial scale and finds many applications usually derived from its polycationic character. [ 2 ]
The linear PEI is a semi-crystalline solid at room temperature, while branched PEI is a fully amorphous polymer existing as a liquid at all molecular weights. Linear polyethyleneimine is soluble in hot water, at low pH, in methanol , ethanol , or chloroform . It is insoluble in cold water, benzene , ethyl ether , and acetone . Linear polyethylenimine has a melting point of around 67 °C. [ 3 ] Both linear and branched polyethylenimine can be stored at room temperature. Linear polyethylenimine is able to form cryogels upon freezing and subsequent thawing of its aqueous solutions. [ 3 ]
Branched PEI can be synthesized by the ring opening polymerization of aziridine . [ 4 ] Depending on the reaction conditions different degree of branching can be achieved. Linear PEI is available by post-modification of other polymers like poly(2-oxazolines) [ 5 ] or N -substituted polyaziridines. [ 6 ] Linear PEI was synthesised by the hydrolysis of poly(2-ethyl-2-oxazoline) [ 7 ] and sold as jetPEI. [ 8 ] The current generation in-vivo-jetPEI uses bespoke poly(2-ethyl-2-oxazoline) polymers as precursors. [ 9 ]
Polyethyleneimine finds many applications in products like: detergents, adhesives, water treatment agents and cosmetics. [ 10 ] Owing to its ability to modify the surface of cellulose fibres, PEI is employed as a wet-strength agent in the paper-making process . [ 11 ] It is also used as flocculating agent with silica sols and as a chelating agent with the ability to complex metal ions such as zinc and zirconium. [ 12 ] There are also other highly specialized PEI applications:
PEI has a number of uses in laboratory biology, especially tissue culture , but is also toxic to cells if used in excess. [ 13 ] [ 14 ] Toxicity is by two different mechanisms, [ 15 ] the disruption of the cell membrane leading to necrotic cell death (immediate) and disruption of the mitochondrial membrane after internalisation leading to apoptosis (delayed).
Polyethyleneimines are used in the cell culture of weakly anchoring cells to increase attachment. PEI is a cationic polymer; the negatively charged outer surfaces of cells are attracted to dishes coated in PEI, facilitating stronger attachments between the cells and the plate.
Poly(ethylenimine) was the second polymeric transfection agent discovered, [ 16 ] after poly-L-lysine. PEI condenses DNA into positively charged particles, which bind to anionic cell surface residues and are brought into the cell via endocytosis . Once inside the cell, protonation of the amines results in an influx of counter-ions and a lowering of the osmotic potential. Osmotic swelling results and bursts the vesicle releasing the polymer-DNA complex (polyplex) into the cytoplasm. If the polyplex unpacks then the DNA is free to diffuse to the nucleus. [ 17 ] [ 18 ]
Poly(ethylenimine) is also an effective permeabilizer of the outer membrane of Gram-negative bacteria . [ 19 ]
Both linear and branched polyethylenimine have been used for CO 2 capture, frequently impregnated over porous materials. The first use of PEI in CO 2 capture was aimed at improving CO 2 removal in spacecraft applications, with the polymer impregnated over a polymeric matrix. [ 20 ] After that, the support was changed to MCM-41, a hexagonal mesostructured silica, and large amounts of PEI were retained in the so-called "molecular basket". [ 21 ] MCM-41-PEI adsorbent materials led to higher CO 2 adsorption capacities than bulk PEI or the MCM-41 material considered individually. The authors claim that, in this case, a synergic effect takes place due to the high PEI dispersion inside the pore structure of the material. As a result of this improvement, further works studied the behaviour of these materials in more depth. Exhaustive works have focused on the CO 2 adsorption capacity as well as the CO 2 /O 2 and CO 2 /N 2 adsorption selectivity of several MCM-41-PEI materials. [ 22 ] [ 23 ] PEI impregnation has also been tested over different supports, such as a glass fiber matrix [ 24 ] and monoliths. [ 25 ] However, for appropriate performance under real post-combustion capture conditions (mild temperatures between 45 and 75 °C and the presence of moisture), it is necessary to use thermally and hydrothermally stable silica materials, such as SBA-15 , [ 26 ] which also presents a hexagonal mesostructure. Moisture and real-world conditions have also been tested when using PEI-impregnated materials to adsorb CO 2 from the air. [ 27 ]
A detailed comparison between PEI and other amino-containing molecules showed excellent cycling performance for the PEI-containing samples. Only a slight decrease in CO 2 uptake was registered when the temperature was increased from 25 to 100 °C, demonstrating a high contribution of chemisorption to the adsorption capacity of these solids. For the same reason, the adsorption capacity under diluted CO 2 was up to 90% of the value under pure CO 2 ; an unwanted high selectivity towards SO 2 was also observed. [ 28 ] Lately, many efforts have been made to improve PEI diffusion within the porous structure of the support. Better dispersion of PEI and a higher CO 2 efficiency (CO 2 /NH molar ratio) were achieved by impregnating a template-occluded PE-MCM-41 material rather than the perfectly cylindrical pores of a calcined material, [ 29 ] following a previously described route. [ 30 ] The combined use of organosilanes, such as aminopropyltrimethoxysilane (AP), and PEI has also been studied. The first approach used a combination of the two to impregnate porous supports, achieving faster CO 2 -adsorption kinetics and higher stability during reutilization cycles, but no higher efficiencies. [ 31 ] A newer method is the so-called "double-functionalization", based on the impregnation of materials previously functionalized by grafting (covalent bonding of organosilanes). Amino groups incorporated by both paths have shown synergic effects, achieving CO 2 uptakes of up to 235 mg CO 2 /g (5.34 mmol CO 2 /g). [ 32 ] CO 2 adsorption kinetics were also studied for these materials, showing adsorption rates similar to those of impregnated solids. [ 33 ] This is an interesting finding, given the smaller pore volume available in double-functionalized materials. Thus, it can also be concluded that their higher CO 2 uptake and efficiency compared to impregnated solids can be ascribed to a synergic effect of the amino groups incorporated by the two methods (grafting and impregnation) rather than to faster adsorption kinetics.
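The figures above mix two conventions, mass-based uptake (mg CO 2 /g sorbent) and molar uptake or amine efficiency (CO 2 /N ratio); the short sketch below shows the conversions involved (the amine loading used in the example is an assumed value, not taken from the cited studies).

```python
# Unit conversions used when comparing amine-based CO2 sorbents.

M_CO2 = 44.01  # g/mol

def uptake_mmol_per_g(uptake_mg_per_g: float) -> float:
    """Convert CO2 uptake from mg CO2 per g sorbent to mmol CO2 per g sorbent."""
    return uptake_mg_per_g / M_CO2

def amine_efficiency(uptake_mmol: float, amine_mmol_per_g: float) -> float:
    """CO2/N molar ratio, the 'CO2 efficiency' used to compare sorbents."""
    return uptake_mmol / amine_mmol_per_g

uptake = uptake_mmol_per_g(235.0)                 # ≈ 5.34 mmol/g, matching the text
print(round(uptake, 2))
print(round(amine_efficiency(uptake, 10.0), 2))   # ≈ 0.53 for an assumed 10 mmol N/g
```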
Poly(ethylenimine) and ethoxylated poly(ethylenimine) (PEIE) have been shown by Zhou and Kippelen et al. to be effective low-work-function modifiers for organic electronics. [ 34 ] They can universally reduce the work function of metals, metal oxides, conducting polymers, graphene, and so on. Importantly, low-work-function, solution-processed conducting polymers can be produced by PEI or PEIE modification. Based on this discovery, the polymers have been widely used in organic solar cells, organic light-emitting diodes, organic field-effect transistors, perovskite solar cells, perovskite light-emitting diodes, quantum-dot solar cells and light-emitting diodes, etc.
Polyethylenimine (PEI), a cationic polymer, has been widely studied and shown great promise as an efficient gene delivery vehicle. Likewise, the HIV-1 Tat peptide, a cell-permeable peptide, has been successfully used for intracellular gene delivery. [ 35 ] | https://en.wikipedia.org/wiki/Polyethylenimine |
Polyferrocenes are polymers containing ferrocene units. Ferrocene offers many advantages over pure hydrocarbons when used as a building block of macromolecular chemistry . The variety of possible substitutions at the ferrocene parent body results in a multitude of accessible polymers with interesting electronic and photonic properties. Many polyferrocenes are relatively easily accessible. Poly(1,1'-ferrocene-silane) can be prepared by ring-opening polymerization and has a variety of interesting properties, such as a high refractive index or semiconductor properties . Ring-opening polymerization usually leads to polymers containing ferrocene in the backbone. Besides the latter motif, ferrocene can be attached to the backbone as pendant unit as well. [ 1 ]
Polyferrocenes currently have no commercial applications, despite being the subject of research for nearly 50 years. Polyvinylferrocene gives electroactive films that have been investigated as glucose sensors. [ 2 ]
Satellites become charged through bombardment by charged particles of the solar wind . Charging can lead to an arc discharge, which can impair the function of the satellite through magnetic disturbances and material failure. To avoid these impairments, coatings of thin films of poly(1,1'-ferrocene-silane) on electrically weakly conductive or non-conductive plastic components have been examined. These carry off the charge generated by the irradiation and thus could protect the satellite from overloads. [ 3 ]
Polyferrocenes have attracted interest as high-refractive-index polymers , such as in antireflection coatings or for light-emitting diodes . [ 4 ] Poly(1,1'-ferrocene-silane)s, poly(1,1'-ferrocene-phosphane)s and polyferrocenes with phenyl side chains are polymers with unusually high refractive indices, with values of up to 1.74. These polyferrocenes show good film-forming ability. [ 5 ]
Poly(ferrocene-dimethylsilane)s (PFS) are promising as barrier materials in plasma-assisted reactive ion etching . Due to the presence of iron and silicon in the main chain, the polymer proved to be relatively stable compared to purely organic polymers. During the etching, a thin iron and silicon-containing oxide layer was formed on the surface of the poly(ferrocene-dimethylsilane). [ 6 ] | https://en.wikipedia.org/wiki/Polyferrocenes |
Polyfluorene is a polymer with formula (C 13 H 8 ) n , consisting of fluorene units linked in a linear chain — specifically, at carbon atoms 2 and 7 in the standard fluorene numbering. It can also be described as a chain of benzene rings linked in para positions (a polyparaphenylene ) with an extra methylene bridge connecting every pair of rings.
The two benzene rings in each unit make polyfluorene an aromatic hydrocarbon, specifically a conjugated polymer , and give it notable optical and electrical properties , such as efficient photoluminescence .
When spoken about as a class, polyfluorenes are derivatives of this polymer, obtained by replacing some of the hydrogen atoms by other chemical groups , and/or by substituting other monomers for some fluorene units. These polymers are being investigated for possible use in light-emitting diodes , field-effect transistors , plastic solar cells , and other organic electronic applications. They stand out among other luminescent conjugated polymers because the wavelength of their light output can be tuned through the entire visible spectrum by appropriate choice of the substituents .
Fluorene, the repeat unit in polyfluorene derivatives, was isolated from coal tar and discovered by Marcellin Berthelot prior to 1883. [ 1 ] [ 2 ] [ 3 ] Its name originates from its interesting fluorescence (and not from fluorine , which is not one of its elements).
Fluorene became the subject of chemical-structure related color variation (visible rather than luminescent), among other things, throughout the early to mid-20th century. Since it was an interesting chromophore researchers wanted to understand which parts of the molecule were chemically reactive , and how substituting these sites influenced the color. For instance, by adding various electron donating or electron accepting moieties to fluorene, and by reacting with bases , researchers were able to change the color of the molecule. [ 1 ] [ 4 ] [ 5 ]
The physical properties of the fluorene molecule were recognizably desirable for polymers; as early as the 1970s researchers began incorporating this moiety into polymers. For instance, because of fluorene’s rigid, planar shape a polymer containing fluorene was shown to exhibit enhanced thermo-mechanical stability. [ 6 ] However, more promising was integrating the optoelectronic properties of fluorene into a polymer. Reports of the oxidative polymerization of fluorene (into a fully conjugated form) exist from at least 1972. [ 7 ] However, it was not until after the highly publicized high conductivity of doped polyacetylene, presented in 1977 by Heeger, MacDiarmid and Shirakawa , that substantial interest in the electronic properties of conjugated polymers took off.
As interest in conducting plastics grew, fluorene again found application. The aromatic nature of fluorene makes it an excellent candidate component of a conducting polymer because it can stabilize and conduct a charge; in the early 1980s fluorene was electropolymerized into conjugated polymer films with conductivities of 10 −4 S cm −1 . [ 8 ] [ 9 ] The optical properties (such as variable luminescence and visible light spectrum absorption) that accompany the extended conjugation in polymers of fluorene have become increasingly attractive for device applications. Throughout the 1990s and into the 2000s, many devices such as organic light-emitting diodes (OLEDs), [ 10 ] organic solar cells , [ 11 ] organic thin film transistors , [ 12 ] and biosensors [ 13 ] [ 14 ] have all taken advantage of the luminescent, electronic and absorptive properties of polyfluorenes.
Polyfluorenes are an important class of polymers which have the potential to act as both electroactive and photoactive materials. This is in part due to the shape of fluorene. Fluorene is generally planar; [ 15 ] [ 16 ] p-orbital overlap at the linkage between its two benzene rings results in conjugation across the molecule. This in turn allows for a reduced band gap as the excited state molecular orbitals are delocalized . [ 17 ] Since the degree of delocalization and the spatial location of the orbitals on the molecule is influenced by the electron donating (or withdrawing) character of its substituents, the band gap energy can be varied. This chemical control over the band gap directly influences the color of the molecule by limiting the energies of light which it absorbs. [ 18 ]
Interest in polyfluorene derivatives has increased because of their high photoluminescence quantum efficiency, high thermal stability, and their facile color tunability, obtained by introducing low-band-gap co-monomers. Research in this field has increased significantly due to its potential application in tuning organic light-emitting diodes (OLEDs). In OLEDs, polyfluorenes are desirable because they are the only family of conjugated polymers that can emit colors spanning the entire visible range with high efficiency and low operating voltage. Furthermore, polyfluorenes are relatively soluble in most solvents , making them ideal for general applications. [ 19 ]
Another important quality of polyfluorenes is their thermotropic liquid crystallinity, which allows the polymers to align on rubbed polyimide layers. Thermotropic liquid crystallinity refers to the polymers' ability to exhibit a phase transition into the liquid crystal phase as the temperature is changed. This is very important to the development of liquid crystal displays (LCDs) because the fabrication of liquid crystal displays requires that the liquid-crystal molecules at the two glass surfaces of the cell be aligned parallel to the two polarizer foils. [ 20 ] This can only be done by coating the inner surfaces of the cell with a thin, transparent film of polyimide which is then rubbed with a velvet cloth. Microscopic grooves are then generated in the polyimide layer, and the liquid crystal in contact with the polyimide, the polyfluorene, can align in the rubbing direction. In addition to LCDs, polyfluorene can also be used to make light-emitting diodes (LEDs). Polyfluorene has led to LEDs that can emit polarized light with polarization ratios of more than 20 and with brightness of 100 cd m −2 . Even though this is very impressive [ according to whom? ] , it is not sufficient for general applications. [ 21 ]
Polyfluorenes often show both excimer and aggregate formation upon thermal annealing or when current is passed through them. Excimer formation involves the generation of dimerized units of the polymer which emit light at lower energies than the polymer itself. This hinders the use of polyfluorenes for most applications, including light-emitting diodes (LED). When excimer or aggregate formation occurs this lowers the efficiency of the LEDs by decreasing the efficiency of charge carrier recombination. Excimer formation also causes a red shift in the emission spectrum . [ 22 ]
Polyfluorenes can also undergo decomposition. There are two known ways in which decomposition can occur. The first involves the oxidation of the polymer that leads to the formation of an aromatic ketone, quenching the fluorescence. The second decomposition process results in aggregation leading to a red-shifted fluorescence, reduced intensity, exciton migration and relaxation through excimers. [ 23 ]
Researchers have attempted to eliminate excimer formation and enhance the efficiency of polyfluorenes by copolymerizing polyfluorene with anthracene and end-capping polyfluorenes with bulky groups which could sterically hinder excimer formation. Additionally, researchers have tried adding large substituents at the nine position of the fluorene in order to inhibit excimer and aggregate formation. Furthermore, researchers have tried to improve LEDs by synthesizing fluorene-triarylamine copolymers and other multilayer devices that are based on polyfluorenes that can be cross-linked. These have been found to have brighter fluorescence and reasonable efficiencies. [ 24 ]
Aggregation has also been combated by varying the chemical structure. For example, when conjugated polymers aggregate, which is natural in the solid state, their emission can be self-quenched, reducing luminescent quantum yields and reducing luminescent device performance. In opposition to this tendency, researchers have used tri-functional monomers to create highly branched polyfluorenes which do not aggregate due to the bulkiness of the substituents. This design strategy has achieved luminescent quantum yields of 42% in the solid state. [ 25 ] This solution reduces the ease of processability of the material because branched polymers have increased chain entanglement and poor solubility.
Another problem commonly encountered by polyfluorenes is an observed broad, parasitic green emission which detracts from the color purity and efficiency needed for an OLED. [ 18 ] [ 19 ] [ 26 ] Initially attributed to excimer emission, this green emission has been shown to be due to the formation of ketone defects along the fluorene polymer backbone (oxidation of the nine position on the monomer) when there is incomplete substitution at the nine position of the fluorene monomer. [ 18 ] Routes to combat this involve ensuring full substitution of the monomer's active site, or including aromatic substituents. [ 18 ] These solutions may present structures that lack optimal bulkiness or may be synthetically difficult.
Conjugated polymers, such as polyfluorene, can be designed and synthesized with different properties for a wide variety of applications. [ 19 ] The color of the molecules can be designed through synthetic control over the electron donating or withdrawing character of the substituents on fluorene or the comonomers in polyfluorene. [ 20 ] [ 27 ] [ 28 ]
Solubility of the polymers is important because solution-state processing is very common. Since conjugated polymers, with their planar structure, tend to aggregate, bulky side chains are added (to the 9 position of fluorene) to increase the solubility of the polymer.
The earliest polymerizations of fluorene were oxidative polymerization with AlCl 3 [ 7 ] or FeCl 3 , [ 29 ] [ 30 ] and more commonly electropolymerization. [ 8 ] [ 9 ] Electropolymerization is an easy route to obtain thin, insoluble conducting polymer films. However, this technique has a few disadvantages in that it does not provide controlled chain growth polymerizations, and processing and characterization are difficult as a result of its insolubility. Oxidative polymerization produces a similarly poor site-selectivity on the monomer for chain growth resulting in poor control over the regularity of the polymers structure. However, oxidative polymerization does produce soluble polymers (from side-chain containing monomers) which are more easily characterized with nuclear magnetic resonance .
The design of polymeric properties requires great control over the structure of the polymer. For instance, low band gap polymers require regularly alternating electron donating and electron accepting monomers. [ 11 ] [ 18 ] More recently, many popular cross-coupling chemistries have been applied to polyfluorenes and have enabled controlled polymerization; Palladium-catalyzed coupling reactions such as Suzuki coupling , [ 25 ] [ 28 ] [ 31 ] [ 32 ] Heck coupling , [ 33 ] etc., as well as nickel catalyzed [ 20 ] Yamamoto [ 10 ] [ 27 ] and Grignard [ 34 ] coupling reactions have been applied to polymerization of fluorene derivatives. Such routes have enabled excellent control over the properties of polyfluorenes; the fluorene-thiophene-benzothiadiazole copolymer shown above, with a band gap of 1.78 eV when the side chains are alkoxy , [ 11 ] appears blue because it is absorbing in the red wavelengths.
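The link between the 1.78 eV band gap and absorption "in the red wavelengths" is just the photon energy–wavelength relation λ (nm) ≈ 1240 / E (eV); a one-line check (added for illustration) is shown below.

```python
# Photon energy to wavelength: lambda (nm) = h*c / E ≈ 1239.84 / E(eV).

PLANCK_EV_NM = 1239.84  # h*c expressed in eV·nm

def absorption_onset_nm(band_gap_ev: float) -> float:
    """Approximate absorption-onset wavelength for a given optical band gap."""
    return PLANCK_EV_NM / band_gap_ev

print(round(absorption_onset_nm(1.78)))  # ≈ 697 nm: onset in the red, so the film appears blue
```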
Modern coupling chemistries allow other properties of polyfluorenes to be controlled through implementation of complex molecular designs.
The polymer structure pictured above has excellent photoluminescent quantum yields (partly due to its fluorene monomer), excellent stability (due to its oxadiazole comonomer), good solubility (due to its many branched alkyl side chains), and an amine-functionalized side chain for ease of tethering to other molecules or to a substrate. [ 13 ] The luminescent color of polyfluorenes can be changed, for example from blue to green-yellow, by adding functional groups which participate in excited state intramolecular proton transfer. Exchanging the alkoxy side chains for alcohol side groups allows for energy dissipation (and a red-shift in emission) through reversible transfer of a proton from the alcohol to the nitrogen (on the oxadiazole). These complicated molecular structures were engineered to have these properties and could only be realized through careful control of their ordering and side group functionality.
In recent years many industrial efforts have focused on tuning the color of lights using polyfluorenes. It was found that by doping green or red emitting materials into polyfluorenes one could tune the color emitted by the polymers. Since polyfluorene homopolymers emit higher energy blue light, they can transfer energy via Förster resonance energy transfer (FRET) to lower energy emitters. In addition to doping, color of polyfluorenes can be tuned by copolymerizing the fluorene monomers with other low band gap monomers. Researchers at the Dow Chemical Company synthesized several fluorene-based copolymers by alternating copolymerization using 5,5-dibromo-2,2-bithiophene which showed yellow emission and 4,7-dibromo-2,1,3-benzothiadiazole, which showed green emission. Other copolymerizations are also suitable; researchers at IBM performed random copolymerization of fluorene with 3,9(10)-dibromoperylene,4,4-dibromo-R-cyanostilbene, and 1,4-bis(2-(4-bromophenyl)-1-cyanovinyl)-2-(2-ethylhexyl)-5-methoxybenzene. Only a small amount of the co-monomer, approximately 5%, was needed to tune the emission of the polyfluorene from blue to yellow. This example further illustrates that by introducing monomers that have a lower band gap than the fluorene monomer, one can tune the color that is emitted by the polymer. [ 20 ]
Substitution at the nine position with various moieties has also been examined as a means to control the color emitted by polyfluorene. In the past, researchers tried putting alkyl substituents on the nine position; however, it has been found that with bulkier groups, such as alkoxyphenyl groups, the polymers have enhanced blue-emission stability and superior polymer light-emitting diode performance (compared to polymers which have alkyl substituents at the nine position). [ 21 ]
Polyfluorenes are also used in polymer solar cells because of their amenability to property tuning. Copolymerization of fluorene with other monomers allows researchers to optimize the absorption and electronic energy levels as a means to increase the photovoltaic performance. For instance, by lowering the band gap of polyfluorenes, the absorption spectrum of the polymer can be adjusted to coincide with the maximum photon flux region of the solar spectrum . [ 11 ] [ 36 ] This helps the solar cell absorb more of the sun's energy and increases its energy conversion efficiency ; donor-acceptor structured copolymers of fluorene have achieved efficiencies above 4% when their absorption edge was pushed to 700 nm. [ 37 ]
The voltage of polymer solar cells has also been increased through the design of polyfluorenes. These devices are typically produced by blending electron-accepting and electron-donating molecules which help separate charge to produce power. In polymer blend solar cells, the voltage produced by the device is determined by the difference between the electron-donating polymer's highest occupied molecular orbital (HOMO) energy level and the electron-accepting molecule's lowest unoccupied molecular orbital (LUMO) energy level. By adding electron-withdrawing pendant molecules to conjugated polymers, their HOMO energy level can be lowered. [ 36 ] For instance, by adding electronegative groups on the end of conjugated side chains, researchers lowered the HOMO of a polyfluorene copolymer to −5.30 eV and increased the voltage of a solar cell to 0.99 V. [ 36 ] [ 37 ] [ 38 ]
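One commonly used rule of thumb connecting these energy levels to the open-circuit voltage is the empirical estimate V oc ≈ (|HOMO donor | − |LUMO acceptor |)/e − 0.3 V; the sketch below applies it with an assumed fullerene-acceptor LUMO of −4.0 eV, which is not a value given in the text, so the agreement with 0.99 V is illustrative only.

```python
# Hedged empirical estimate of open-circuit voltage from frontier energy levels.
# The 0.3 V empirical loss term and the -4.0 eV acceptor LUMO are assumptions.

EMPIRICAL_LOSS_V = 0.3

def estimate_voc(donor_homo_ev: float, acceptor_lumo_ev: float) -> float:
    """Rule-of-thumb open-circuit voltage for a donor/acceptor blend solar cell."""
    return abs(donor_homo_ev) - abs(acceptor_lumo_ev) - EMPIRICAL_LOSS_V

print(round(estimate_voc(-5.30, -4.0), 2))  # ≈ 1.0 V, of the order of the 0.99 V quoted above
```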
Typical polymer solar cells utilize fullerene molecules as electron acceptors because of their low LUMO energy level (high electron affinity ). However the tunability of polyfluorenes allows their LUMO to be lowered to a level appropriate for use as an electron acceptor. Thus, polyfluorene copolymers have also been used in polymer:polymer blend solar cells, where their electron accepting, electron conducting and light absorbing properties permit device performance. [ 39 ] [ 40 ] | https://en.wikipedia.org/wiki/Polyfluorene |
Polyfullerene is a basic polymer of the C 60 monomer group, in which fullerene segments are connected via covalent bonds into a polymeric chain without side or bridging groups. They are called intrinsic polymeric fullerenes , or more often all C 60 polymers .
Fullerene can be part of a polymer chain in many different ways. Fullerene-containing polymers are divided into following structural categories:
Fullerene is a relatively new substance in the chemical sciences. Buckminsterfullerene itself was discovered in 1985, [ 1 ] and the first fullerene-containing polymers were reported at least six years later. [ 2 ]
The main milestones in the use of fullerene in polymer chemistry are listed below:
The high content of double bonds in the fullerene molecule (30 double bonds in Buckminsterfullerene) leads to crosslinking and the formation of regioisomers . Polymerization without any sophisticated control of the forming structure leads to a highly randomized polymer network. Thus, linking units of a second monomer are needed to prepare linear copolymers (see main-chain polymers).
This group includes heteroatomic C 60 polymers containing non-carbon atoms in polyfullerene chains. [ 8 ]
This section describes most of the main structural types of fullerene-containing polymers.
Polyfullerenes can be prepared via many polymerization mechanisms. Research is mainly focused on photopolymerization , [ 9 ] polymerization under high pressure [ 10 ] and charge-transfer polymerization. [ 11 ]
The most likely connection of fullerene units is [2+2] cycloaddition of two double bonds of the benzene parts of fullerene molecules. Cycloaddition provides a cyclobutane ring connecting two fullerene molecules. [ 12 ] [ 13 ]
Main-chain polymers are characterized by the presence of fullerene units in the polymer backbone. They are not heteroatomic fullerene homopolymers but linear fullerene copolymers.
The structure can be described as necklace-type . One approach to achieving fullerene main-chain polymers is copolymerizing fullerene with a difunctional monomer. The second option is polycondensation of a bifunctionalized fullerene with a monomer bearing compatible functional groups.
Fullerene copolymers can be obtained through standard polymerization techniques used for industrially standard polymers. Examples of the first approach are Diels-Alder addition and free-radical copolymerization. Fullerene can be copolymerized with methyl methacrylate by initiation with azobisisobutyronitrile (AIBN). [ 14 ]
In Diels-Alder copolymerization, fullerene acts as a dienophile, reacting with a diene to form a cyclohexene ring. The figure below shows the Diels-Alder reaction with the simplest diene, buta-1,3-diene. The comonomer must contain two pairs of conjugated double bonds in order to react with two fullerene molecules and give linear polymer chains. The monomers used are usually bulkier than conventional monomers in order to compensate for the space requirements of the fullerene spheres.
Most fullerene polymers fall into this category. [ 8 ] Similarly to the previous polymer type, two synthetic approaches are available: first, bonding fullerene spheres onto an already polymerized chain, or second, polymerizing a monomer unit that already bears fullerene.
An example of the second approach is ring-opening metathesis polymerisation (ROMP) of norbornene bearing C 60 or copolymerization of pure norbornene and C 60 functionalized norbornene [ 15 ] .
As mentioned earlier, Buckminsterfullerene is capable of multiple additions, and basic polymerization conditions lead to a polymer network. Fullerene behaves the same way in copolymerization. In the free radical copolymerization of styrene and C 60 fullerene, the resulting copolymer is cross-linked and heterogeneous. [ 16 ]
An easy preparation of cross-linked fullerene polymers is copolymerization with polyurethanes. In this technique, fullerenol bearing up to 44 hydroxyl groups, C 60 (OH) 4 – 44 , [ 17 ] and di- or tri-isocyanate prepolymers are used as starting substances. Successful syntheses were conducted in a mixture of dimethylformamide (DMF) and tetrahydrofuran (THF) (1:3) at 60 °C. [ 18 ]
Fullerene end-capped polymers
These are sometimes incorrectly called "telechelic" polymers, but telechelic polymers have reactive functional end-groups. They can be synthesized by incorporating fullerenes onto the ends of polymerized chains, or by growing a polymer chain from a functionalized fullerene derivative followed by end closure. Introducing fullerene spheres onto the ends of a macromolecule significantly increases the hydrophobicity of the original polymer.
Star fullerene polymers can be prepared by two major approaches.
Reported star fullerene polymers were prepared by anionic copolymerization with polystyrene to form C 60 ((CH 2 CH(C 6 H 5 )) x ) n , where n stands for the number of polystyrene star "arms", from 2 to 6. [ 19 ] [ 20 ] The second approach is growing polymer chains directly from the fullerene derivative C 60 Cl n ( n = 16–20) by atom transfer radical polymerization. The chloro-fullerene derivative effectively works as an ATRP initiator . Countless polymers can be used for the star arms.
Polyphenylalkyne polymers can be used as an example, since they give photoemitting macromolecules when grafted onto fullerene. C 60 -poly(1-phenyl-1-propyne) can be prepared via a tungsten-catalyzed metathesis reaction, connecting pre-formed poly(1-phenyl-1-propyne) onto the fullerene by carbene addition, resulting in a cyclopropane linking ring. [ 21 ] Fullerene acts as a cocatalyst, since the tungsten catalyst (WCl 6 -Ph 4 Sn) is not able to polymerize 1-phenyl-1-propyne by itself.
Polyfullerenes are currently in an early research phase and real-world applications or even industrial production solutions are yet to be found. The main reasons for this are the novelty of combining fullerene chemistry with polymer chemistry and the fact that fullerene can be currently synthesized on a scale of a few grams. All-C 60 polymers exhibit practically no solubility, thus preventing proper testing of processability and chemical properties.
The following text refers only to potential applications of fullerene polymers, based on the established properties of particular macromolecules.
Fullerene itself stands out in the class of organic compounds because of its electronic properties. Current research studies the utilization of fullerene by bonding it onto a suitable polymeric substrate; the practical reasons are the easy processability of polymers and their low price in comparison to pure C 60 fullerene.
Polymer backbones bearing fullerene spheres exhibit good or great photoconductivity and even generate photocurrent when exposed to white light. [ 22 ] [ 23 ]
C 60 - polyvinylcarbazole (C 60 –PVK) exhibits photoinduced electron transfer within the polymer, which could be used for digital rewritable memory components. A prototype of such a component made of indium tin oxide , the fullerene polymer and aluminium (ITO/ C 60 –PVK /Al) was capable of reading, writing and erasing information about 100 million times. [ 24 ]
A polyvinylcarbazole polymer grown from fullerene polychloride (C 60 Cl n ) was observed to increase the intensity of the light radiated by an electroluminescent device. This star polymer with three arms acts as a hole-transporting layer for the semiconductor parts of a device. [ 25 ]
On the other hand, hole-trapping materials affect electroluminescence in the same way. Double-cable polymers are also candidates for functional layers in OLED displays. Adding 1 wt% of such a polymer to a basic OLED material increased the luminescence of the diode. [ 26 ] Very promising hole-trapping materials are polyacetylene-backbone polymers with fullerene in combination with different electron-accepting groups in the branches. [ 27 ]
A star copolymer (PS) x C 60 (PMMA) y ( polystyrene and polymethylmethacrylate being the different star "arms") acted as an active electroluminescence layer. It improved the emission of a semiconductor electroluminescent device by up to 20 times. [ 28 ] C 60 -poly(1-phenyl-1-propyne) is also reported to exhibit light emission. [ 29 ] The fullerene moiety doubled the emission of blue light in comparison to pure poly(1-phenyl-1-propyne). The stability and processability of this polymer are very good.
Fullerene polymers are widely studied in organic solar cells as active layers for new-generation photovoltaic panels . Examples are homopolymers of C 60 -polystyrene [ 30 ] and C 60 -polyethyleneglycols [ 31 ] or C 60 copolymers prepared by ROMP polymerization. [ 32 ] [ 33 ] The current efficiency of converting incoming solar radiation to electricity is about 3%. [ 32 ]
Another relevant polymer type, in which the donor and acceptor functions are combined intrinsically, is the “double-cable” polymer. These are brush-like structures consisting of a π-electron conjugated backbone (p-type part) bearing electron-accepting branches (n-type part). [ 34 ] [ 35 ]
Particular fullerene (co)polymers exhibit an optical limiting property, meaning that they block intense light flux passing through them while low-intensity light flux is unaffected. This is useful for light-control components in optics and for sensor or eye protection. [ 36 ] [ 37 ] [ 38 ] [ 39 ]
Fullerene copolymerized with palladium has also shown some practical promise: (C 60 Pd 3 ) n , owing to the palladium content on its surface, exhibits a catalytic effect in the hydrogenation of alkenes [ 40 ] and could lead to the development of new catalytic systems and products.
(C 60 Pd) n polymers can adsorb gases, making them useful as adsorbents for volatile and toxic species. For example, a strong affinity for toluene has been demonstrated. [ 41 ] The palladium atoms in the backbone are partially positive and thus attract the π-electrons of the aromatic core of toluene.
Introducing the correct amount of fullerene as side groups onto poly(2,6-dimethyl-1,4-phenylene oxide) ( PPO ) increases the permeability of gas-separation membranes by 80% in comparison with pure PPO; the bulky fullerene probably increases the free volume of PPO. [ 42 ]
Materials originating from polyurethane synthesis exhibit improved thermal and mechanical stability. [ 43 ] Fullerene-containing polyurethanes also exhibit a strong optical response and are potentially applicable to optical signal processing. [ 38 ]
Linear polymer chains containing fullerene undergo crosslinking. The resulting material exhibits elastomeric behavior, with 10 times higher tensile strength and 17 times higher elongation at break than the same material without fullerene. [ 8 ]
Blending fullerene end-capped polymers ( polyethylene glycols , for example) with H-donating polymers ( polyvinylchloride , poly( p -vinyl phenol) , polymethylmethacrylate , etc.) leads to an enhancement of the mechanical properties of the H-donating polymers.
Fullerene end-capped poly(N-isopropylacrylamide) is a water-soluble polymer with a tendency to form clusters. [ 44 ] It is a very good scavenger of free radicals , and it can be used for controlling radical polymerizations .
Fullerene polymers are potential candidates for establishing a polymer circular economy .
Depolymerizable polymers are one hope of polymer recycling . C 60 fullerene copolymerized with [4,4′-bithiazole]-2,2′-bis(diazonium)chloride (see Magnetic behavior) was observed to depolymerize in the temperature range of 60–75 °C. Polymerization and depolymerization can be carried out several times before the initial components degrade. [ 45 ] For practical use, both the depolymerization temperature and the difference between the polymerization and depolymerization temperatures still need to be increased.
Basic fullerene polymers without polar functional groups are strongly hydrophobic and thus unsuitable for medicinal use in the human body.
An example of water-soluble derivatives is the polyfullerocyclodextrins. They are prepared by the reaction of β-cyclodextrin complexes with fullerene. They exhibit excellent DNA -cleaving activity [ 46 ] (in the presence of visible light they cleave DNA quantitatively), a phenomenon that could be used for eliminating cancer cells.
The introduction of hydrophilic groups into the macromolecule is the principle behind preparing water-soluble polymers. Examples of backbones for water-soluble fullerene side-chain polymers include poly( maleic anhydride - co - vinyl acetate ) and pullulan . [ 47 ]
Polymers with a C 60 backbone showing ferromagnetic properties have been reported in the literature, [ 48 ] although fullerene itself is antiferromagnetic . An example of a successful synthesis of a ferromagnetic C 60 –polymer uses [4,4′-bithiazole]-2,2′-bis(diazonium)dichloride, C 60 and FeSO 4 . | https://en.wikipedia.org/wiki/Polyfullerene |
Polygenic adaptation describes a process in which a population adapts through small changes in allele frequencies at hundreds or thousands of loci . [ 1 ]
Many traits in humans and other species are highly polygenic , i.e., affected by standing genetic variation at hundreds or thousands of loci. Under normal conditions, the genetic variation underlying such traits is governed by stabilizing selection , in which natural selection acts to hold the population close to an optimal phenotype . However, if the phenotypic optimum changes, then the population can adapt by small directional shifts in allele frequencies spread across all the variants that affect the trait . Polygenic adaptation can occur relatively quickly (as described by the breeder's equation ); however, it is difficult to detect from genomic data because the changes in allele frequencies at individual loci are very small.
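For orientation, the breeder's equation referred to above is usually written as R = h 2 S {\displaystyle R=h^{2}S} , where R is the response to selection per generation, S is the selection differential, and h 2 {\displaystyle h^{2}} is the narrow-sense heritability of the trait; this standard quantitative-genetics form is quoted here for convenience and is not taken from the sources cited in this article.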
Polygenic adaptation represents an alternative to adaptation by selective sweeps . In classic selective sweep models, a single new mutation sweeps through a population to fixation , purging variation from a region of linkage around the selected site. [ 2 ] More recent models have focused on partial sweeps, and on soft sweeps [ 3 ] - i.e., sweeps that start from standing variation or comprise multiple sweeping variants at the same locus. All of these models focus on adaptation through genetic changes at a single locus and they generally assume large changes in allele frequencies.
The concept of polygenic adaptation is related to classical models from quantitative genetics . However, traditional models in quantitative genetics usually abstract away the contributions of individual loci by focusing instead on means and variances of genetic scores. In contrast, population genetics models and data analysis have generally emphasized models of adaptation through sweeps at individual loci. The modern formulation of polygenic adaptation in population genetics was developed in a pair of 2010 review articles. [ 1 ] [ 4 ]
Polygenic adaptation is presumed to be the dominant mode of adaptation in artificial selection , when plants or animals undergo rapid responses to selective pressures. However, in most cases the actual genetic loci involved are not yet known (but see e.g., [ 5 ] ).
At present the best-understood examples of polygenic adaptation are in humans, and particularly for height, a trait that can be interpreted using data from genome-wide association studies . In a 2012 paper, Joel Hirschhorn and colleagues showed that there was a consistent tendency for the "tall" alleles at genome-wide significant loci to be at higher frequencies in northern Europeans than in southern Europeans. [ 6 ] They interpreted this observation to indicate that the difference in average height between northern and southern Europeans is at least partly genetic (as opposed to environmental) and that it was driven by selection. This result has been replicated by subsequent studies; [ 7 ] [ 8 ] [ 9 ] [ 10 ] however, the environmental factor driving the selection remains unclear. A study of recent polygenic adaptation in the English has shown that selection on height has had small effects on allele frequencies (<1%) across most of the genome, and found evidence for polygenic adaptation in a wide variety of other traits as well, including selection for increased infant birth size and increased female hip and waist size. [ 10 ] | https://en.wikipedia.org/wiki/Polygenic_adaptation |
Polygons are used in computer graphics to compose images that are three-dimensional in appearance, [ 1 ] and are one of the most popular geometric building blocks in computer graphics. [ 2 ] Polygons are built up from vertices and are typically used in the form of triangles.
A model 's polygons can be rendered and viewed simply in a wire frame model , where only the outlines of the polygons are shown, as opposed to having the polygons shaded. This is the reason for a polygon stage in computer animation . The polygon count refers to the number of polygons being rendered per frame .
Beginning with the fifth generation of video game consoles , the use of polygons became more common, and with each succeeding generation, polygonal models became increasingly complex.
| https://en.wikipedia.org/wiki/Polygon_(computer_graphics) |
A polygon soup is a set of unorganized polygons, typically triangles, before the application of any structuring operation, such as octree grouping. [ 1 ]
The term must not be confused with the "PolySoup" [ 2 ] operation available in the 3D package Houdini , whose goal is to optimize the storage space needed by some piece of geometry through the reduction of the underlying number of polygon soups used in its representation. This is accomplished by removing redundant data points (e.g. vertices with the same position) without altering the topology or assigned properties of the optimized geometry in relation to the input one. As a result of this optimization, there can be savings in the storage and processing of large polygon meshes. These savings can have a bigger impact the larger the input data is. For instance, fluid simulations, particle simulations, rigid-body simulations, environments, and character models can reach into the millions of polygons for feature films, incurring large storage and read/write costs. In those cases, reducing the number of polygon soups required to represent such data can lead to important savings in storage use and compute time. [ further explanation needed ]
| https://en.wikipedia.org/wiki/Polygon_soup |
In chemistry the polyhedral skeletal electron pair theory (PSEPT) provides electron counting rules useful for predicting the structures of clusters such as borane and carborane clusters. The electron counting rules were originally formulated by Kenneth Wade , [ 1 ] and were further developed by others including Michael Mingos ; [ 2 ] they are sometimes known as Wade's rules or the Wade–Mingos rules . [ 3 ] The rules are based on a molecular orbital treatment of the bonding. [ 4 ] [ 5 ] [ 6 ] [ 7 ] These rules have been extended and unified in the form of the Jemmis mno rules . [ 8 ] [ 9 ]
Different rules (4 n , 5 n , or 6 n ) are invoked depending on the number of electrons per vertex.
The 4 n rules are reasonably accurate in predicting the structures of clusters having about 4 electrons per vertex, as is the case for many boranes and carboranes . For such clusters, the structures are based on deltahedra , which are polyhedra in which every face is triangular. The 4 n clusters are classified as closo- , nido- , arachno- or hypho- , based on whether they represent a complete ( closo- ) deltahedron , or a deltahedron that is missing one ( nido- ), two ( arachno- ) or three ( hypho- ) vertices.
However, hypho clusters are relatively uncommon due to the fact that the electron count is high enough to start to fill antibonding orbitals and destabilize the 4 n structure. If the electron count is close to 5 electrons per vertex, the structure often changes to one governed by the 5n rules, which are based on 3-connected polyhedra.
As the electron count increases further, the structures of clusters with 5n electron counts become unstable, so the 6 n rules can be implemented. The 6 n clusters have structures that are based on rings.
A molecular orbital treatment can be used to rationalize the bonding of cluster compounds of the 4 n , 5 n , and 6 n types.
The following polyhedra are closo polyhedra, and are the basis for the 4 n rules; each of these have triangular faces. [ 10 ] The number of vertices in the cluster determines what polyhedron the structure is based on.
Using the electron count, the predicted structure can be found. n is the number of vertices in the cluster. The 4 n rules are enumerated in the following table.
When counting electrons for each cluster, the number of valence electrons is enumerated. For each transition metal present, 10 electrons are subtracted from the total electron count. For example, in Rh 6 (CO) 16 the total number of valence electrons is 6 × 9 + 16 × 2 = 86; subtracting 6 × 10 = 60 for the six rhodium atoms leaves 26. Therefore, the cluster is a closo polyhedron, because with n = 6 , 4 n + 2 = 26 .
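As an illustration of this counting procedure, the following short Python sketch (written for this article; it is not part of any published PSEPT software and the helper name is an invention) classifies a 4 n -series cluster from its total valence electron count, using the standard relations closo = 4 n + 2, nido = 4 n + 4, arachno = 4 n + 6 and hypho = 4 n + 8 and subtracting 10 electrons for each transition metal as described above.

```python
def classify_4n(total_valence_electrons, n_vertices, n_transition_metals=0):
    """Classify a 4n-type cluster as closo/nido/arachno/hypho (PSEPT)."""
    # Each transition metal retains 10 electrons that do not count
    # towards cluster bonding, as in the Rh6(CO)16 example above.
    cluster_electrons = total_valence_electrons - 10 * n_transition_metals
    labels = {2: "closo", 4: "nido", 6: "arachno", 8: "hypho"}
    excess = cluster_electrons - 4 * n_vertices
    return labels.get(excess, "outside the 4n series")

# Rh6(CO)16: 6 Rh (9 e- each) + 16 CO (2 e- each) = 86 valence electrons, 6 vertices.
print(classify_4n(6 * 9 + 16 * 2, n_vertices=6, n_transition_metals=6))  # -> closo
```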
Other rules may be considered when predicting the structure of clusters:
In general, closo structures with n vertices are n -vertex polyhedra.
To predict the structure of a nido cluster, the closo cluster with n + 1 vertices is used as a starting point; if the cluster is composed of small atoms a high connectivity vertex is removed, while if the cluster is composed of large atoms a low connectivity vertex is removed.
To predict the structure of an arachno cluster, the closo polyhedron with n + 2 vertices is used as the starting point, and the n + 1 vertex nido complex is generated by following the rule above; a second vertex adjacent to the first is removed if the cluster is composed of mostly small atoms, a second vertex not adjacent to the first is removed if the cluster is composed mostly of large atoms.
Example: Pb 10 2−
Example: S 4 2+
Example: Os 6 (CO) 18
Example: B 5 H 5 4− [ 11 ]
The rules are also useful in predicting the structure of carboranes .
Example: C 2 B 7 H 13
The bookkeeping for deltahedral clusters is sometimes carried out by counting skeletal electrons instead of the total number of electrons. The skeletal orbital (electron pair) and skeletal electron counts for the four types of deltahedral clusters are n + 1 pairs (2 n + 2 electrons) for closo , n + 2 pairs (2 n + 4 electrons) for nido , n + 3 pairs (2 n + 6 electrons) for arachno , and n + 4 pairs (2 n + 8 electrons) for hypho clusters.
The skeletal electron count is determined by summing the contributions of each vertex unit (for example, 2 electrons from each BH unit and 3 from each CH unit), 1 electron from each additional hydrogen atom, and the electrons corresponding to the overall anionic charge.
As discussed previously, the 4 n rule mainly deals with clusters with electron counts of 4 n + k , in which approximately 4 electrons are on each vertex. As more electrons are added per vertex, the number of the electrons per vertex approaches 5. Rather than adopting structures based on deltahedra, the 5n-type clusters have structures based on a different series of polyhedra known as the 3-connected polyhedra , in which each vertex is connected to 3 other vertices. The 3-connected polyhedra are the duals of the deltahedra. The common types of 3-connected polyhedra are listed below.
The 5 n rules are as follows.
Example: P 4
Example: P 4 S 3
Example: P 4 O 6
As more electrons are added to a 5 n cluster, the number of electrons per vertex approaches 6. Instead of adopting structures based on 4 n or 5 n rules, the clusters tend to have structures governed by the 6 n rules, which are based on rings. The rules for the 6 n structures are as follows.
Example: S 8
Hexane (C 6 H 14 )
Provided a vertex unit is isolobal with BH, it can, in principle at least, be substituted for a BH unit, even though BH and CH are not isoelectronic. The CH + unit is isolobal, hence the rules are applicable to carboranes. This can be explained by a frontier orbital treatment. [ 10 ] Additionally, there are isolobal transition-metal units. For example, Fe(CO) 3 provides 2 electrons. The derivation of this is briefly as follows: iron contributes 8 valence electrons and the three CO ligands donate 6 more, giving 14 in total; of these, 12 occupy metal–ligand bonding and non-skeletal orbitals, leaving 2 electrons for skeletal bonding.
Transition metal clusters use the d orbitals for bonding . Thus, they have up to nine bonding orbitals, instead of only the four present in boron and main group clusters. [ 12 ] [ 13 ] PSEPT also applies to metallaboranes .
Owing to their large radii, transition metals generally form clusters that are larger than those of main group elements. One consequence of their increased size is that these clusters often contain atoms at their centers. A prominent example is [Fe 6 C(CO) 16 ] 2- . In such cases, the rules of electron counting assume that the interstitial atom contributes all of its valence electrons to cluster bonding. In this way, [Fe 6 C(CO) 16 ] 2- is equivalent to [Fe 6 (CO) 16 ] 6- or [Fe 6 (CO) 18 ] 2- . [ 14 ] | https://en.wikipedia.org/wiki/Polyhedral_skeletal_electron_pair_theory |
The polyhedral symbol is sometimes used in coordination chemistry to indicate the approximate geometry of the coordinating atoms around the central atom. One or more italicised letters indicate the geometry, e.g. TP -3; the letters are followed by a number that gives the coordination number of the central atom. [ 1 ] The polyhedral symbol can be used in the naming of compounds, in which case it is followed by the configuration index . [ 1 ]
The first step in determining the configuration index is to assign a priority number to each coordinating ligand according to the Cahn-Ingold-Prelog priority rules (CIP rules). The preferred ligand takes the lowest priority number. For example, for the ligands acetonitrile , chloride ion, and pyridine , the priority numbers assigned are chloride, 1; acetonitrile, 2; pyridine, 3. Each coordination type has a different procedure for specifying the configuration index, and these are outlined below.
The configuration index is a single digit which is defined as the priority number of the ligand on the stem of the "T".
The configuration index has two digits which are the priority numbers of the ligands separated by the largest angle. The lowest priority number of the pair is quoted first.
The configuration index is a single digit which is the priority number of the ligand trans to the highest priority ligand. (If there are two possibilities the principle of trans difference is applied.) As an example, consider the (acetonitrile)dichlorido(pyridine)platinum(II) complex, where the Cl ligands may be trans or cis to one another. Applying the CIP rules, the ligand priority numbers are chloride, 1; acetonitrile, 2; pyridine, 3.
In the trans case the configuration index is 1, giving the name ( SP -4-1)-(acetonitrile)dichlorido(pyridine)platinum(II). In the cis case both of the organic ligands are trans to a chloride, so the principle of trans difference is considered; the greater difference is obtained with the priority-3 ligand, therefore the name is ( SP -4-3)-(acetonitrile)dichlorido(pyridine)platinum(II).
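To make the square-planar procedure concrete, a minimal Python sketch is given below; it reproduces only the worked example above, and the line that takes the larger priority when two choices exist is a simplified stand-in for the trans-maximum-difference principle, introduced here purely for illustration.

```python
def sp4_configuration_index(trans_pairs):
    """Configuration index of a square-planar (SP-4) complex.

    trans_pairs: the two pairs of CIP priority numbers of mutually trans
    ligands, e.g. [(1, 1), (2, 3)] for the trans isomer described above.
    """
    candidates = []
    for a, b in trans_pairs:
        # Record whatever sits trans to a priority-1 (most preferred) ligand.
        if a == 1:
            candidates.append(b)
        if b == 1:
            candidates.append(a)
    # If two choices exist, follow the worked example and take the larger
    # priority (a simplification of the trans-maximum-difference rule).
    return max(candidates)

# Priorities: chloride = 1, acetonitrile = 2, pyridine = 3.
print(sp4_configuration_index([(1, 1), (2, 3)]))  # trans isomer -> 1 (SP-4-1)
print(sp4_configuration_index([(1, 2), (1, 3)]))  # cis isomer   -> 3 (SP-4-3)
```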
The configuration index has two digits. The first digit is the priority number of the ligand trans to the highest priority ligand. This pair is then used to define the reference axis of the octahedron. The second digit is the priority number of the ligand trans to the highest priority ligand in the plane perpendicular to the reference axis.
The configuration index is a single digit which is the priority number of the ligand trans to the ligand of lowest priority in the plane perpendicular to the 4 fold axis. (If there is more than one choice then the highest numerical value second digit is taken.) N.B. this procedure gives the same result as SP -4; however, in this case the polyhedral symbol specifies that the complex is non-planar.
There are two digits. The first digit is the priority number of the ligand on the fourfold (C 4 ) axis of the idealised pyramid; the second digit is the priority number of the ligand trans to the ligand of lowest priority in the plane perpendicular to the fourfold axis. (If there is more than one choice then the highest numerical value second digit is taken.)
The configuration index consists of two digits which are the priority numbers of the ligands on the threefold rotation axis. The lowest numerical value is cited first.
The configuration index consists of two segments separated by a hyphen. The first segment consists of two digits which are the priority numbers of the ligands on the five, six or sevenfold rotation axis. The lowest numerical value is cited first.
The second segment consists of 5, 6 or 7 digits respectively. The lowest priority number is the first digit followed by the digits of the other atoms in the plane. The clockwise and anticlockwise sequences are compared and the one that yields the lowest numerical sequence is chosen. | https://en.wikipedia.org/wiki/Polyhedral_symbol |
Polyhedron is a peer-reviewed scientific journal covering the field of inorganic chemistry . It was established in 1955 as the Journal of Inorganic and Nuclear Chemistry and is published by Elsevier .
Polyhedron is abstracted and indexed in:
According to the Journal Citation Reports , the journal has a 2020 impact factor of 3.052. [ 1 ] | https://en.wikipedia.org/wiki/Polyhedron_(journal) |
A polyhedron model is a physical construction of a polyhedron , constructed from cardboard, plastic board, wood board or other panel material, or, less commonly, solid material.
Since there are 75 uniform polyhedra , including the five regular convex polyhedra , five polyhedral compounds , four Kepler-Poinsot polyhedra , and thirteen Archimedean solids , constructing or collecting polyhedron models has become a common mathematical recreation. Polyhedron models are found in mathematics classrooms much as globes in geography classrooms.
Polyhedron models are notable as three-dimensional proofs of concept of geometric theories. Some polyhedra also make great centerpieces, tree toppers , holiday decorations, or symbols. The Merkaba religious symbol, for example, is a stellated octahedron . Constructing large models offers challenges in engineering structural design .
Construction begins by choosing a size of the model, either the length of its edges or the height of the model. The size will dictate the material , the adhesive for edges, the construction time and the method of construction .
The second decision involves colours. A single-colour cardboard model is easiest to construct — and some models can be made by folding a pattern, called a net , from a single sheet of cardboard. Choosing colours requires geometric understanding of the polyhedron. One way is to colour each face differently. A second way is to colour all square faces the same, all pentagonal faces the same, and so forth. A third way is to colour opposite faces the same. Many polyhedra are also coloured such that no same-coloured faces touch each other along an edge or at a vertex.
An alternative way for polyhedral compound models is to use a different colour for each polyhedron component.
Net templates are then made. One way is to copy templates from a polyhedron-making book, such as Magnus Wenninger 's Polyhedron Models , 1974 ( ISBN 0-521-09859-9 ). A second way is drawing faces on paper or with computer-aided design software and then drawing on them the polyhedron's edges . The exposed nets of the faces are then traced or printed on template material. A third way is using the software named Stella to print nets.
A model, particularly a large one, may require another polyhedron as its inner structure or as a construction mold. A suitable inner structure prevents the model from collapsing from age or stress.
The net templates are then replicated onto the material, matching carefully the chosen colours. Cardboard nets are usually cut with tabs on each edge, so the next step for cardboard nets is to score each fold with a knife. Panelboard nets, on the other hand, require molds and cement adhesives.
Assembling multi-colour models is easier with a model of a simpler related polyhedron used as a colour guide. Complex models, such as stellations , can have hundreds of polygons in their nets.
Modern computer graphics technologies allow people to rotate 3D polyhedron models on a computer screen in all three dimensions, and can even provide shadows and textures for a more realistic effect. | https://en.wikipedia.org/wiki/Polyhedron_model |
Polyhexanide ( polyhexamethylene biguanide , PHMB ) is a polymer used as a disinfectant and antiseptic . In dermatological use, [ 4 ] it is spelled polihexanide ( INN ) and sold under various brand names. [ 5 ] PHMB has been shown to be effective against Pseudomonas aeruginosa , Staphylococcus aureus , Escherichia coli , Candida albicans , Aspergillus brasiliensis , enterococci , and Klebsiella pneumoniae . [ 6 ] Polihexanide, sold under the brand name Akantior, is a medication used for the treatment of Acanthamoeba keratitis .
Products containing PHMB are used for inter-operative irrigation, pre- and post-surgery skin and mucous membrane disinfection, post-operative dressings, surgical and non-surgical wound dressings, surgical bath/ hydrotherapy , chronic wounds like diabetic foot ulcer and burn wound management, routine antisepsis during minor incisions, catheterization , first aid, surface disinfection, and linen disinfection. [ 7 ] [ 8 ] PHMB eye drops have been used as a treatment for eyes affected by Acanthamoeba keratitis . [ 9 ]
It is sold as a swimming pool and spa disinfectant in place of chlorine or bromine based products under the name Baquacil.
PHMB is also used as an ingredient in some contact lens cleaning products, cosmetics, personal deodorants and some veterinary products. It is also used to treat clothing (Purista), purportedly to prevent the development of unpleasant odors.
The PHMB hydrochloride salt (solution) is used in the majority of formulations.
Polihexanide is indicated for the treatment of Acanthamoeba keratitis in people aged 12 years of age and older. [ 1 ] [ 2 ]
In May 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Akantior, intended for the treatment of Acanthamoeba keratitis , a severe, progressive and sight threatening corneal infection characterized by intense pain and photophobia. [ 1 ] [ 10 ] Acanthamoeba keratitis is a rare disease primarily affecting contact lens wearers. [ 1 ] The applicant for this medicinal product is SIFI SPA. [ 1 ] Polihexanide was approved for medical use in the European Union in August 2024. [ 1 ] [ 2 ]
In 2011, polyhexamethylene biguanide was classified as category 2 carcinogen by the European Chemical Agency , but it is still allowed in cosmetics in small quantities if exposure by inhalation is impossible. [ 11 ]
In some sources, particularly when listed as a cosmetics ingredient ( INCI ), the polymer is wrongly named as polyaminopropyl biguanide. [ 12 ] [ 13 ] | https://en.wikipedia.org/wiki/Polyhexanide |
Polyimide foam is a foam originally designed for NASA by Inspec Foams Inc. under the brand name Solimide. [ 1 ] Its primary purposes are as an insulator (such as for rocket fuels ) and acoustic damper. NASA engineered the product to have relatively low outgassing (a problem in vacuum and aboard spacecraft), desirable thermal and acoustic performance, as well as uniformity during distribution and application. Typical uses of the foam include ducting, duct/piping insulation, structural components, and strengthening of hollow components while remaining lightweight. In addition to thermal and acoustic properties, polyimide foam is fire resistant , lightweight and non-toxic.
| https://en.wikipedia.org/wiki/Polyimide_foam |
Polyimines are classified as polymer materials that contain imine groups, which are characterised by a double bond between a carbon and a nitrogen atom. [ 1 ] The term polyimine is also occasionally encountered in the context of covalent organic frameworks (COFs). In (older) literature, polyimines are sometimes also referred to as poly(azomethine) or polyschiff.
Polyimines can be synthesised via a condensation reaction between aldehydes and (primary) amines . [ 2 ] During this reaction, water is also formed as byproduct. Often, the synthesis can be performed at room temperature, but to fully cure the materials and remove remaining water, they can be dried at slightly elevated temperatures and/or in vacuum.
One of the applications of polyimines is in covalent adaptable networks (CANs). These are polymer materials that are crosslinked via dynamic covalent bonds. Besides polyimines, other types of dynamic covalent chemistry can also be used. [ 3 ] Polyimine CANs are largely investigated to create recyclable and self-healing thermoset materials, [ 4 ] but they can also find use in composite materials with higher performance. [ 5 ]
Flame retardants
Because of the free radical scavenging properties of imines, [ 6 ] they are well suited for use in flame retardant materials. In addition, polyimine materials incorporating phosphorus species have also been investigated. These materials represent more sustainable and less harmful alternatives to previously used halogenated polymers.
Sensory devices
The dynamic characteristics of polyimines enable them to be used in sensory devices. An example of this is the sensing of amine compounds. Polyimine materials have been constructed that enable penetration of (small) monoamine molecules. [ 7 ] These amines can perform bond exchange reactions with the polyimine network and, as a result, reduce the crosslinking density. Consequently, the materials soften or even liquify. The change in material properties provides a "read-out" of the presence of amines.
Electronic skin
Polyimines have been investigated for their use in the production of electronic skins (e-skin). [ 8 ] For this, polyimine networks were doped with conductive silver nanoparticles . The malleability of the polyimine network enables the e-skin to conform to complex or uneven surfaces without introducing excessive interfacial stresses.
Various studies have been conducted to synthesise bio-based polyimines due to the great natural abundance of aldehydes and amines. [ 9 ] Popular sources for aldehydes include vanillin , which can be obtained from lignin , or 2,5-furandicarboxaldehyde (FDC), which can be derived from fructose . [ 10 ]
Apart from polyimine polymers that are formed directly via the condensation reaction from aldehydes and amines, it is also possible to incorporate imines in other existing polymer materials. Imines have, for example, been incorporated into recyclable epoxy-based thermosets [ 11 ] and polyesters. [ 12 ]
Polyimines are commonly abbreviated as PI . However, the same abbreviation is typically used for polyimide , which has an almost identical name but is a significantly different type of polymer material.
Sometimes the term polyimine is used to describe a material called polyethyleneimine . This material exists in different forms ( i.e. , linear or branched), but it does not in fact contain actual imine (C=N) bonds. | https://en.wikipedia.org/wiki/Polyimine |
Polyisobuteneamine ( PIBA ) is a polymer derived from the reaction of polyisobutylene (PIB) with ammonia or primary amines . This polymeric compound is known for its excellent adhesive and dispersant properties and is commonly used as an additive in lubricants , fuel, and other industrial applications.
The history of polyisobuteneamine dates back to the early development and study of polyisobutylene. The first synthesis of polyisobutylene was reported in 1931 by the German chemists Hermann Staudinger and Leonidas Zechmeister , who obtained the polymer through the cationic polymerization of isobutylene . [ 1 ] The discovery of polyisobuteneamine followed as researchers began to explore the potential applications of polyisobutylene and its derivatives.
Polyisobuteneamine is synthesized through the reaction of polyisobutylene with ammonia or primary amines in the presence of a catalyst. The reaction takes place at elevated temperatures and pressures. The molecular weight of the resulting polymer can be controlled by adjusting the reaction conditions and the choice of catalyst.
Reactants: polyisobutylene (PIB), (CH 2 =C(CH 3 ) 2 ) n , with ammonia (NH 3 ) or a primary amine (RNH 2 ).
Product: polyisobuteneamine (PIBA), [-(CH 2 -C(CH 3 ) 2 )N(H)-] m .
In the chemical formulas above, n represents the degree of polymerization of PIB, R represents a hydrogen atom (in the case of ammonia) or an alkyl group (in the case of primary amines), and m is the degree of substitution of the amine group on the polyisobutylene backbone.
Polyisobuteneamine is a viscous liquid with a yellow to amber color. It has excellent adhesion and dispersant properties, which are attributed to its polar amine groups and nonpolar polyisobutylene backbone. [ 2 ] The unique combination of polar and nonpolar groups allows PIBA to interact with a wide range of materials, making it a versatile additive. [ 3 ]
Polyisobuteneamine is commonly used as an additive in lubricants, fuel, and other industrial applications. Its adhesive and dispersant properties make it particularly useful in enhancing the performance of engine oils , gear oils , and hydraulic fluids . PIBA is also used in fuel additives to improve the combustion process and reduce deposits in the engine. Other applications include the use of PIBA as a corrosion inhibitor , an emulsifier , and a demulsifier in various industrial processes. | https://en.wikipedia.org/wiki/Polyisobuteneamine |
In statistics , a polykay , or generalised k-statistic , (denoted k r , s {\displaystyle k_{r,s}} ) is a statistic defined as a linear combination of sample moments . [ 1 ]
The word polykay was coined by American mathematician John Tukey in 1956, from poly , "many" or "much", and kay , the phonetic spelling of the letter "k", as in k-statistic . [ 2 ]
| https://en.wikipedia.org/wiki/Polykay |
In mathematics , a polylogarithmic function in n is a polynomial in the logarithm of n , [ 1 ] such as a k ( log n ) k + ⋯ + a 1 ( log n ) + a 0 {\displaystyle a_{k}(\log n)^{k}+\cdots +a_{1}(\log n)+a_{0}} .
The notation log k n is often used as a shorthand for (log n ) k , analogous to sin 2 θ for (sin θ ) 2 .
In computer science , polylogarithmic functions occur as the order of time for some data structure operations. Additionally, the exponential function of a polylogarithmic function produces a function with quasi-polynomial growth , and algorithms with this as their time complexity are said to take quasi-polynomial time . [ 2 ]
All polylogarithmic functions of n are o( n ε ) for every exponent ε > 0 (for the meaning of this symbol, see small o notation ), that is, a polylogarithmic function grows more slowly than any positive exponent. This observation is the basis for the soft O notation Õ( n ) . [ 3 ]
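The statement that every polylogarithmic function is o( n ε ) can be checked numerically; the short Python snippet below is an illustrative calculation written for this article and evaluates (log n ) k / n ε for fixed k = 3 and ε = 0.5, showing the ratio shrinking as n grows.

```python
import math

def ratio(n, k=3, eps=0.5):
    """(log n)**k divided by n**eps; for fixed k and eps > 0 this tends to 0 as n grows."""
    return math.log(n) ** k / n ** eps

for exponent in (3, 6, 9, 12, 15):
    n = 10 ** exponent
    print(f"n = 1e{exponent}: (log n)^3 / n^0.5 = {ratio(n):.4g}")
```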
| https://en.wikipedia.org/wiki/Polylogarithmic_function |
Polylogism is the belief that different groups of people reason in fundamentally different ways (coined from Greek poly 'many' + logos ' logic '). [ 1 ] The term is attributed to Ludwig von Mises , [ 2 ] who used it to refer to Nazism , Marxism and other class based social philosophies , [ 3 ] before the writings of Thomas Kuhn and others made relativism a mainstream doctrine. [ 4 ] In the Misesian sense of the term, a polylogist ascribes different forms of "logic" to different groups, which may include groups based on race , [ 1 ] [ 5 ] gender , class , or time period . It does not refer strictly to Boolean logic.
A polylogist asserts that different groups reason in fundamentally distinct ways, employing unique "logics" for deductive reasoning . Normative polylogism posits that these varying logics are equally valid, suggesting that no single logical system holds supremacy over others. In contrast, descriptive polylogism is an empirical claim that acknowledges the existence of different reasoning methods among groups but does not necessarily grant equal validity to these methods. [ 6 ] A descriptive polylogist may recognize a universally valid form of deductive logic while empirically noting that some groups use alternative (and potentially incorrect) reasoning strategies.
In the Misesian context, an adherent of polylogism would be considered a normative polylogist. Such a person might evaluate an argument as valid within a specific logical framework, even if it contradicts the logic used by the analyst. As Ludwig von Mises stated, "this never has been and never can be attempted by anybody," highlighting the inherent challenges in reconciling different logical systems.
The term 'proletarian logic' is sometimes taken as evidence of polylogism. [ citation needed ] This term is usually traced back to Joseph Dietzgen in his 11th letter on logic. [ 7 ] [ 8 ] Dietzgen is the now obscure philosophical monist of the 19th century who coined the term 'dialectical materialism' and was praised by communist figures such as Karl Marx and V. I. Lenin . [ 9 ] His work has received modern attention primarily from the philosopher Bertell Ollman . As a monist, Dietzgen insists on a unified treatment of mind and matter. As Simon Boxley puts it, for Dietzgen "thought is as material an event as any other". This means that logic too has "material" underpinnings. [ further explanation needed ] (But note that Dietzgen's "materialism" was explicitly not a physicalism .)
Racialist polylogism is often associated with the Nazi period , [ 10 ] where Nazi leaders in both politics and the scientific community made concerted efforts to distinguish between what they considered " German physics " and " Jewish physics. " [ 11 ] For example, Nobel Prize -winning physicist Philipp Lenard asserted that scientific thought was influenced by "blood and race," accusing other scientists like Werner Heisenberg of teaching "Jewish physics." This racialist perspective sought to delegitimize the work of Jewish scientists, such as Albert Einstein , whose theory of relativity was disparaged as a product of inferior racial heritage.
"Relativity theory was a particular target both for its alleged repudiation of a “classical,” “German,” and “Aryan” physics, which was held to be rooted in experiment and common sense, and for its alleged encouragement of a more general relativism in morality, culture, and politics." [ 12 ]
In contemporary discourse, similar accusations of racialist polylogism have surfaced in various contexts. For instance, U.S. Supreme Court Justice Sonia Sotomayor has been accused of espousing a form of racialist polylogism when she suggested that a "wise Latina" might reach different legal conclusions than a white male. While this comment is generally interpreted to mean that diverse life experiences can enrich one's understanding of legal issues, some commentators have argued that it implies Latinas have a distinct "logic." [ 13 ] [ 14 ]
Karl Marx argued that individuals born into different social classes undergo irreversible changes in their perception and understanding of reality. He posited that a person's class position fundamentally shapes their worldview and consciousness. For instance, someone raised as an aristocrat or factory owner perceives the world through the interests and perspectives inherent to their class. In contrast, a laborer develops a perspective shaped by their experiences and struggles within the working class . Marx believed this divergence in class perspectives leads to a lack of mutual understanding or ' class consciousness .' Consequently, individuals from different classes are often unable to fully grasp each other's experiences and viewpoints, resulting in distinct 'logics' that align with their respective class interests . In Mathematical Manuscript Marx attempted to reconstruct the foundations of calculus without relying on traditional methods, demonstrating his belief that different historical and social conditions could lead to different approaches in even the most abstract fields of thought. This suggests that this class-based differentiation extends even to areas like mathematics and logic , where different classes might reach different conclusions based on their material conditions and class interests. [ 15 ]
Marx's dialectical method, which he used in his critique of political economy , highlights a difference between formal logic, which he associated with bourgeois thought, and dialectical logic , which he saw as more aligned with a revolutionary understanding of societal change . Dialectical logic involves understanding contradictions within social systems, a concept that he argued was often neglected or misunderstood by conventional, formal logic. [ 15 ]
While Marx did not directly claim that different classes would produce different logical systems , his writings suggest that he believed social and historical conditions significantly influence intellectual frameworks, including in areas like mathematics and logic. This nuanced perspective aligns with his broader critique of how ideology and material conditions shape human thought.
Some proponents of polylogism argue that different groups may indeed develop distinct scientific theories and frameworks, drawing on the work of Thomas Kuhn in " The Structure of Scientific Revolutions ." Kuhn introduced the concept of paradigm shifts , suggesting that scientific progress is not a linear accumulation of knowledge but rather occurs through revolutionary changes in paradigms. According to this view, a paradigm encompasses the accepted theories, methods, and standards within a scientific community, and when a paradigm shift occurs, the new framework is often incommensurable with the old one—meaning that the two paradigms cannot be directly compared or reconciled. [ 16 ]
In this context, proponents of polylogism argue that different cultural, social, or ideological groups may operate under entirely distinct paradigms, leading to divergent scientific theories and understandings. The incommensurability of these paradigms implies that what one group considers scientific truth may not be seen as such by another, as each group’s theories are deeply embedded in their specific conceptual frameworks and assumptions. Therefore, they suggest that scientific theories can indeed be different for different groups, not merely as a matter of interpretation but as fundamentally distinct ways of understanding the world.
The two ideas are not mutually exclusive, however, as Kuhn's concept of the incommensurability of different paradigms differs from the Misesian notion of polylogism. Kuhn's idea suggests that scientists working within different paradigms are often unable to fully understand or evaluate each other's work due to differing foundational assumptions. In contrast, Mises' attack on polylogism refers to the belief that different groups, such as races or classes, think differently. The Nazis did not reject Einstein's work because they had a fundamentally different scientific framework; rather, they dismissed his conclusions because they believed that, as a Jew, he was inherently incapable of sound reasoning. [ 11 ] This was not a matter of different scientific paradigms but of a prejudiced ideology that disregarded the validity of his work based on racial grounds.
To use Kuhn’s terminology, one could frame the Misesian concept of polylogism as the belief that members of different races or classes are inherently unable to contribute effectively to solving puzzles within the framework of ‘normal science,’ due to presumed deficiencies tied to their identity. Polylogists argue that these groups operate under fundamentally different cognitive frameworks, which preclude them from engaging in the same scientific paradigm as others. Alternatively, one might argue that a certain group's or class's inalienable traits leave it stuck in paradigms long since surpassed by groups considered superior. | https://en.wikipedia.org/wiki/Polylogism |
Polylysine refers to several types of lysine homopolymers , which may differ from each other in terms of stereochemistry (D/L; the L form is natural and usually assumed) and link position (α/ε). Of these types, only ε-poly-L-lysine is produced naturally.
The precursor amino acid lysine contains two amino groups , one at the α-carbon and one at the ε-carbon. Either can be the location of polymerization , resulting in α-polylysine or ε-polylysine. Polylysine is a homopolypeptide belonging to the group of cationic polymers : at pH 7, polylysine contains a positively charged hydrophilic amino group.
α-Polylysine is a synthetic polymer, which can be composed of either L -lysine or D -lysine. "L" and "D" refer to the chirality at lysine's central carbon. This results in poly- L -lysine (PLL) and poly- D -lysine (PDL) respectively. [ 1 ]
ε-Polylysine (ε-poly- L -lysine, EPL), which is produced by bacterial fermentation, is typically a homopolypeptide of approximately 25–30 L -lysine residues. [ 2 ] According to research, ε-polylysine is adsorbed electrostatically onto the cell surface of bacteria, followed by stripping of the outer membrane ; this eventually leads to an abnormal distribution of the cytoplasm, causing damage to the bacterial cell. [ 3 ] ε-Poly- L -lysine is used as a natural preservative in food products.
Production of polylysine by natural fermentation is only observed in strains of bacteria in the genus Streptomyces . Streptomyces albulus is most often used in scientific studies and is also used for the commercial production of ε-polylysine.
α-Polylysine is synthetically produced by a basic polycondensation reaction. [ 5 ]
The production of ε-polylysine by natural fermentation was first described by researchers Shoji Shima and Heiichi Sakai in 1977. [ 2 ] Since the late 1980s, ε-polylysine has been approved by the Japanese Ministry of Health, Labour and Welfare as a preservative in food. In January 2004, ε-polylysine became generally recognized as safe (GRAS) certified in the United States. [ 6 ]
ε-Polylysine is used commercially as a food preservative in Japan, Korea and in imported items sold in the United States. Food products containing polylysine are mainly found in Japan. The use of polylysine is common in food applications such as boiled rice, cooked vegetables, soups, noodles and sliced fish ( sushi ). [ 7 ]
Literature studies have reported an antimicrobial effect of ε-polylysine against yeast , fungi , Gram-positive bacteria and Gram-negative bacteria . [ 8 ]
Polylysine has a light yellow appearance and is slightly bitter in taste whether in powder or liquid form.
α-Polylysine is commonly used to coat tissue cultureware as an attachment factor which improves cell adherence. This phenomenon is based on the interaction between the positively charged polymer and negatively charged cells or proteins. While the poly- L -lysine (PLL) precursor amino acid occurs naturally, the poly- D -lysine (PDL) precursor is an artificial product. The latter is therefore thought to be resistant to enzymatic degradation and so may prolong cell adherence. [ 9 ]
Polylysine exhibits high positive charge density which allows it to form soluble complexes with negatively charged macromolecules . [ 10 ] Polylysine homopolymers or block copolymers have been widely used for delivery of DNA [ 11 ] and proteins. [ 12 ] Polylysine-based nanoparticles have also been shown to passively accumulate in the injured sites of blood vessels after stroke due to incorporation into newly formed thrombus , [ 13 ] which offers a new way to deliver therapeutic agents specifically to the sites of injury after vascular damage.
In 2010, hydrophobically modified ε-polylysine was synthesized by reacting EPL with octenyl succinic anhydride (OSA). [ 14 ] It was found that OSA-g-EPLs had glass transition temperatures lower than EPL. They were able to form polymer micelles in water and to lower the surface tension of water, confirming their amphiphilic properties. The antimicrobial activities of OSA-g-EPLs were also examined, and the minimum inhibitory concentrations of OSA-g-EPLs against Escherichia coli O157:H7 remained the same as that of EPL. Therefore, modified EPLs have the potential of becoming bifunctional molecules, which can be used either as surfactants or emulsifiers in the encapsulation of water-insoluble drugs or as antimicrobial agents. | https://en.wikipedia.org/wiki/Polylysine |
polymake is software for the algorithmic treatment of convex polyhedra . [ 1 ]
Albeit primarily a tool to study the combinatorics and the geometry of convex polytopes and polyhedra , [ 2 ] it is by now also capable of dealing with simplicial complexes , matroids , polyhedral fans, graphs , tropical objects, toric varieties and other objects. In particular, its capability to compute the convex hull and lattice points of a polytope proved itself to be quite useful for different kinds of research. [ 3 ]
polymake has been cited in over 300 recent articles indexed by Zentralblatt MATH as can be seen from its entry in the swMATH database. [ 4 ]
polymake has a few particularities that make it distinctive to work with.
Firstly, polymake can be used within a Perl script. Moreover, users can extend polymake and define new objects, properties, rules for computing properties, and algorithms. [ 5 ]
Secondly, it exhibits an internal client-server scheme to accommodate the usage of Perl for object management and interfaces as well as C++ for mathematical algorithms. [ 6 ] The server holds information about each object (e.g., a polytope), and the client sends requests to compute properties. The server has the job of determining how to complete each request from information already known about each object using a rule-based system. For example, there are many rules on how to compute the facets of a polytope. Facets can be computed from a vertex description of the polytope, and from a (possibly redundant) inequality description. polymake builds a dependency graph outlining the steps to process each request and selects the best path via a Dijkstra-type algorithm. [ 6 ]
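The rule-based scheduling can be pictured with the following much-simplified Python sketch; the property names, rule costs and the greedy selection loop are all assumptions made for this illustration and do not reflect polymake's actual rule base or its Dijkstra-type scheduler.

```python
# Each rule derives one target property from prerequisite properties at some
# estimated cost (e.g. the expected running time of a convex-hull code).
RULES = [
    ({"POINTS"}, "FACETS", 5.0),        # hypothetical convex-hull rule
    ({"INEQUALITIES"}, "FACETS", 2.0),  # hypothetical redundancy-elimination rule
    ({"FACETS"}, "F_VECTOR", 1.0),
    ({"POINTS"}, "VERTICES", 3.0),
]

def derive(known, target):
    """Greedily apply the cheapest applicable rule until `target` is known.

    A toy stand-in for the dependency-graph search described above,
    not polymake's algorithm.
    """
    known = set(known)
    applied = []
    while target not in known:
        usable = [rule for rule in RULES
                  if rule[0] <= known and rule[1] not in known]
        if not usable:
            raise ValueError(f"no rule chain derives {target} from {sorted(known)}")
        needs, out, cost = min(usable, key=lambda rule: rule[2])
        known.add(out)
        applied.append(out)
    return applied

print(derive({"POINTS"}, "F_VECTOR"))  # e.g. ['VERTICES', 'FACETS', 'F_VECTOR']
```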
polymake divides its collection of functions and objects into 10 different groups called applications. They behave like C++ namespaces. The polytope application was the first one developed and it is the largest. [ 7 ]
polymake version 1.0 first appeared in the proceedings of the DMV-Seminar "Polytopes and Optimization" held in Oberwolfach, November 1997. [ 2 ] Version 1.0 only contained the polytope application, but the system of "applications" was not yet developed. Version 2.0 was released in July 2003, [ 17 ] and version 3.0 was released in 2016. [ 18 ] The most recent major revision, version 4.0, was released in January 2020. [ 19 ]
polymake is built in a highly modular way and therefore displays great interaction with third-party software packages for specialized computations, thereby providing a common interface and a bridge between different tools. A user can easily (and unknowingly) switch between using different software packages in the process of computing properties of a polytope. [ 20 ]
Below is a list of third-party software packages that polymake can interface with as of version 4.0. Users are also able to write new rule files for interfacing with any software package. Note that there is some redundancy in this list (e.g., a few different packages can be used for finding the convex hull of a polytope). Because polymake uses rule files and a dependency graph for computing properties, [ 5 ] most of these software packages are optional. However, some become necessary for specialized computations. | https://en.wikipedia.org/wiki/Polymake |
Polymelia is a birth defect in which an affected individual has more than the usual number of limbs. It is a type of dysmelia . In humans and most land-dwelling vertebrates, this means having five or more limbs. The extra limb is most commonly shrunken and/or deformed. The term is from Greek πολυ- "many", μέλεα "limbs".
Sometimes an embryo started as conjoined twins , but one twin degenerated completely except for one or more limbs, which end up attached to the other twin.
Sometimes small extra legs between the normal legs are caused by the body axis forking in the dipygus condition.
Notomelia (from Greek for "back-limb-condition") is polymelia where the extra limb is rooted along or near the midline of the back. [ citation needed ] Notomelia has been reported in Angus cattle often enough to be of concern to farmers. [ 1 ]
Cephalomelia (from Greek for "head-limb-condition") is polymelia where the extra limb is rooted on the head. [ 2 ]
Tetrapod legs evolved in the Devonian or Carboniferous geological period from the pectoral fins and pelvic fins of their crossopterygian fish ancestors. Fish fins develop along a "fin line", which runs from the back of the head along the midline of the back, round the end of the tail, and forwards along the underside of the tail, and at the cloaca splits into left and right fin lines which run forwards to the gills. In the paired ventral part of the fin line, normally only the pectoral and pelvic fins survive (but the Devonian acanthodian fish Mesacanthus developed a third pair of paired fins); but along the non-paired parts of the fin line, other fins develop.
In tetrapods , only the four paired fins normally persisted, and became the four legs. Notomelia and cephalomelia are atavistic reappearances of dorsal fins . Some other cases of polymelia are extra development along the paired part of the fin lines, or along the ventral posterior non-paired part of the fin line.
Many mythological creatures like dragons , winged horses , and griffins have six limbs: four legs and two wings. The speculative biology of such a six-limbed dragon is discussed in Dragons: A Fantasy Made Real . Additionally, angels are often depicted with two arms, two legs, and two wings.
In Greek Mythology :
Sleipnir , Odin's horse in Norse mythology , has eight normal horse legs, and is usually depicted with limbs twinned at the shoulder or hip.
Several Hindu deities are depicted with multiple arms and sometimes also multiple legs. | https://en.wikipedia.org/wiki/Polymelia |
A polymer -based battery uses organic materials instead of bulk metals to form a battery. [ 1 ] Currently accepted metal-based batteries pose many challenges, including limited resources, negative environmental impact, and performance that is approaching its practical limits. Redox active polymers are attractive options for electrodes in batteries due to their synthetic availability, high capacity, flexibility, light weight, low cost, and low toxicity. [ 2 ] Recent studies have explored how to increase efficiency and reduce challenges to push polymeric active materials further towards practicality in batteries. Many types of polymers are being explored, including conductive, non-conductive, and radical polymers. Batteries with a combination of electrodes (one metal electrode and one polymeric electrode) are easier to test and compare to current metal-based batteries; however, batteries with both a polymer cathode and anode are also a current research focus. Polymer-based batteries, including metal/polymer electrode combinations, should be distinguished from metal-polymer batteries, such as a lithium polymer battery , which most often involve a polymeric electrolyte , as opposed to polymeric active materials.
Organic polymers can be processed at relatively low temperatures, lowering costs. They also produce less carbon dioxide. [ 3 ]
Organic batteries are an alternative to the metal reaction battery technologies, and much research is taking place in this area.
An article titled "Plastic-Metal Batteries: New promise for the electric car", [ 4 ] published in 1982, stated: "Two different organic polymers are being investigated for possible use in batteries" and indicated that the demonstration described was based on work begun in 1976.
Waseda University was approached by NEC in 2001, and began to focus on organic batteries. In 2002, an NEC researcher presented a paper on Piperidinoxyl Polymer technology, and by 2005 they presented an organic radical battery (ORB) based on a modified PTMA, poly(2,2,6,6-tetramethylpiperidinyloxy-4-yl methacrylate). [ 5 ]
In 2006, Brown University announced a technology based on polypyrrole . [ 1 ] In 2007, Waseda announced a new ORB technology based on "soluble polymer, polynorborene with pendant nitroxide radical groups."
In 2015 researchers developed an efficient, conductive, electron-transporting polymer. The discovery employed a "conjugated redox polymer" design with a naphthalene - bithiophene polymer that has been used for transistors and solar cells. Doped with lithium ions it offered significant electronic conductivity and remained stable through 3,000 charge/discharge cycles. Polymers that conduct holes have been available for some time. The polymer exhibits the greatest power density for an organic material under practical measurement conditions. A battery could be 80% charged within 6 seconds. Energy density remained lower than inorganic batteries. [ 3 ]
Like metal-based batteries, the reaction in a polymer-based battery is between a positive and a negative electrode with different redox potentials . An electrolyte transports charges between these electrodes. For a substance to be a suitable battery active material, it must be able to participate in a chemically and thermodynamically reversible redox reaction. Unlike metal-based batteries, whose redox process is based on the valence charge of the metals, the redox process of polymer-based batteries is based on a change of state of charge in the organic material. [ 6 ] For a high energy density, the electrodes should have similar specific energies . [ 6 ]
The active organic material can be p-type, n-type, or b-type (bipolar). During charging, p-type materials are oxidized and produce cations, while n-types are reduced and produce anions. B-type organics can be either oxidized or reduced during charging or discharging. [ 6 ]
In a commercially available Li-ion battery, the Li + ions diffuse slowly because of the required intercalation and can generate heat during charge or discharge. Polymer-based batteries, however, have a more efficient charge/discharge process, resulting in improved theoretical rate performance and increased cyclability. [ 3 ]
To charge a polymer-based battery, a current is applied to oxidize the positive electrode and reduce the negative electrode. The electrolyte salt compensates the charges formed. The limiting factors upon charging a polymer-based battery differ from metal-based batteries and include the full oxidation of the cathode organic, full reduction of the anode organic, or consumption of the electrolyte. [ 3 ]
Upon discharge, the electrons go from the anode to cathode externally, while the electrolyte carries the released ions from the polymer. This process, and therefore the rate performance, is limited by the electrolyte ion travel and the electron-transfer rate constant , k 0 , of the reaction.
This electron transfer rate constant provides a benefit of polymer-based batteries, which typically have high values on the order of 10 −1 cm s −1 . The organic polymer electrodes are amorphous and swollen, which allows for a higher rate of ionic diffusion and further contributes to a better rate performance. [ 3 ] Different polymer reactions, however, have different reaction rates. While a nitroxyl radical has a high reaction rate, organodisulfides have significantly lower rates because bonds are broken and new bonds are formed. [ 7 ]
Batteries are commonly evaluated by their theoretical capacity (the total capacity of the battery if 100% of active material were utilized in the reaction). This value can be calculated as follows:
C t ( m A h ) = m n F 3.6 M {\displaystyle C_{t}(\mathrm {mA\ h} )={\frac {mnF}{3.6\,M}}}
where m is the total mass of active material in grams, n is the number of electrons transferred per repeat unit of active material, M is the molar mass of the repeat unit, F is Faraday's constant, and the factor 3.6 converts coulombs to milliampere-hours. Dividing by m gives the theoretical specific capacity in mAh g −1 . [ 8 ]
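As an illustration of how this relation is applied, the short Python sketch below computes the theoretical capacity of a one-electron redox polymer from Faraday's law; the PTMA repeat-unit values used (one electron per unit, molar mass of roughly 240 g/mol) are illustrative assumptions rather than figures taken from the studies cited above.

```python
# Theoretical capacity from Faraday's law (a minimal sketch; the PTMA
# repeat-unit values below are illustrative assumptions).
F = 96485.0  # Faraday's constant, C/mol

def theoretical_capacity_mAh(mass_g, n_electrons, molar_mass):
    """Total theoretical capacity in mAh for mass_g grams of active material."""
    # charge in coulombs = (mass / M) * n * F; 1 mAh = 3.6 C
    return mass_g * n_electrons * F / (3.6 * molar_mass)

# 1 g of a PTMA-type nitroxide polymer: one-electron redox, repeat unit ~240 g/mol
print(round(theoretical_capacity_mAh(1.0, 1, 240.0), 1))  # ~111.7 mAh, i.e. ~112 mAh/g
```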
Most polymer electrodes are tested in a metal-organic battery for ease of comparison to metal-based batteries. In this testing setup, the metal acts as the anode and either n- or p-type polymer electrodes can be used as the cathode. When testing the n-type organic, this metal-polymer battery is charged upon assembly and the n-type material is reduced during discharge, while the metal is oxidized. For p-type organics in a metal-polymer test, the battery is already discharged upon assembly. During initial charging, electrolyte salt cations are reduced and mobilized to the polymeric anode while the organic is oxidized. During discharging, the polymer is reduced while the metal is oxidized to its cation. [ 3 ]
Conductive polymers can be n-doped or p-doped to form an electrochemically active material whose conductivity arises from dopant ions on a conjugated polymer backbone. [ 9 ] [ 2 ] In conductive (i.e. conjugated) polymers, the redox-active group is embedded in the polymer backbone, as opposed to being a pendant group , with the exception of sulfur conductive polymers . [ 2 ] They are ideal electrode materials due to their conductivity and redox activity, therefore not requiring large quantities of inactive conductive fillers. [ 10 ] However, they also tend to have low coulombic efficiency and exhibit poor cyclability and self-discharge. [ 7 ] Due to the poor electronic separation of the polymer's charged centers, the redox potentials of conjugated polymers change upon charge and discharge because they depend on the dopant levels. As a result of this complication, the discharge profile (cell voltage vs. capacity) of conductive polymer batteries has a sloped curve. [ 3 ]
Conductive polymers struggle with stability due to high levels of charge, failing to reach the ideal of one charge per monomer unit of polymer. Stabilizing additives can be incorporated, but these decrease the specific capacity. [ 3 ]
Despite the conductivity advantage of conjugated polymers, their many drawbacks as active materials have furthered the exploration of polymers with redox active pendant groups. Groups frequently explored include carbonyls , carbazoles , organosulfur compounds , viologen , and other redox-active molecules with high reactivity and stable voltage upon charge and discharge. [ 2 ] These polymers present an advantage over conjugated polymers due to their localized redox sites and more constant redox potential over charge/discharge. [ 3 ]
Carbonyl compounds have been heavily studied, and thus present an advantage, as new active materials with carbonyl pendant groups can be prepared by many different synthetic routes. Polymers with carbonyl groups can form multivalent anions. Stabilization depends on the substituents; vicinal carbonyls are stabilized by enolate formation, aromatic carbonyls are stabilized by delocalization of charge, and quinoidal carbonyls are stabilized by aromaticity. [ 3 ]
Sulfur is one of Earth's most abundant elements and is thus advantageous as an active electrode material. Small-molecule organosulfur active materials exhibit poor stability, which is partially resolved via incorporation into a polymer. In disulfide polymers, electrochemical charge is stored in a thiolate anion, formed by a reversible two-electron reduction of the disulfide bond. Electrochemical storage in thioethers is achieved by the two-electron oxidation of a neutral thioether to a thioether with a +2 charge. As active materials, however, organosulfur compounds exhibit weak cyclability. [ 3 ]
Polymeric electrodes in organic radical batteries are electrochemically active with stable organic radical pendant groups that have an unpaired electron in the uncharged state. [ 11 ] Nitroxide radicals are the most commonly applied, though phenoxyl and hydrazyl groups are also often used. [ 3 ] A nitroxide radical can be reversibly oxidized, p-doping the polymer, or reduced, causing n-doping. Upon charging, the radical is oxidized to an oxoammonium cation, and at the cathode, the radical is reduced to an aminoxyl anion. [ 12 ] These processes are reversed upon discharge, and the radicals are regenerated. [ 11 ] For stable charge and discharge, both the radical and the doped form of the radical must be chemically stable. [ 12 ] These batteries exhibit excellent cyclability and power density, attributed to the stability of the radical and the simple one-electron transfer reaction. A slight decrease in capacity after repeated cycling is likely due to a build-up of swollen polymer particles, which increases the resistance of the electrode. Because the radical polymers are considerably insulating, conductive additives are often added, which lower the theoretical specific capacity. Nearly all organic radical batteries feature a nearly constant voltage during discharge, which is an advantage over conductive polymer batteries. [ 11 ] The polymer backbone and cross-linking techniques can be tuned to minimize the solubility of the polymer in the electrolyte, thereby minimizing self-discharge. [ 11 ]
During discharge, conductive polymers have a sloping voltage that hinders their practical applications. This sloping curve indicates electrochemical instability, which could be due to morphology, size, charge repulsions within the polymer chain during the reaction, or the amorphous state of the polymer.
Electrochemical performance of polymer electrodes is affected by polymer size, morphology, and degree of crystallinity. [ 14 ] In a polypyrrole (PPy)/sodium-ion hybrid battery, a 2018 study demonstrated that a polymer anode with a fluffy structure consisting of chains of submicron particles performed with a much higher capacity (183 mAh g −1 ) as compared to bulk PPy (34.8 mAh g −1 ). [ 15 ] The structure of the submicron polypyrrole anode allowed for increased electrical contact between the particles, and the electrolyte was able to further penetrate the polymeric active material. It has also been reported that amorphous polymeric active materials perform better than their crystalline counterparts. In 2014, it was demonstrated that crystalline oligopyrene exhibited a discharge capacity of 42.5 mAh g −1 , while amorphous oligopyrene had a higher capacity of 120 mAh g −1 . Further, the crystalline version experienced a sloped charge and discharge voltage and considerable overpotential due to slow diffusion of ClO 4 − . The amorphous oligopyrene had a voltage plateau during charge and discharge, as well as significantly less overpotential. [ 16 ]
The molecular weight of a polymer affects its chemical and physical properties, and thus the performance of a polymer electrode. A 2017 study evaluated the effect of molecular weight on the electrochemical properties of poly(TEMPO methacrylate) (PTMA). [ 17 ] By increasing the monomer-to-initiator ratio from 50/1 to 1000/1, five different sizes were achieved, ranging from 66 to 704 degrees of polymerization. A strong dependence on molecular weight was established, as the higher molecular weight polymers exhibited a higher specific discharge capacity and better cyclability. This effect was attributed to a reciprocal relationship between molecular weight and solubility in the electrolyte. [ 17 ]
Polymer-based batteries have many advantages over metal-based batteries. The electrochemical reactions involved are simpler, and the structural diversity of polymers and the methods of polymer synthesis allow for increased tunability for desired applications. [ 2 ] [ 3 ] While new types of inorganic materials are difficult to find, new organic polymers can be much more easily synthesized. [ 7 ] Another advantage is that, although polymer electrode materials may have lower redox potentials, they offer a higher energy density than inorganic materials. And, because the redox reaction kinetics of organics are faster than those of inorganics, they provide a higher power density and rate performance. Because of the inherent flexibility and light weight of organic materials as compared to inorganic materials, polymeric electrodes can be printed, cast, and vapor deposited, enabling application in thinner and more flexible devices. Further, most polymers can be synthesized at low cost or extracted from biomass and even recycled, while inorganic metals are limited in availability and can be harmful to the environment. [ 7 ]
Organic small molecules also possess many of these advantages; however, they are more susceptible to dissolving in the electrolyte. Polymeric organic active materials dissolve less easily and thus exhibit superior cyclability. [ 7 ]
Though superior in this sense to small organic molecules, polymers still exhibit some solubility in electrolytes, and battery stability is threatened by dissolved active material that can travel between electrodes, leading to decreased cyclability, self-discharge, and a loss of usable capacity. This issue can be lessened by incorporating the redox-active unit into the polymeric backbone, but this can decrease the theoretical specific capacity and increase electrochemical polarization. [ 3 ] [ 7 ] Another challenge is that, aside from conductive polymers, most polymeric electrodes are electrically insulating and therefore require conductive additives, reducing the battery's overall capacity. While polymers have a low mass density, this translates into a lower volumetric energy density, which in turn would require an increase in the volume of the devices being powered. [ 7 ]
A 2009 study evaluated the safety of a hydrophilic radical polymer and found that a radical polymer battery with an aqueous electrolyte is nontoxic, chemically stable, and non-explosive, and is thus a safer alternative to traditional metal-based batteries. [ 3 ] [ 18 ] Aqueous electrolytes present a safer option over organic electrolytes which can be toxic and can form HF acid. The one-electron redox reaction of a radical polymer electrode during charging generates little heat and therefore has a reduced risk of thermal runaway . [ 3 ] Further studies are required to fully understand the safety of all polymeric electrodes. | https://en.wikipedia.org/wiki/Polymer-based_battery |
Polymer-bonded explosives , also called PBX or plastic-bonded explosives , are explosive materials in which explosive powder is bound together in a matrix using small quantities (typically 5–10% by weight) of a synthetic polymer . PBXs are normally used for explosive materials that are not easily melted into a casting, or are otherwise difficult to form.
PBX was first developed in 1952 at Los Alamos National Laboratory , as RDX embedded in polystyrene with diisooctyl phthalate (DEHP) plasticizer . HMX compositions with teflon -based binders were developed in the 1960s and 1970s for gun shells and for Apollo Lunar Surface Experiments Package (ALSEP) seismic experiments, [ 1 ] although the latter experiments are usually cited as using hexanitrostilbene (HNS). [ 2 ]
Polymer-bonded explosives have several potential advantages:
Fluoropolymers are advantageous as binders due to their high density (yielding high detonation velocity ) and inert chemical behavior (yielding long shelf stability and low aging ). They are somewhat brittle, as their glass transition temperature is at room temperature or above. This limits their use to insensitive explosives (e.g. TATB ) where the brittleness does not have detrimental effects on safety. They are also difficult to process. [ 4 ]
Elastomers have to be used with more mechanically sensitive explosives like HMX . The elasticity of the matrix lowers sensitivity of the bulk material to shock and friction; their glass transition temperature is chosen to be below the lower boundary of the temperature working range (typically below -55 °C). Crosslinked rubber polymers are however sensitive to aging, mostly by action of free radicals and by hydrolysis of the bonds by traces of water vapor. Rubbers like Estane or hydroxyl-terminated polybutadiene (HTPB) are used for these applications extensively. Silicone rubbers and thermoplastic polyurethanes are also in use. [ 4 ]
Fluoroelastomers , e.g. Viton , combine the advantages of both.
Energetic polymers (e.g. nitro or azido derivatives of polymers) can be used as binders to increase the explosive power in comparison with inert binders. Energetic plasticizers can also be used. The addition of a plasticizer lowers the sensitivity of the explosive and improves its processability. [ 1 ]
Explosive yields can be affected by the introduction of mechanical loads or the application of temperature; such damaging stimuli are called insults. At low temperatures the mechanism by which a thermal insult acts on an explosive is primarily thermomechanical; at higher temperatures it is primarily thermochemical.
Thermomechanical mechanisms involve stresses by thermal expansion (namely differential thermal expansions, as thermal gradients tend to be involved), melting/freezing or sublimation/condensation of components, and phase transitions of crystals (e.g. transition of HMX from beta phase to delta phase at 175 °C involves a large change in volume and causes extensive cracking of its crystals).
Thermochemical changes involve decomposition of the explosives and binders, loss of strength of binder as it softens or melts, or stiffening of the binder if the increased temperature causes crosslinking of the polymer chains. The changes can also significantly alter the porosity of the material, whether by increasing it (fracturing of crystals, vaporization of components) or decreasing it (melting of components). The size distribution of the crystals can be also altered, e.g. by Ostwald ripening . Thermochemical decomposition starts to occur at the crystal nonhomogeneities, e.g. intragranular interfaces between crystal growth zones, on damaged parts of the crystals, or on interfaces of different materials (e.g. crystal/binder). Presence of defects in crystals (cracks, voids, solvent inclusions...) may increase the explosive's sensitivity to mechanical shocks. [ 4 ] | https://en.wikipedia.org/wiki/Polymer-bonded_explosive |
Polymer-fullerene bulk heterojunction solar cells are a type of solar cell researched in academic laboratories. Polymer-fullerene solar cells are a subset of organic solar cells, also known as organic photovoltaic (OPV) cells, which use organic materials as their active component to convert solar radiation into electrical energy. The polymer, which functions as the donor material in these solar cells, and fullerene derivatives, which function as the acceptor material (such as PCBM, or phenyl-C61-butyric acid methyl ester), are essential components. [ 2 ] Specifically, fullerene derivatives act as electron acceptors for donor materials like P3HT (poly-3-hexylthiophene-2,5-diyl), creating a polymer-fullerene based photovoltaic cell . [ 3 ] The polymer-fullerene BHJ forms two channels for transferring electrons and holes to the corresponding electrodes, as opposed to the planar architecture, in which the acceptor (A) and donor (D) materials are sequentially stacked on top of each other and each selectively contacts only the cathode or the anode. Hence, the D and A domains are expected to form a bi-continuous network with nanoscale morphology for efficient charge transport and collection after exciton dissociation. Therefore, in the BHJ device architecture, a mixture of D and A molecules in the same or different solvents is used to form a bi-continuous layer, which serves as the active layer of the device that absorbs light for exciton generation. The bi-continuous three-dimensional interpenetrating network of the BHJ design generates a greater D-A interface area, which is necessary for effective exciton dissociation in the BHJ because of the short exciton diffusion length. [ 4 ] When compared to the prior bilayer design, photo-generated excitons may dissociate into free holes and electrons more effectively, resulting in better charge separation and improved cell performance.
Photovoltaic cells featuring a polymeric blend of organics have shown promise in a field largely dominated by inorganic (e.g. silicon ) solar cells. Some of the improvements that organic solar cells have over inorganic solar cells are that they are flexible and therefore can be applied to a larger range of surfaces. [ 5 ] They can also be produced much more easily via inkjet printing or spray deposition , and therefore are vastly cheaper to manufacture. [ 6 ] A downside is that, because they are not crystalline (like silicon ), but instead are produced in a purposely disordered blend of electron-acceptor and -donor materials (hence the name bulk heterojunction), they have a limited efficiency of charge transport . [ 7 ]
However, the efficiencies of these new types of photovoltaic cells have risen from 2.5% in 2001, to 5% in 2006, to greater than 10% in 2011. [ 8 ] This is because improved methods for solution processing of acceptor and donor materials led to more efficient blending of the two materials. Further research can lead to polymer-fullerene based photovoltaic cells that approach the efficiency of current inorganic photovoltaic cells.
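Efficiency figures such as these come from standard current-voltage measurements. The sketch below shows the usual power-conversion-efficiency calculation from the short-circuit current density, open-circuit voltage and fill factor; the numerical inputs are illustrative assumptions for a P3HT:PCBM-type cell, not data from the studies cited above.

```python
# Power conversion efficiency from J-V parameters (a minimal sketch;
# the input values are illustrative assumptions, not measured data).
def power_conversion_efficiency(j_sc, v_oc, fill_factor, p_in=100.0):
    """PCE in percent from J_sc (mA/cm^2), V_oc (V), fill factor (0-1)
    and incident power density (mW/cm^2, 100 for AM1.5G)."""
    return j_sc * v_oc * fill_factor / p_in * 100.0

# hypothetical P3HT:PCBM cell under AM1.5G illumination
print(round(power_conversion_efficiency(j_sc=9.5, v_oc=0.60, fill_factor=0.65), 2))  # ~3.7 %
```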
Materials used in polymer-based photovoltaic cells are characterized by their total electron affinities and absorption power. The electron-rich, donor materials tend to be conjugated polymers with relatively high absorption power, whereas the acceptor in this case is a highly symmetric fullerene molecule with a strong affinity for electrons, ensuring sufficient electron mobility between the two. [ 5 ]
The arrangement of materials essentially determines the overall efficiency of the heterojunction solar cell. There are three donor-acceptor bulk morphologies: (a) the bilayer, (b) the bulk heterojunction, and (c) the "comb" structure. Typically, a polymer-fullerene bulk heterojunction solar cell has a layered structure.
The working principle of a fullerene-based BHJ OPV device involves four fundamental steps: (i) photon absorption and exciton creation, (ii) exciton diffusion and splitting at the D-A interface, (iii) charge transport, and (iv) charge collection. [ 12 ] In a BHJ OPV device, the donor material is the one that absorbs the incoming light. The excitons must diffuse to the D-A interface, where, owing to a substantial potential energy drop, they are split into free charge carriers such as electrons and holes. [ 13 ] There can be some limitations and losses during the device operation steps discussed above, which include absorption loss due to spectral mismatch, thermalization loss, inefficient exciton splitting, charge recombination loss, etc. [ 14 ]
For fullerene-based OPVs, there are two device architectures in use today: traditional (conventional) and inverted . The conventional BHJ architecture set a significant milestone in improving OPV efficiencies towards commercialization. However, due to oxygen and moisture intrusion into the electrodes, as well as damage caused by air or oxidation of the electrodes, the environmental stability of these OPVs remains the most difficult challenge to overcome. To overcome this challenge, researchers established an inverted device architecture for BHJ PSCs. In an inverted device, the bottom transparent electrode serves as the cathode while the top electrode is the anode. Inverted devices exhibit higher environmental stability, [ 15 ] and in most cases higher efficiencies, than the conventional OPV architecture, which is achieved by using a high work function metal or metal oxides as the cathode and a low work function metal as the anode. In the normal architecture, the low work function cathode is easily oxidized in air by oxygen and moisture; using a higher work function cathode minimizes this tendency and improves efficiency and stability.
The primary function of a solar cell is the conversion of light energy into electrical energy by means of the photovoltaic effect . [ 16 ] In particular, polymer-fullerene bulk heterojunction solar cells are promising because of their potential in low processing costs and mechanical flexibility in comparison to conventional inorganic solar cells. [ 17 ] [ 18 ] Solution processing potentially allows reductions in manufacturing costs through screen printing , doctor blading , inkjet printing, and spray deposition at low temperatures. [ 19 ] [ 20 ] To overcome the narrow spectral overlap of organic polymer absorption bands, experiments have blended conjugated polymer donors with high electron affinity fullerene derivatives as acceptors to extend the spectral sensitivity. Ternary solar cells are a promising approach to increased efficiency and light harvesting properties of organic photovoltaic cells (OPV). [ 21 ]
Fullerene-based BHJ OPV devices are expected to possess the following characteristics for successful commercialization: high performance, environmental friendliness, a simple fabrication process, high stability, and low cost. However, efficiency and stability are the major challenges faced by PSCs on the way to commercialization. [ 22 ] [ 23 ] [ 24 ] Exciton diffusion lengths in conjugated polymers are limited to a few nanometers (less than 20 nm), shorter than the optical absorption path length (~ 100–200 nm), [ 25 ] which contributes to lower power conversion efficiencies in PSCs. Another factor that limits device efficiency is the lower charge carrier mobility in conjugated polymers, which causes recombination of the carriers before they reach their respective electrodes. [ 26 ] Consequently, the solar cell experiences a significant loss in photogenerated current, and hence poor device performance. Poor charge carrier collection at the electrodes due to energy-level mismatch is another factor that limits device performance. If there is a mismatch between the anode and the donor HOMO, or between the cathode and the acceptor LUMO, then no Ohmic contacts are established, which ultimately results in poor performance of the solar cell. In most fullerene-based BHJ OPVs, such a mismatch exists, which poses a great challenge to charge carrier collection at the respective electrodes and to overall device performance.
The stability of PSC devices is the most important factor that should be given great attention to realize their commercialization, though there is limited literature on the stability of PSCs compared to the literature on their efficiency. A study of the stability of PSCs helps in understanding how a device degrades during its operation. [ 27 ] Device instability occurs due to a range of complex phenomena that are at play simultaneously. [ 23 ] These degradation factors include mechanical stress, irradiation (time and intensity), water, oxygen, and heating, and may affect the active layer, the transport layers, the contacts, and the interface of every layer with the adjacent layers.
Polymer-protein hybrids are a class of nanostructure composed of protein–polymer conjugates (i.e. complexes composed of one protein attached to one or more polymer chains ). [ 1 ] [ 2 ] The protein component generally gives the advantages of biocompatibility and biodegradability , as many proteins are produced naturally by the body and are therefore well tolerated and metabolized. [ 3 ] Although proteins are used as targeted therapy drugs, their main limitations—lack of stability and insufficient circulation times—remain. [ 4 ] Therefore, protein-polymer conjugates have been investigated to further enhance pharmacologic behavior and stability. [ 5 ] By adjusting the chemical structure of the protein-polymer conjugates, polymer-protein particles with unique structures and functions, such as stimulus responsiveness, enrichment in specific tissue types, and enzyme activity, [ 6 ] can be synthesized. Polymer-protein particles have been the focus of much research recently because they possess potential uses including bioseparations, imaging, biosensing, and gene and drug delivery. [ 7 ]
Attaching a single polymer chain to a specific site away from the active center of the protein has less impact on protein activity compared with random attachments. [ 8 ] [ 9 ] In practice, attaching a single polymer chain can be used to adjust chemical properties of the therapeutic protein. For example, conjugation of a single chain of the hydrophilic polyethylene glycol (PEG) can increase the hydrodynamic radius of the protein conjugate by 5-10 fold. [ 10 ] Attachment to PEG was mainly achieved by covalent conjugation via the grafting to strategy, targeting chemo-selective anchor groups. Other polymers, such as oligosaccharides and polypeptides, offer different properties to the enzymes attached to them.
Researchers conjugated the thermo-responsive polymer poly(N-isopropylacrylamide) (pNIPAm) with the biotin-recognizing protein streptavidin close to its recognition site. [ 11 ] At temperatures above the lower critical solution temperature (LCST), the polymer collapses and blocks the binding site, thus reversibly preventing biotin from binding to streptavidin. By copolymerization with two different thermosensitive polymers poly(sulfobetaine methacrylamide) (pSBAm) and pNIPAm together, researchers can control enzyme activity in a small temperature window. [ 12 ]
Conjugating poly((N,N'-dimethylacrylamide)-co-4-phenylazophenyl acrylate) at the active site of endoglycanase creates a photoswitchable protein hybrid. [ 13 ] The resulting hybrid catalyzes the hydrolysis of glycoside when irradiated by 350 nm UV light, but turns inactive under 420 nm visible light, depending on the conformation of the conjugated polymer. [ 4 ]
A polymer shell is formed by conjugation of multiple polymer molecules onto the protein core. The polymer shell can either protect the protein core from unwanted degradation or create desired interactive sites for guest molecules. The first generation of polymer shell–protein core structures mainly made use of polyethylene glycol (PEG) chains to increase the hydrodynamic radius and reduce the immune response to proteins. [ 14 ] However, the PEG shell can reduce protein activity in the inner core. More advanced designs use biodegradable linkers to achieve programmed release of the protein core in specific tissues. Several therapeutic designs with biodegradable PEG shells are already being developed in vivo . [ 15 ] [ 16 ] [ 17 ]
Direct conjugation of polymers (the "grafting to" strategy) can efficiently construct a polymer shell with diverse polymer types; however, it gives a low polymer density, especially with large polymers. In contrast, the "grafting from" strategy allows the formation of a dense and uniform polymer shell. The protein core can also function as a carrier for other therapeutic molecules, such as plasmid DNA. [ 18 ]
Dendritic polymer shells have a high volume-to-molecular-weight ratio compared with traditional polymer shells. Using branched carbohydrates can give unique biological properties while maintaining molecular definition. [ 19 ] [ 20 ]
Although covalent conjugation has been the dominant strategy for constructing polymer-protein hybrids, noncovalent chemistry can add another level of complexity and provides the opportunity to create higher-ordered structures. Specifically, self-assembly by non-covalent interactions is progressing rapidly. [ 21 ] [ 22 ] [ 23 ] Supramolecular self-assembly can create nanoparticles , vesicles/micelles, protein cages, etc. Metal-binding interactions, host-guest chemistry, and boronic acid-based chemistries are widely studied as non-covalent conjugation methods to create polymer-protein hybrids. [ 24 ] [ 25 ] [ 26 ]
Streptavidin is a protein purified from the bacterium Streptomyces avidinii , which has a high affinity for biotin. By covalently linking streptavidin and polymers, well defined supramolecular constructs can be created due to the high specificity of streptavidin for both biotin and its analogues. [ 27 ]
Building upon the covalent core shell strategy, several polymer–streptavidin systems have been developed for affinity separation, bio-sensors and diagnostic applications due to the robust binding conditions and stability of the protein. [ 28 ]
Streptavidin can be used as a macro-initiator for in situ ATRP ; through the "grafting from" strategy, a stoichiometrically well-defined polymer-protein conjugate can be synthesized. Polymer-streptavidin systems can also be empowered to cross the cellular membrane by conjugation with cell-penetrating molecules such as peptides and membrane-disturbing polymers.
Polymer streptavidin systems can also be modulated to respond to certain environmental changes such as pH. By incorporating pH responsive poly(propylacrylic acid) (PPAAc) into the system, tumor cell suppressor p53 and cytochrome C can be delivered into cancer cells efficiently. [ 29 ]
For biomolecules that are not hampered by the biotin-streptavidin interaction, iminobiotin, an analogue of biotin, has been applied as a pH-sensitive linker that allows the controlled and reversible assembly and intracellular release of cargo molecules in acidic intracellular compartments. [ 30 ]
Polymer-protein conjugates can also form a higher ordered supramolecular structure via self-assembly of amphiphilic polymers into micelles and microcapsules, which is one of the most promising strategies to generate drug delivery systems. Such systems have the innate advantage of rapid preparation, a high drug loading capacity, ease of surface decoration, and the potential to be stimuli responsive.
Micelles are a type of supramolecular structure formed by the self-assembly of amphiphilic molecules, usually with a hollow center. Researchers have successfully conjugated a diblock copolymer site-specifically onto GFP; the resulting amphiphilic polymer-protein conjugate is capable of reversible self-assembly into micelles. [ 31 ]
In addition to retaining the native globular shape of proteins, the polypeptide backbone of denatured proteins can also be conjugated with hydrophilic polymer chains to generate higher-ordered structures through hydrophobic interactions. For example, nanoconjugates of poly(ethylene glycol) (PEG) and denatured bovine serum albumin (BSA) will spontaneously self-assemble into a micellar structure, whose protein core can adsorb high numbers of hydrophobic drugs. [ 32 ]
An efficient way to synthesize protein-polymer hybrid nanoparticles is to take advantage of photoinitiated reversible addition−fragmentation chain transfer (RAFT) polymerization-induced self-assembly(PISA) by using multi-RAFT modified bovine serum albumin (BSA) as a macromolecular chain transfer agent. RAFT mediated growth of the PHPMA chains will graft from the BSA-RAFT, and increase the hydrophobicity of the star BSA−PHPMA conjugates. At the critical aggregation concentration, they form nanoparticles due to the hydrophobic interactions. [ 33 ] The resulting nanoparticles show excellent encapsulation capability for both hydrophobic and hydrophilic molecules, such as cancer drugs and DNA.
A rather easy method to prepare protein-polymer hybrid nanoparticles is nanoprecipitation. Spherical nanoparticles composed of BSA-PMMA with diameters of around 100 nm were obtained and the water insoluble chemotherapeutic drug camptothecin was encapsulated within the hydrophobic core consisting of PMMA. [ 6 ] Such protein-polymer hybrid nanoparticles possess tunable sizes and surface charges, have attractive bio-compatibilities and allow efficient cell uptake. Camptothecin-encapsulated BSA-PMMA nanoparticles revealed enhanced anti-tumor activity both in vitro and in animals.
Beyond the nanoscale, protein-polymer conjugates can also be used as building blocks for constructing more complicated structures, such as microcapsules, through hydrophobic interactions. Using the Pickering emulsion technique, BSA–pNIPAm nanoconjugates can be processed into hollow microcapsules consisting of a closely packed monolayer of conjugated protein–polymer building blocks (named proteinosomes). [ 34 ] These proteinosomes exhibit protocellular properties such as guest molecule encapsulation, selective permeability, controllable mobilization, gene-directed protein synthesis and membrane-gated internalized enzyme catalysis. [ 35 ]
Based on the above-mentioned method, a multi-responsive microcapsule has been synthesized by incorporating photoswitchable spiropyran units and the thermoresponsive monomer N-isopropylacrylamide into the membrane. [ 36 ] The stimuli-responsive membrane exhibited advantages in the capture and release of products of different molecular weights by opening and closing the photoresponsive spiropyran ligands under stimuli such as body temperature, room temperature, UV light, and redox conditions.
Another effective way to modulate the permeability of microcapsules is based on a self-sacrificing strategy. By selectively using lysozyme and BSA as building blocks as well as self-sacrificing components, corresponding pores can be generated in the membrane, and the permeability of the resulting microcapsules can be increased from 10 kDa to 22 kDa and then to 71 kDa. By loading FITC-Lys (14 kDa), RBITC-dextran (70 kDa) and DNA (90 kDa) into the microcapsules, a programmed release of the encapsulants from low molecular weight to high molecular weight was realized. [ 37 ]
Using a similar strategy, pH-sensitive protein-polymer microcapsules were developed. Both doxycycline (DOX) and folic acid were incorporated covalently onto the surface of the protein. The very low toxicity of the polymer-protein nanoconjugates effectively avoided the high toxicity of DOX, which is expected not only to reduce toxic side effects, but also to improve anticancer efficiency in in vitro examinations. [ 38 ]
Protein nanocages are natural nanocarriers composed of protein subunits with a porous structure. They benefit from monodispersity, intrinsic high stability for protection of internalized drugs from enzymatic degradation and controllable assembly for cargo loading and release.
However, their application might be hindered by immunogenicity, broad biodistribution, and significant variations in function and properties. The incorporation of polymer chains by performing in situ ATRP on the outer surface of or inside the protein nanocages can be an effective way to mitigate those drawbacks. For example, increased loading density of cargo molecules and enhanced stability of the cage assembly can be obtained via internal ATRP inside the cavity of the virus capsid. [ 39 ]
Beyond virus type particles, large multimeric proteins such as the iron storage protein ferritin have emerged as attractive tools to be used as well-defined nano-containers. Using a grafting from strategy, polymers can be introduced to ferritin in a highly regular fashion for precise spatial control. [ 40 ] These polymer–ferritin constructs exhibited protease resistance, enabling longer retention time within the bloodstream while reducing possible antibody interactions.
Polymer-protein nanoparticles not only possess the traditional properties of nanoparticles, but also have their own unique properties based on the properties of specific proteins. Because they are proteinaceous, they have high biocompatibility, biodegradability and biofunctionality. [ 35 ] Protein-polymer bioconjugates, which are the building blocks of polymer-protein hybrids, exhibit a unique array of properties such as light-switching effects, [ 41 ] [ 42 ] acoustic signal capture, thermal energy transfer, and magnetic signal response. [ 43 ] [ 44 ] [ 45 ] [ 6 ] [ 46 ]
Generally, Polymer-Protein hybrids can be synthesized by interfacial self-assembly of protein–polymer conjugates in emulsions. [ 47 ]
The "grafting to" approach, the most common and most straightforward methodology, refers to directly attaching pre-formed synthetic polymers to the target protein. This technique can be engineered for site-specific or random conjugation and, when compared to other conjugation methods, provides simple and thorough characterization of the polymer before conjugation. When using this method, the protein remains unaffected by the polymerization conditions.
In the "grafting from" approach, a protein is first conjugated with the initiator, and the polymer chain then grows from the protein core in a controlled manner via living polymerization. Like the previously discussed method, the grafting from approach can be designed for site-specific or random attachment.
Unlike the "grafting from" and "grafting to" approaches, which conjugate several polymers onto one protein core, the "grafting through" approach enables several proteins to connect to one polymer chain due to the multivalent nature of the protein.
Thermoresponsive conjugates have been exploited for the subsequent separation of proteins from a complex mixture. This method has been utilized to purify polyclonal antibodies in serum samples. This method of purification is rapid, sensitive, inexpensive and could be used to purify various types of antibodies. [ 48 ]
Thermoresponsive conjugates can also be exploited to mediate bioactivity. One demonstrated utility of this method is temperature control of biotin binding and release. Biotin binding was observed below the LCST, while above the LCST the conjugates aggregated and the biotin binding affinity was reduced by ~20%. By changing the temperature, recovery of the biotinylated molecules can be achieved. [ 49 ]
The adsorption of proteins onto particles in physiological fluids can greatly affect the subsequent medical performance of the particles in vivo. Nonspecific protein adsorption can be controlled in vivo by modifying the nanoparticle surface with a non-toxic, biocompatible protein possessing tolerable antigenic properties, such as albumin. [ 50 ]
The high recognition ability of proteins can enable high delivery efficiency. Protein-polymer particles have potential to deliver drugs to specific regions of the body using the inherent biorecognition property at the protein interface. [ 51 ] Additionally, in some cases the presentation of specific proteins on nanoparticle surfaces can be useful for aiding passage through impermeable biological barriers. [ 52 ]
Enzyme-catalyzed reactions can be performed at higher temperatures using enzyme-immobilized nanoparticles, in which the presence of multiple proteins at the nanoparticle surface facilitates the retention of water molecules limiting the denaturation of the attached proteins. After modification with poly(amide), protein activity could remain unchanged over 500 min at 50 °C, while the half-life time of the native lipase at 50 °C is only 30 min in aqueous solution. [ 53 ] Immobilized enzymes on nanoparticles can significantly improve the efficiency of enzyme reactions by increasing tolerance to a wider range of experimental conditions without significantly reducing biological activity. Besides, polymer-protein particles are reported to control the activity of proteins [ 54 ] and compartmentalize different enzymes to perform multi-step reactions. [ 55 ]
By immobilizing proteins to polymer nanoparticles or polymer/inorganic hybrid nanoparticles (such as polymer-stabilized iron oxide nanoparticles), proteins or their affinity ligands can be separated from complex solutions by applying magnetic fields or centrifugation. Lipase attached to iron oxide nanoparticles maintained 85% biological activity after 30 reaction and separation cycles. [ 56 ]
As the appropriate target is combined with magnetic nanoparticles, the selected target can be magnetically separated directly from natural biological fluids, [ 57 ] which offers a fast, gentle, scalable, and easy-to-automate separation technique. The simplicity of magnetic separation has been applied in a number of disciplines, including mineral processing, wastewater treatment, molecular biology, cell sorting, and clinical diagnostics. [ 58 ] [ 59 ]
Microcapsules termed protocells, prepared from polymer-protein hybrids, have recently become a hotspot of this research area, enabling various functions such as bioreactors, [ 45 ] cascade systems, [ 60 ] and multiresponsive membranes. [ 37 ]
The Polymer Battery Experiment ( PBEX ) demonstrates the charging and discharging characteristics of polymer batteries in the space environment. PBEX validates use of lightweight, flexible battery technology to decrease cost and weight for future military and commercial space systems. PBEX was developed by Johns Hopkins University and is one of four On Orbit Mission Control (OOMC) packages on PicoSat 9 : [ 1 ]
See also: Batteries in space
This article incorporates public domain material from websites or documents of the National Aeronautics and Space Administration .
The Interdisciplinary Research Centre in Polymer Science and Technology is a consortium of research groups, formed in 1989 from the Universities of Durham , Leeds and Bradford , all of which are involved in research in polymer science and technology. [ 1 ] The University of Sheffield joined in 2004. The Polymer IRC has complementary expertise in polymer chemistry , physics and processing, with research programmes covering a wide range of multi-disciplinary polymer science and technology.
Research programmes in the Polymer IRC are funded by grants from government bodies, in particular the Engineering and Physical Sciences Research Council and industry .
From 2011 the Polymer IRC has been centred at the extensive polymer engineering laboratories at the University of Bradford, with further interdisciplinary research in polymers with pharmaceuticals and, since 2015, materials chemistry. From 2009, a substantial Science Bridges China programme in Advanced Materials for Healthcare with leading Chinese universities began, which has led to three joint international research laboratories and many early career researcher exchanges.
Adsorption is the adhesion of ions or molecules onto the surface of another phase. [ 1 ] Adsorption may occur via physisorption and chemisorption . Ions and molecules can adsorb to many types of surfaces including polymer surfaces. A polymer is a large molecule composed of repeating subunits bound together by covalent bonds . In dilute solution, polymers form globule structures. When a polymer adsorbs to a surface that it interacts favorably with, the globule is essentially squashed, and the polymer has a pancake structure. [ 2 ]
Polymer surfaces differ from non-polymer surfaces in that the subunits that make up the surface are covalently bonded to one another. Non-polymer surfaces can be bound by ionic bonds , metallic bonds or intermolecular forces (IMFs) . In a two component system, non-polymer surfaces form when a positive net amount of energy is required to break self-interactions and form non-self-interactions. Therefore, the energy of mixing (Δ mix G) is positive. This amount of energy, as described by interfacial tension, varies for different combinations of materials. However, with polymer surfaces, the subunits are covalently bonded together and the bulk phase of the solid surface does not allow for surface tension to be measured directly. [ 3 ] The intermolecular forces between the large polymer molecules are difficult to calculate and cannot be determined as easily as non-polymer surface molecular interactions. [ 3 ] The covalently bonded subunits form a surface with differing properties as compared to non-polymer surfaces. Some examples of polymer surfaces include: polyvinyl chloride (PVC) , nylon , polyethylene (PE) , and polypropylene (PP) . Polymer surfaces have been analyzed using a variety of techniques, including: scanning electron microscopy, scanning tunneling microscopy, and infrared spectroscopy. [ 3 ]
The adsorption process can be characterized by determining what amount of the ions or molecules are adsorbed to the surface. This amount can be determined experimentally by the construction of an adsorption isotherm. An adsorption isotherm is a graph of Γ(P,T) versus partial pressure of the adsorbate(P/P 0 ) for a given constant temperature, where Γ(P,T) is the number of molecules adsorbed per surface area. [ 1 ] As the partial pressure of the adsorbate increases, the number of molecules per area also increases.
Contact angle , the angle at which a liquid droplet meets a solid surface, is another way to characterize polymer surfaces. Contact angle (θ) is a measure of the wetting ability of the liquid on a solid surface. [ 4 ] Generally, due to low surface energy, liquids will not wet polymer surfaces and the contact angle will be greater than 90°. [ 3 ] The liquid molecules are more attracted to other liquid molecules than to the polymer surface. Because polymer surfaces are solid surfaces, surface tension cannot be measured in a traditional way such as using a Wilhelmy plate . Instead, contact angles can be used to indirectly estimate the surface tension of polymer surfaces. [ 3 ] This is accomplished by measuring the contact angles of a series of liquids on a polymer surface. A Fox and Zisman plot of cos θ versus the surface tensions of the liquids (γ L ) gives a straight line which can be extrapolated back to determine the critical surface tension of the solid (γ c ). [ 3 ]
cos θ = 1 − β ( γ L − γ c ) {\displaystyle \cos \theta =1-\beta (\gamma _{L}-\gamma _{c})\ }
where θ is the contact angle of the test liquid on the polymer, β is an empirical constant, γ L is the surface tension of the test liquid, and γ c is the critical surface tension of the solid.
The variable β was previously determined to be approximately 0.03 to 0.04. [ 3 ] While the actual surface tension of the solid polymer surface cannot be determined, the Fox and Zisman plot serves as an estimate. However, this estimate may be skewed if there are significant intermolecular forces between the surface and the liquid. Also, this plot is not applicable for binary mixtures of liquids dropped onto a polymer surface. Some estimated surface tensions of different polymers and the contact angles of different liquids on polymer surfaces is shown below. [ 5 ] [ 6 ]
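The extrapolation described above can be carried out with a simple linear fit, as in the Python sketch below; the contact angles and liquid surface tensions are made-up illustrative values, not measurements from the tables referenced in the text.

```python
# Zisman-style estimate of the critical surface tension (a minimal sketch;
# the contact-angle / surface-tension pairs are made-up illustrative data).
import numpy as np

gamma_L = np.array([72.8, 58.2, 48.0, 38.0, 28.0])      # liquid surface tensions, mN/m
theta_deg = np.array([102.0, 88.0, 72.0, 50.0, 20.0])   # contact angles on the polymer, degrees

# Fit cos(theta) = 1 - beta*(gamma_L - gamma_c), i.e. a straight line in gamma_L
slope, intercept = np.polyfit(gamma_L, np.cos(np.radians(theta_deg)), 1)
beta = -slope
gamma_c = (intercept - 1.0) / beta   # gamma_L at which cos(theta) extrapolates to 1
print(round(gamma_c, 1), round(beta, 3))
```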
Different polymer surfaces have different side chains on their monomers that can become charged due to the adsorption or dissociation of adsorbates. For example, polystyrene sulfonate has monomers containing negatively charged side chains which can adsorb positively charged adsorbates. Polystyrene sulfonate will adsorb more positively charged adsorbate than negatively charged. Conversely, for a polymer that contains positively charged side chains, such as poly(diallyldimethylammonium chloride) , negatively charged adsorbates will be strongly attracted.
Because the ability of a surface to adsorb molecules onto its surface depends on energies of interaction, thermodynamics of adsorption can be used to understand the driving forces for adsorption. To measure the thermodynamics of polymer surfaces, contact angles are often used to easily obtain useful information. The thermodynamic description of contact angles of a drop of liquid on a solid surface are derived from the equilibrium formed between the chemical potentials of the solid–liquid, solid–vapor, and liquid–vapor interfaces.
At equilibrium, the contact angle of a liquid drop on a surface does not change. Therefore, the Gibbs free energy change is equal to 0:
d G = 0 {\displaystyle dG=0}
The chemical potentials of the three interfaces must cancel out, producing Young's equation for the relationship between surface energies and contact angles: [ 8 ]
γ S V = γ S L + γ L V cos θ {\displaystyle \gamma _{SV}=\gamma _{SL}+\gamma _{LV}\cos \theta }
where γ SV is the solid–vapor interfacial energy, γ SL is the solid–liquid interfacial energy, γ LV is the liquid–vapor interfacial energy (the surface tension of the liquid), and θ is the equilibrium contact angle.
However, this equation cannot be used to determine the surface energy of a solid surface by itself. It can be used in conjunction with the following equation to determine the relationship between contact angle and surface energy of the solid, as surface tension ≈ surface energy for a solid: [ 1 ]
where
Using these two equations, the surface energy of a solid can be determined simply by measuring the contact angle of two different liquids of known surface tension on that solid's surface. [ 8 ]
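The combining rule used alongside Young's equation is not reproduced above; as a purely illustrative sketch, the Python code below assumes the Owens–Wendt geometric-mean model, one common choice for the two-liquid method, which additionally requires the dispersive and polar components of the test liquids' surface tensions. The contact angles shown are hypothetical.

```python
# Two-liquid estimate of a solid's surface energy (a minimal sketch assuming
# the Owens-Wendt geometric-mean model; liquid data are literature-style values
# and the contact angles are hypothetical).
import numpy as np

def solid_surface_energy(liquids):
    """liquids: list of (theta_deg, gamma_dispersive, gamma_polar) per test liquid, mN/m.
    Solves gamma_L*(1 + cos theta) = 2*(sqrt(gs_d*gl_d) + sqrt(gs_p*gl_p)) for the solid."""
    A, b = [], []
    for theta, gl_d, gl_p in liquids:
        gl = gl_d + gl_p
        A.append([2.0 * np.sqrt(gl_d), 2.0 * np.sqrt(gl_p)])  # unknowns: sqrt(gs_d), sqrt(gs_p)
        b.append(gl * (1.0 + np.cos(np.radians(theta))))
    x = np.linalg.solve(np.array(A), np.array(b))
    gs_d, gs_p = x[0] ** 2, x[1] ** 2
    return gs_d + gs_p, gs_d, gs_p

# water (21.8 dispersive + 51.0 polar) and diiodomethane (50.8 dispersive) on a polymer
total, disp, polar = solid_surface_energy([(95.0, 21.8, 51.0), (45.0, 50.8, 0.0)])
print(round(total, 1), round(disp, 1), round(polar, 1))  # ~37.5, ~37.0, ~0.5 mN/m
```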
For heterogeneous surfaces (consisting of two or more different types of material), the contact angle of a drop of liquid at each point along the three phase contact line with a solid surface is a result of the surface tension of the surface at that point. For example, if the heterogeneous regions of the surface form very large domains, and the drop exists entirely within a homogeneous domain, then it will have a contact angle corresponding to the surface tension of that homogeneous region.
Likewise, a drop that straddles two domains of differing surface tensions will have different contact angles along the three phase contact line corresponding to the different surface tensions at each point.
However, with sufficiently small domains (such as those of a block copolymer), the observed surface energy of the surface approaches the weighted average of the surface energies of each of the constituents of the surface: [ 8 ]
γ = f 1 γ 1 + f 2 γ 2 {\displaystyle \gamma =f_{1}\gamma _{1}+f_{2}\gamma _{2}}
where γ is the observed surface energy, f 1 and f 2 are the area fractions of the two surface components (f 1 + f 2 = 1), and γ 1 and γ 2 are the surface energies of the two components.
This occurs because as the size of the homogeneous domains become very small compared to the size of the drop, the differences in contact angles along different homogeneous regions becomes indistinguishable from the average of the contact angles. [ 8 ]
The observed contact angle is given by the following formula: [ 8 ]
cos θ o b s = f 1 cos θ 1 + f 2 cos θ 2 {\displaystyle \cos \theta _{obs}=f_{1}\cos \theta _{1}+f_{2}\cos \theta _{2}}
where θ obs is the observed contact angle, f 1 and f 2 are the area fractions of the two surface components, and θ 1 and θ 2 are the contact angles the liquid would form on pure surfaces of each component.
If the polymer is made of only two different monomers, it is possible to use the above equation to determine the composition of the polymer simply by measuring the contact angle of a drop of liquid placed on it: [ 8 ] [ 9 ]
f 1 = cos θ o b s − cos θ 2 cos θ 1 − cos θ 2 {\displaystyle f_{1}={\frac {\cos \theta _{obs}-\cos \theta _{2}}{\cos \theta _{1}-\cos \theta _{2}}}}
where f 1 is the surface fraction of monomer 1 (with f 2 = 1 − f 1 ), θ obs is the measured contact angle on the copolymer, and θ 1 and θ 2 are the contact angles of the same liquid on the respective homopolymer surfaces.
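A minimal Python sketch of this composition estimate is shown below; the contact angles used are hypothetical and the two surface fractions are assumed to sum to one.

```python
# Estimating a two-component surface composition from a measured contact angle
# (a minimal sketch with hypothetical angles; assumes f1 + f2 = 1).
import math

def surface_fraction(theta_obs, theta_1, theta_2):
    """Return the area fraction f1 of component 1 from contact angles in degrees."""
    c = lambda t: math.cos(math.radians(t))
    return (c(theta_obs) - c(theta_2)) / (c(theta_1) - c(theta_2))

# pure-component angles of 108 deg and 65 deg; observed angle of 90 deg
print(round(surface_fraction(90.0, 108.0, 65.0), 2))  # ~0.58
```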
One of the defining features of polymer surfaces and coatings is the chemical regularity of the surface. While many materials can be irregular mixtures of different components, polymer surfaces tend to be chemically uniform, with the same distribution of different functional groups across all areas of the surface. Because of this, adsorption of molecules onto polymer surfaces can be easily modeled by the Langmuir or Frumkin isotherms. The Langmuir equation states that for the adsorption of a molecule of adsorbate A onto a surface binding site S , a single binding site is used, and each free binding site is equally likely to accept a molecule of adsorbate: [ 1 ]
A + S ⇌ AS
where A is the adsorbate in solution (or in the gas phase), S is an empty surface binding site, and AS is an occupied binding site.
The equilibrium constant for this reaction is then defined as: [ 1 ]
K = [ A S ] [ A ] [ S ] {\displaystyle K={\frac {[AS]}{[A][S]}}}
The equilibrium constant is related to the equilibrium surface coverage θ , which is given by: [ 1 ]
θ = K [ A ] 1 + K [ A ] {\displaystyle \theta ={\frac {K[A]}{1+K[A]}}}
where θ is the fraction of surface binding sites occupied by the adsorbate and [A] is the concentration (or partial pressure) of the adsorbate.
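The Langmuir coverage expression is straightforward to evaluate, as in the sketch below; the equilibrium constant and concentrations are illustrative assumptions.

```python
# Langmuir surface coverage as a function of adsorbate concentration
# (a minimal sketch; K and the concentrations are illustrative assumptions).
def langmuir_coverage(conc, K):
    """Fractional coverage theta = K*c / (1 + K*c)."""
    return K * conc / (1.0 + K * conc)

K = 2.0e3  # assumed equilibrium constant, L/mol
for c in (1e-4, 1e-3, 1e-2):  # adsorbate concentrations, mol/L
    print(c, round(langmuir_coverage(c, K), 3))
# coverage rises from ~0.17 toward saturation (~0.95) as concentration increases
```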
Because many polymers are composed of primarily of hydrocarbon chains with at most slightly polar functional groups, they tend to have low surface energies and thus adsorb rather poorly. While this can be advantageous for some applications, modification of polymer surfaces is crucial for many other applications in which adhering a substrate to its surface is vital for optimal performance. For example, many applications utilize polymers as structural components, but which degrade rapidly when exposed to weather or other sources of wear. [ 10 ] Therefore, coatings must be used which protect the structural layer from damage. However, the poor adhesive properties of nonpolar polymers makes it difficult to adsorb the protective coating onto its surface. These types of problems make the measurement and control of surface energies important to development of useful technologies.
The Gibbs energy of adsorption, Δ G a d {\displaystyle \Delta G_{ad}} , can be determined from the adsorption equilibrium constant: [ 1 ]
Δ G a d = − R T ln ⁡ K a d {\displaystyle \Delta G_{ad}=-RT\ln K_{ad}}
Because Δ G a d {\displaystyle \Delta G_{ad}} is negative for a spontaneous process and positive for a nonspontaneous process, it can be used to understand the tendency for different compounds to adsorb to a surface. In addition, it can be divided into a combination of two components: [ 1 ]
Δ G a d = Δ G p h y s + Δ G c h e m {\displaystyle \Delta G_{ad}=\Delta G_{phys}+\Delta G_{chem}}
which are the Gibbs energies of physisorption and chemisorption , respectively. Many polymer applications, such as those which use polytetrafluoroethylene (PTFE, or Teflon) require the use of a surface with specific physisorption properties toward one type of material, while being firmly adhered in place to a different type of material. Because the physisorption energy is so low for these types of materials, chemisorption is used to form covalent bonds between the polymer coating and the surface of the object (such as a pan) which holds it in place. Because the relative magnitudes of chemisorption processes are generally much greater than magnitudes of physisorption processes, this forms a strong bond between the polymer and the surface it is chemically adhered to, while allowing the polymer to retain its physisorption characteristics toward other materials. [ 10 ]
Experimentally, the enthalpy and entropy of adsorption are often used to fine-tune the adsorption properties of a material. The enthalpy of adsorption can be determined from constant pressure calorimetry: [ 1 ]
Δ H a d = q p {\displaystyle \Delta H_{ad}=q_{p}}
where q p is the heat exchanged during the adsorption process, measured at constant pressure.
From the enthalpy of adsorption, the entropy of adsorption can be calculated:
Δ S a d = Δ H a d − Δ G a d T {\displaystyle \Delta S_{ad}={\frac {\Delta H_{ad}-\Delta G_{ad}}{T}}}
where T is the absolute temperature at which adsorption occurs.
Together, these are used to understand the driving forces behind adsorption processes.
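The sketch below ties these quantities together for a hypothetical adsorption process; the equilibrium constant and calorimetric enthalpy are assumed values chosen only to show the arithmetic.

```python
# Relating the adsorption equilibrium constant to the Gibbs energy and extracting
# the entropy of adsorption from a calorimetric enthalpy (a minimal sketch;
# K_ad and the enthalpy are assumed values).
import math

R = 8.314    # gas constant, J mol^-1 K^-1
T = 298.15   # temperature, K

K_ad = 5.0e4                   # assumed adsorption equilibrium constant
dG = -R * T * math.log(K_ad)   # Gibbs energy of adsorption, J/mol
dH = -40.0e3                   # assumed enthalpy of adsorption from calorimetry, J/mol
dS = (dH - dG) / T             # entropy of adsorption, J mol^-1 K^-1

print(round(dG / 1000, 1), round(dS, 1))  # ~ -26.8 kJ/mol, ~ -44.2 J/(mol K)
```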
Protein adsorption influences the interactions that occur at the tissue-implant interface. Protein adsorption can lead to blood clots, the foreign-body response and ultimately the degradation of the device. In order to counteract the effects of protein adsorption, implants are often coated with a polymer coating to decrease protein adsorption.
Polyethylene glycol (PEG) coatings have been shown to minimize protein adsorption in the body. The PEG coating consists of hydrophilic molecules that resist protein adsorption. [ 11 ] Proteins contain hydrophobic regions and charged sites that tend to bind to other hydrophobic molecules and oppositely charged sites. [ 12 ] By applying a thin monolayer coating of PEG, protein adsorption is prevented at the device site. Furthermore, the device's resistance to protein adsorption, fibroblast adhesion and bacterial adhesion is increased. [ 13 ]
The hemocompatibility of a medical device depends upon surface charge, energy and topography. [ 14 ] Devices that fail to be hemocompatible risk thrombus formation, proliferation and compromise of the immune system. Polymer coatings are applied to devices to increase their hemocompatibility. Chemical cascades lead to the formation of fibrous clots. By choosing hydrophilic polymer coatings, protein adsorption decreases and the chance of negative interactions with the blood diminishes as well. One such polymer coating that increases hemocompatibility is heparin . Heparin is a polymer coating that interacts with thrombin to prevent coagulation. Heparin has been shown to suppress platelet adhesion, complement activation and protein adsorption. [ 13 ]
Advanced polymer composites are used in the strengthening and rehabilitation of old structures. These advanced composites can be made using many different methods, including prepreg layup, resin infusion , filament winding and pultrusion . Advanced polymer composites are used in many airplane structures, and their largest market is in aerospace and defense.
Fiber-reinforced polymers (FRP) are commonly used by civil engineers in their structures. FRPs respond linear-elastically to axial stress , making them well suited to carrying loads. FRPs are usually in a laminate formation, with each lamina having unidirectional fibers, typically carbon or glass, embedded within a layer of light polymer matrix material. FRPs have excellent resistance to environmental exposure and high durability.
Polytetrafluoroethylene (PTFE) is a polymer used in many applications, including non-stick coatings, beauty products, and lubricants. PTFE is a hydrophobic molecule composed of carbon and fluorine. Its carbon-fluorine bonds make PTFE a low-friction material that is stable in high-temperature environments and resistant to stress cracking. [ 15 ] These properties make PTFE non-reactive and suitable for a wide array of applications. | https://en.wikipedia.org/wiki/Polymer_adsorption
Polymer architecture in polymer science relates to the way branching leads to a deviation from a strictly linear polymer chain. [ 1 ] Branching may occur randomly or reactions may be designed so that specific architectures are targeted. [ 1 ] It is an important microstructural feature. A polymer's architecture affects many of its physical properties including solution viscosity, melt viscosity, solubility in various solvents, glass transition temperature and the size of individual polymer coils in solution.
Branches can form when the growing end of a polymer molecule attaches either (a) back onto itself or (b) onto another polymer chain; both routes, via abstraction of a hydrogen, can create a mid-chain growth site.
Branching can be quantified by the branching index .
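One common way of quantifying branching, sometimes used as the branching index, is the ratio g of the mean-square radius of gyration of the branched chain to that of a linear chain of the same molecular weight. The radii in the sketch below are assumed values, used only to show the arithmetic.

```python
# Branching index g = <s^2>_branched / <s^2>_linear at equal molecular weight.
# Values are assumed, for illustration only; g <= 1, and smaller g means a more compact (more branched) chain.
s2_branched = 45.0   # nm^2, mean-square radius of gyration of the branched sample (assumed)
s2_linear = 80.0     # nm^2, mean-square radius of gyration of a linear chain of the same mass (assumed)

g = s2_branched / s2_linear
print(f"branching index g = {g:.2f}")
```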
An effect related to branching is chemical crosslinking - the formation of covalent bonds between chains. Crosslinking tends to increase T g and to increase strength and toughness. Among other applications, this process is used to strengthen rubbers in a process known as vulcanization, which is based on crosslinking by sulfur. Car tires, for example, are highly crosslinked to reduce the leaking of air out of the tire and to improve their durability. Eraser rubber, on the other hand, is not crosslinked, which allows the rubber to flake and prevents damage to the paper. Polymerization of pure sulfur also explains why molten sulfur becomes more viscous at elevated temperatures. [ 2 ]
A polymer molecule with a high degree of crosslinking is referred to as a polymer network. [ 3 ] A sufficiently high crosslink to chain ratio may lead to the formation of a so-called infinite network or gel, in which each chain is connected to at least one other. [ 4 ]
With the continual development of living polymerization , the synthesis of polymers with specific architectures has become increasingly facile. Architectures such as star polymers , comb polymers, brush polymers , dendronized polymers , dendrimers and ring polymers are possible. Complex architecture polymers can be synthesized either with the use of specially tailored starting compounds or by first synthesising linear chains which undergo further reactions to become connected together. Knotted polymers consist of multiple intramolecular cyclization units within a single polymer chain. Linear polymers may also fold into topological circuits , formally classified by their contact topology. [ 5 ]
In general, the higher the degree of branching, the more compact the polymer chain.
Branching also affects chain entanglement, the ability of chains to slide past one another, in turn affecting the bulk physical properties. Long chain branches may increase polymer strength, toughness, and the glass transition temperature (T g ) due to an increase in the number of entanglements per chain. A random and short chain length between branches, on the other hand, may reduce polymer strength due to disruption of the chains' ability to interact with each other or crystallize.
An example of the effect of branching on physical properties can be found in polyethylene. High-density polyethylene (HDPE) has a very low degree of branching, is relatively stiff, and is used in applications such as bullet-proof vests. Low-density polyethylene (LDPE), on the other hand, has significant numbers of both long and short branches, is relatively flexible, and is used in applications such as plastic films.
Dendrimers are a special case of branched polymer where every monomer unit is also a branch point. This tends to reduce intermolecular chain entanglement and crystallization. A related architecture, the dendritic polymer, is not perfectly branched but shares similar properties with dendrimers due to its high degree of branching.
The degree of branching that occurs during polymerisation can be influenced by the functionality of the monomers that are used. [ 6 ] For example, in a free radical polymerisation of styrene , addition of divinylbenzene , which has a functionality of 2, will result in the formation of branched polymer. | https://en.wikipedia.org/wiki/Polymer_architecture |
Main chain or backbone : the linear chain to which all other chains, long or short or both, may be regarded as being pendant.
Note : Where two or more chains could equally be considered to be the main chain, that one is selected which leads to the simplest representation of the molecule. [ 1 ]
In polymer science , the polymer chain or simply backbone of a polymer is the main chain of a polymer. Polymers are often classified according to the elements in the main chains. The character of the backbone, i.e. its flexibility, determines the properties of the polymer (such as the glass transition temperature). For example, in polysiloxanes (silicone), the backbone chain is very flexible, which results in a very low glass transition temperature of −123 °C (−189 °F; 150 K). [ 2 ] Polymers with rigid backbones are prone to crystallization (e.g. polythiophenes ) in thin films and in solution . Crystallization in turn affects the optical properties of the polymer, its optical band gap and its electronic levels. [ 3 ]
Common synthetic polymers have main chains composed of carbon, i.e. C-C-C-C.... Examples include polyolefins such as polyethylene ((CH 2 CH 2 ) n ) and many substituted derivatives ((CH 2 CH(R)) n ) such as polystyrene (R = C 6 H 5 ), polypropylene (R = CH 3 ), and acrylates (R = CO 2 R').
Other major classes of organic polymers are polyesters and polyamides . They have respectively -C(O)-O- and -C(O)-NH- groups in their backbones in addition to chains of carbon. Major commercial products are polyethylene terephthalate ("PET"), ((C 6 H 4 CO 2 C 2 H 4 OC(O)) n ) and nylon-6 ((NH(CH 2 ) 5 C(O)) n ).
Siloxanes are a premier example of an inorganic polymer, even though they have extensive organic substituents. Their backbone is composed of alternating silicon and oxygen atoms, i.e. Si-O-Si-O... The silicon atoms bear two substituents, usually methyl as in the case of polydimethylsiloxane . Some uncommon but illustrative inorganic polymers include polythiazyl ((SN) x ) with alternating S and N atoms, and polyphosphates ((PO 3 − ) n ).
Major families of biopolymers are polysaccharides (carbohydrates), peptides , and polynucleotides . Many variants of each are known. [ 4 ]
Proteins are characterized by amide linkages (-N(H)-C(O)-) formed by the condensation of amino acids . The sequence of the amino acids in the polypeptide backbone is known as the primary structure of the protein. Like almost all polymers, proteins fold and twist, forming the secondary structure , which is rigidified by hydrogen bonding between the carbonyl oxygens and amide hydrogens in the backbone, i.e. C=O---HN. Further interactions between residues of the individual amino acids form the protein's tertiary structure . For this reason, the primary structure of the amino acids in the polypeptide backbone is the map of the final structure of a protein, and it therefore indicates its biological function. [ 5 ] [ 4 ] Spatial positions of backbone atoms can be reconstructed from the positions of alpha carbons using computational tools for backbone reconstruction. [ 6 ]
Carbohydrates arise by condensation of monosaccharides such as glucose . The polymers can be classified into oligosaccharides (up to 10 residues) and polysaccharides (up to about 50,000 residues). The backbone chain is characterized by an ether bond between individual monosaccharides. This bond is called the glycosidic linkage . [ 7 ] These backbone chains can be unbranched (containing one linear chain) or branched (containing multiple chains). The glycosidic linkages are designated as alpha or beta depending on the relative stereochemistry of the anomeric (or most oxidized ) carbon. In a Fischer Projection , if the glycosidic linkage is on the same side or face as carbon 6 of a common biological saccharide, the carbohydrate is designated as beta and if the linkage is on the opposite side it is designated as alpha . In a traditional " chair structure " projection, if the linkage is on the same plane (equatorial or axial) as carbon 6 it is designated as beta and on the opposite plane it is designated as alpha . This is exemplified in sucrose (table sugar) which contains a linkage that is alpha to glucose and beta to fructose . Generally, carbohydrates which our bodies break down are alpha -linked (example: glycogen) and those which have structural function are beta -linked (example: cellulose ). [ 4 ] [ 8 ]
Deoxyribonucleic acid (DNA) and ribonucleic acid (RNA) are the main examples of polynucleotides . They arise by condensation of nucleotides. Their backbones form by the condensation of a hydroxy group on a ribose with the phosphate group on another ribose. This linkage is called a phosphodiester bond . The condensation is catalyzed by enzymes called polymerases . DNA and RNA can be millions of nucleotides long thus allowing for the genetic diversity of life. The bases project from the pentose-phosphate polymer backbone and are hydrogen bonded in pairs to their complementary partners (A with T and G with C). This creates a double helix with pentose phosphate backbones on either side, thus forming a secondary structure . [ 9 ] [ 4 ] [ 10 ] | https://en.wikipedia.org/wiki/Polymer_backbone |
Polymer banknotes are banknotes made from a synthetic polymer such as biaxially oriented polypropylene (BOPP) . Such notes incorporate many security features not available in paper banknotes, including the use of metameric inks . [ 1 ] Polymer banknotes last significantly longer than paper notes, causing a decrease in environmental impact and a reduced cost of production and replacement. [ 2 ] Modern polymer banknotes were developed by the Reserve Bank of Australia (RBA) , Commonwealth Scientific and Industrial Research Organisation (CSIRO) and The University of Melbourne . They were first issued as currency in Australia during 1988 (coinciding with Australia's bicentennial year); by 1996, the Australian dollar was switched completely to polymer banknotes. Romania was the first country in Europe to issue a plastic note in 1999 and became the third country after Australia and New Zealand to fully convert to polymer by 2003.
Other currencies that have been switched completely to polymer banknotes include: the Vietnamese đồng (2006) although this is only applied to banknotes with denominations above 5,000 đồng, the Brunei dollar (2006), the Nigerian Naira (2007), the Papua New Guinean kina (2008), the Canadian dollar (2013), the Maldivian rufiyaa (2017), the Mauritanian ouguiya (2017), the Nicaraguan córdoba (2017), the Vanuatu vatu (2017), the Eastern Caribbean dollar (2019), the pound sterling (2021) and the Barbadian dollar (2022). Several countries and regions have introduced polymer banknotes into commemorative or general circulation, including: Nigeria , Cape Verde , Chile , The Gambia , Trinidad and Tobago , Vietnam , Mexico , Taiwan , Singapore , Malaysia , Botswana , São Tomé and Príncipe , North Macedonia , Russia , Solomon Islands , Samoa , Morocco , Albania , Sri Lanka , Hong Kong , Israel , China , Kuwait , Mozambique , Saudi Arabia , Isle of Man , Guatemala , Haiti , Jamaica , Libya , Mauritius , Costa Rica , Honduras , Angola , Namibia , Lebanon , the Philippines , Egypt , the United Arab Emirates , Samoa , Thailand and Bermuda .
In the 1980s, Canadian engineering company AGRA Vadeko and US chemical company Mobil Chemical Company developed a polymer substrate trademarked as DuraNote. It was tested by the Bank of Canada in the 1980s and 1990s; test C$ 20 and C$ 50 banknotes were auctioned in October 2012. [ 3 ] [ 4 ] It was also tested by the Bureau of Engraving and Printing of the United States Department of the Treasury in 1997 and 1998, when 40,000 test banknotes were printed and evaluated; it was further evaluated by the central banks of 28 countries. [ 3 ]
Polymer banknotes usually have three levels of security devices. Primary security devices are easily recognisable by consumers and may include intaglio , metal strips, and the clear areas of the banknote. Secondary security devices are detectable by a machine. Tertiary security devices may only be detectable by the issuing authority when a banknote is returned. [ 5 ]
Modern polymer banknotes were first developed by the Reserve Bank of Australia (RBA) and the Commonwealth Scientific and Industrial Research Organisation or CSIRO and first issued as currency in Australia during 1988, to coincide with Australia's bicentennial year. [ 6 ]
In August 2012, Nigeria's Central Bank attempted the switch back from polymer to paper banknotes, [ 7 ] saying there were "significant difficulties associated with the processing and destruction of the polymer banknotes" which had "constrained the realisation of the benefits expected from polymer banknotes over paper notes". [ 8 ] However, President Goodluck Jonathan halted the process in September 2012. [ 9 ]
The polymer notes in the Republic of Mauritius are available in values of ₨ 25, ₨ 50, ₨ 500 and ₨ 2,000. In December 2024, the Bank of Mauritius announced that ₨ 100, ₨ 200 and ₨ 1,000 polymer banknotes will also be issued. The Fiji FJ$ 5 was issued [ 10 ] in April 2013.
In the United Kingdom , the first polymer banknotes were issued by the Northern Bank in Northern Ireland in 2000; these were a special commemorative issue bearing an image of the Space Shuttle . [ Note 1 ] In March 2015, the Clydesdale Bank in Scotland began to issue polymer Sterling £5 notes marking the 125th anniversary of the building of the Forth Bridge . [ 11 ] These were the first polymer notes to enter general circulation in the UK. [ 12 ] The Royal Bank of Scotland followed in 2016 with a new issue of plastic £5 notes illustrated with a picture of author Nan Shepherd . [ 13 ] In September 2016, the Bank of England began to issue £5 polymer notes with a picture of Winston Churchill ; and in 2017 a polymer £10 began replacing its paper equivalent, featuring a picture of the author Jane Austen . A polymer £20 was issued in 2020 with a picture of J.M.W. Turner , and the £50 note was released in 2021, featuring Alan Turing . Although the polymer Bank of England notes are 15% smaller than the older, paper issue, they bear a similar design. [ 14 ] [ 15 ] Some businesses operating in the UK cash industry have opposed the switch to polymer, citing a lack of research into the cost impact of its introduction. [ 16 ] In December 2022, following the death of Queen Elizabeth II , the Bank of England unveiled the design of a new series of banknotes featuring King Charles III . The rest of the design, however, is unchanged, with the exception of a slight alteration in colour. [ 17 ]
In the Philippines, it was proposed in 2009 to shift to the usage of polymer for Philippine peso banknotes. This did not push through due to concerns over the impact the shift would have on the country's abaca industry. The proposal was revived in 2021 during the COVID-19 pandemic , since polymer banknotes can be sanitized with less damage compared to paper banknotes, as well as for other reasons such as durability, lower average issue cost, and lower susceptibility to counterfeiting. In April 2022, the Bangko Sentral ng Pilipinas officially released the 1000 peso polymer banknote into circulation. [ 18 ] In December 2024, the BSP (Central Bank of the Philippines) announced that it will issue polymer notes in the denominations of 500, 100, and 50 pesos in the first quarter of 2025.
Although the 20 peso note carries the updated logo and the signature of the current president , there are no plans for a 20 peso polymer note because the denomination is gradually being shifted to a coin. There are also no plans for a 200 peso polymer banknote due to low demand. [ 19 ] | https://en.wikipedia.org/wiki/Polymer_banknote
In materials science , a polymer blend , or polymer mixture , is a member of a class of materials analogous to metal alloys , in which at least two polymers are blended together to create a new material with different physical properties. [ 1 ]
During the 1940s, '50s and '60s, the commercial development of new monomers for the production of new polymers seemed endless. In this period, it was discovered that developing new techniques for the modification of already existing polymers would be economically viable.
The first modification technique developed was copolymerization, in other words, the joint polymerization of more than one kind of monomer.
A new polymer modification process, based on a simple mechanical mixture of two polymers, first appeared when Thomas Hancock created a mixture of natural rubber with gutta-percha . This process generated a new polymer class called "polymer blends."
Polymer blends can be broadly divided into three categories:
The use of the term polymer alloy for a polymer blend is discouraged, as the former term includes multiphase copolymers but excludes incompatible polymer blends. [ 3 ]
Examples of miscible polymer blends:
Polymer blends can be used as thermoplastic elastomers . | https://en.wikipedia.org/wiki/Polymer_blend
In materials science , a polymer brush is the name given to a surface coating consisting of polymers tethered to a surface. [ 1 ] The brush may be either in a solvated state, where the tethered polymer layer consists of polymer and solvent , or in a melt state, where the tethered chains completely fill up the space available. These polymer layers can be tethered to flat substrates such as silicon wafers, or highly curved substrates such as nanoparticles . Also, polymers can be tethered in high density to another single polymer chain, although this arrangement is normally named a bottle brush . [ 2 ] Additionally, there is a separate class of polyelectrolyte brushes, when the polymer chains themselves carry an electrostatic charge .
The brushes are often characterized by the high density of grafted chains. The limited space then leads to a strong extension of the chains. Brushes can be used to stabilize colloids , reduce friction between surfaces, and to provide lubrication in artificial joints . [ 3 ]
Polymer brushes have been modeled with molecular dynamics , [ 2 ] Monte Carlo methods , [ 4 ] Brownian dynamics simulations, [ 5 ] and molecular theories. [ 6 ]
Polymer molecules within a brush are stretched away from the attachment surface because they repel each other (steric repulsion or osmotic pressure). More precisely, [ 7 ] they are more strongly stretched near the attachment point and unstretched at the free end.
More precisely, within the approximation derived by Milner, Witten, Cates, [ 7 ] the average density of all monomers in a given chain is always the same up to a prefactor:
ϕ ( z , ρ ) = ∂ n ∂ z {\displaystyle \phi (z,\rho )={\frac {\partial n}{\partial z}}}
n ( z , ρ ) = 2 N π arcsin ( z ρ ) {\displaystyle n(z,\rho )={\frac {2N}{\pi }}\arcsin \left({\frac {z}{\rho }}\right)}
where ρ {\displaystyle \rho } is the altitude of the end monomer and N {\displaystyle N} the number of monomers per chain.
The averaged density profile ϵ ( ρ ) {\displaystyle \epsilon (\rho )} of the end monomers of all attached chains, convoluted with the above density profile for one chain, determines the density profile of the brush as a whole:
ϕ ( z ) = ∫ z ∞ ∂ n ( z , ρ ) ∂ z ϵ ( ρ ) d ρ {\displaystyle \phi (z)=\int _{z}^{\infty }{\frac {\partial n(z,\rho )}{\partial z}}\,\epsilon (\rho )\,{\rm {d}}\rho }
A dry brush has a uniform monomer density up to some altitude H {\displaystyle H} . One can show [ 8 ] that the corresponding end monomer density profile is given by:
ϵ d r y ( ρ , H ) = ρ / H N a 1 − ρ 2 / H 2 {\displaystyle \epsilon _{\rm {dry}}(\rho ,H)={\frac {\rho /H}{Na{\sqrt {1-\rho ^{2}/H^{2}}}}}}
where a {\displaystyle a} is the monomer size.
The above monomer density profile n ( z , ρ ) {\displaystyle n(z,\rho )} for one single chain minimizes the total elastic energy of the brush,
U = ∫ 0 ∞ ϵ ( ρ ) d ρ ∫ 0 N d n k T 2 N a 2 ( ∂ z ( n , ρ ) ∂ n ) 2 {\displaystyle U=\int _{0}^{\infty }\epsilon (\rho )\,{\rm {d}}\rho \,\int _{0}^{N}\,{\rm {d}}n\,{\frac {kT}{2Na^{2}}}\left({\frac {\partial z(n,\rho )}{\partial n}}\right)^{2}}
regardless of the end monomer density profile ϵ ( ρ ) {\displaystyle \epsilon (\rho )} , as shown in [ 9 ] [ 10 ] .
As a consequence, [ 10 ] the structure of any brush can be derived from the brush density profile ϕ ( z ) {\displaystyle \phi (z)} . Indeed, the free end distribution is simply a convolution of the density profile with the free end distribution of a dry brush:
ϵ ( ρ ) = ∫ ρ ∞ − d ϕ ( H ) d H ϵ d r y ( ρ , H ) {\displaystyle \epsilon (\rho )=\int _{\rho }^{\infty }-{\frac {{\rm {d}}\phi (H)}{{\rm {d}}H}}\epsilon _{\rm {dry}}(\rho ,H)} .
Correspondingly, the brush elastic free energy is given by:
F e l k T = π 2 24 N 2 a 5 ∫ 0 ∞ { − z 3 d ϕ ( z ) d z } d z {\displaystyle {\frac {F_{\rm {el}}}{kT}}={\frac {\pi ^{2}}{24N^{2}a^{5}}}\int _{0}^{\infty }\left\{-z^{3}{\frac {{\rm {d}}\phi (z)}{{\rm {d}}z}}\right\}{\rm {d}}z} .
This method has been used to derive wetting properties of polymer melts on polymer brushes of the same species [ 10 ] and to understand fine interpenetration asymmetries between copolymer lamellae [ 11 ] that may yield very unusual non-centrosymmetric lamellar structures . [ 12 ]
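As a rough numerical illustration of the relations above, the sketch below recovers the end-monomer distribution ε(ρ) and the elastic free energy F_el for a smoothed step-like brush profile φ(z). The chain length, monomer size and profile parameters are arbitrary assumptions, and the integrals are evaluated with naive sums (the kernel has an integrable singularity at H = ρ); this is only meant to show how ε(ρ) and F_el follow from φ(z), not a production calculation.

```python
import numpy as np

# Assumed, illustrative parameters (not taken from the article)
N, a = 100, 1.0                  # monomers per chain, monomer size
phi0, H0, w = 0.5, 50.0, 2.0     # plateau density, nominal brush height, interface width

z = np.linspace(1e-3, 120.0, 4000)
dz = z[1] - z[0]
phi = phi0 / (1.0 + np.exp((z - H0) / w))   # smoothed step profile phi(z)
minus_dphi = -np.gradient(phi, z)           # -d(phi)/dH, the weight in the convolution

def eps_dry(rho, H):
    """End-monomer profile of a dry brush of height H (Milner-Witten-Cates form)."""
    return (rho / H) / (N * a * np.sqrt(1.0 - (rho / H) ** 2))

# epsilon(rho) as a convolution of -dphi/dH with the dry-brush kernel
eps = np.zeros_like(z)
for i, rho in enumerate(z):
    mask = z > rho              # integrate over H > rho only (kernel vanishes otherwise)
    eps[i] = np.sum(minus_dphi[mask] * eps_dry(rho, z[mask])) * dz

# Elastic free energy per unit area, F_el / kT
F_el = (np.pi ** 2 / (24 * N ** 2 * a ** 5)) * np.sum(-(z ** 3) * np.gradient(phi, z)) * dz

print("peak of end-monomer distribution near z =", z[np.argmax(eps)])
print("F_el / kT per unit area =", F_el)
```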
Polymer brushes can be used in Area-selective deposition. [ 13 ] Area-selective deposition is a promising technique for positional self-alignment of materials at a prepatterned surface. | https://en.wikipedia.org/wiki/Polymer_brush |
Polymer characterization is the analytical branch of polymer science .
The discipline is concerned with the characterization of polymeric materials on a variety of levels. The characterization typically has as a goal to improve the performance of the material. As such, many characterization techniques should ideally be linked to the desirable properties of the material such as strength, impermeability, thermal stability, and optical properties. [ 1 ]
Characterization techniques are typically used to determine molecular mass , molecular structure, molecular morphology , thermal properties, and mechanical properties. [ 2 ]
The molecular mass of a polymer differs from that of typical molecules in that polymerization reactions produce a distribution of molecular weights and shapes. The distribution of molecular masses can be summarized by the number-average molecular weight, weight-average molecular weight, and polydispersity . Some of the most common methods for determining these parameters are colligative property measurements, static light scattering techniques, viscometry , and size exclusion chromatography .
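A minimal sketch of how these distribution averages are defined, using a made-up discrete distribution of chain masses; the masses and chain counts below are assumptions for illustration only.

```python
# Number-average (Mn), weight-average (Mw) molecular weight and dispersity (PDI)
# for a hypothetical discrete distribution of chains.
masses  = [10_000, 20_000, 50_000, 100_000]   # g/mol, assumed
numbers = [200, 500, 250, 50]                 # number of chains at each mass, assumed

Mn = sum(n * M for n, M in zip(numbers, masses)) / sum(numbers)
Mw = sum(n * M * M for n, M in zip(numbers, masses)) / sum(n * M for n, M in zip(numbers, masses))
PDI = Mw / Mn   # always >= 1; equals 1 only for a perfectly uniform polymer

print(f"Mn = {Mn:.0f} g/mol, Mw = {Mw:.0f} g/mol, PDI = {PDI:.2f}")
```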
Gel permeation chromatography , a type of size exclusion chromatography, is an especially useful technique used to directly determine the molecular weight distribution parameters based on the polymer's hydrodynamic volume . Gel permeation chromatography is often used in combination with multi-angle light scattering (MALS), Low-angle laser light scattering (LALLS) and/or viscometry for an absolute determination (i.e., independent of the chromatographic separation details) of the molecular weight distribution as well as the branching ratio and degree of long chain branching of a polymer, provided a suitable solvent can be found. [ 3 ]
Molar mass determination of copolymers is a much more complicated procedure. The complications arise from the effect of solvent on the homopolymers and how this can affect the copolymer morphology. Analysis of copolymers typically requires multiple characterization methods. For instance, copolymers with short chain branching such as linear low-density polyethylene (a copolymer of ethylene and a higher alkene such as hexene or octene) require the use of Analytical Temperature Rising Elution Fractionation (ATREF) techniques. These techniques can reveal how the short chain branches are distributed over the various molecular weights. A more efficient analysis of copolymer molecular mass and composition is possible using GPC combined with a triple-detection system comprising multi-angle light scattering , UV absorption and differential refractometry, if the copolymer is composed of two base polymers that provide different responses to UV and/or refractive index. [ 4 ]
Many of the analytical techniques used to determine the molecular structure of unknown organic compounds are also used in polymer characterization. Spectroscopic techniques such as ultraviolet-visible spectroscopy , infrared spectroscopy , Raman spectroscopy , nuclear magnetic resonance spectroscopy , electron spin resonance spectroscopy , X-ray diffraction , and mass spectrometry are used to identify common functional groups.
Polymer morphology is a microscale property that is largely dictated by the amorphous or crystalline portions of the polymer chains and their influence on each other. Microscopy techniques are especially useful in determining these microscale properties, as the domains created by the polymer morphology are large enough to be viewed using modern microscopy instruments. Some of the most common microscopy techniques used are X-ray diffraction , Transmission Electron Microscopy , Scanning Transmission Electron Microscopy , Scanning Electron Microscopy , and Atomic Force Microscopy .
Polymer morphology on a mesoscale (nanometers to micrometers) is particularly important for the mechanical properties of many materials. Transmission Electron Microscopy in combination with staining techniques, as well as Scanning Electron Microscopy and Scanning probe microscopy , are important tools for optimizing the morphology of materials like polybutadiene - polystyrene polymers and many polymer blends.
X-ray diffraction is generally not as powerful for this class of materials, as they are either amorphous or poorly crystallized. Small-angle scattering techniques such as Small-angle X-ray scattering (SAXS) can be used to measure the long periods of semicrystalline polymers.
A true workhorse for polymer characterization is thermal analysis , particularly Differential scanning calorimetry . Changes in the compositional and structural parameters of the material usually affect its melting transitions or glass transitions and these in turn can be linked to many performance parameters. For semicrystalline polymers it is an important method to measure crystallinity. Thermogravimetric analysis can also give an indication of polymer thermal stability and the effects of additives such as flame retardants.
Other thermal analysis techniques are typically combinations of the basic techniques and include differential thermal analysis , thermomechanical analysis , dynamic mechanical thermal analysis, and dielectric thermal analysis .
Dynamic mechanical spectroscopy and dielectric spectroscopy are essentially extensions of thermal analysis that can reveal more subtle transitions with temperature as they affect the complex modulus or the dielectric function of the material.
The characterization of mechanical properties in polymers typically refers to a measure of the strength, elasticity, viscoelasticity , and anisotropy of a polymeric material. The mechanical properties of a polymer are strongly dependent upon the Van der Waals interactions of the polymer chains, and the ability of the chains to elongate and align in the direction of the applied force. Other phenomena, such as the propensity of polymers to form crazes can impact the mechanical properties. Typically, polymeric materials are characterized as elastomers, plastics, or rigid polymers depending on their mechanical properties. [ 5 ]
The tensile strength , yield strength , and Young's modulus are measures of strength and elasticity, and are of particular interest for describing the stress-strain properties of polymeric materials. These properties can be measured through tensile testing. [ 6 ] For crystalline or semicrystalline polymers, anisotropy plays a large role in the mechanical properties of the polymer. [ 7 ] The crystallinity of the polymer can be measured through differential scanning calorimetry . [ 8 ] For amorphous and semicrystalline polymers, as stress is applied, the polymer chains are able to disentangle and align. If the stress is applied in the direction of chain alignment, the polymer chains will exhibit a higher yield stress and strength, as the covalent bonds connecting the backbone of the polymer absorb the stress. However, if the stress is applied normal to the direction of chain alignment, the Van der Waals interactions between chains will primarily be responsible for the mechanical properties and thus the yield stress will decrease. [ 9 ] This would be observable in a stress-strain graph obtained through tensile testing. Sample preparation for tensile tests, including chain orientation within the sample, can therefore play a large role in the observed mechanical properties.
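As a simple illustration, the Young's modulus can be estimated as the slope of the initial, linear-elastic portion of a stress-strain curve obtained from a tensile test; the data points below are invented for illustration only.

```python
import numpy as np

# Hypothetical small-strain tensile data (strain is dimensionless, stress in MPa)
strain = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.010])
stress = np.array([0.0,   4.1,   8.0,  12.2,  15.9,  20.1])

# Young's modulus = slope of the linear-elastic region (least-squares fit through the data)
E = np.polyfit(strain, stress, 1)[0]
print(f"Young's modulus ~ {E:.0f} MPa ~ {E/1000:.2f} GPa")
```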
The fracture properties of crystalline and semicrystalline polymers can be evaluated with Charpy impact testing . Charpy tests, which can also be used with alloy systems, are performed by creating a notch in the sample and then using a pendulum to fracture the sample at the notch. The pendulum's motion can be used to determine the energy absorbed by the sample as it fractures. Charpy tests can also be used to evaluate the effect of strain rate on fracture, which is varied by changing the pendulum mass. Typically, only brittle and somewhat ductile polymers are evaluated with Charpy tests. In addition to the fracture energy, the type of break can be visually evaluated, as in whether the break was a total fracture of the sample or whether only part of the sample fractured and severely deformed sections remain connected. Elastomers are typically not evaluated with Charpy tests because their high yield strain interferes with the Charpy test results. [ 10 ]
There are many properties of polymeric materials that influence their mechanical properties. As the degree of polymerization goes up, so does the polymer's strength, as longer chains have stronger Van der Waals interactions and more chain entanglement. Long polymers can entangle, which leads to a subsequent increase in bulk modulus. [ 11 ] Crazes are small cracks that form in a polymer matrix but are stopped by small defects in the matrix. These defects are typically made up of a second, low-modulus polymer that is dispersed throughout the primary phase. The crazes can increase the strength and decrease the brittleness of a polymer by allowing the small cracks to absorb higher stress and strain without leading to fracture. If crazes are allowed to propagate or coalesce, they can lead to cavitation and fracture in the sample. [ 12 ] [ 13 ] Crazes can be seen with transmission electron microscopy and scanning electron microscopy, and are typically engineered into a polymeric material during synthesis. Crosslinking, typically seen in thermoset polymers, can also increase the modulus, yield stress, and yield strength of a polymer. [ 14 ]
Dynamic mechanical analysis (DMA) is the most common technique used to characterize the viscoelastic behavior common in many polymeric systems. [ 15 ] DMA is also an important tool for understanding the temperature dependence of polymers' mechanical behavior. It is a characterization technique used to measure storage modulus and glass transition temperature, confirm crosslinking, determine switching temperatures in shape-memory polymers, monitor cures in thermosets, and determine molecular weight. An oscillating force is applied to a polymer sample and the sample's response is recorded. DMA documents the lag between the applied force and the deformation recovery in the sample. Viscoelastic samples exhibit a sinusoidal modulus called the dynamic modulus . Both the energy recovered and the energy lost are considered during each deformation and are described quantitatively by the storage modulus (E') and the loss modulus (E'') respectively. The applied stress and the strain on the sample exhibit a phase difference, δ, which is measured over time. A new modulus is calculated each time stress is applied to the material, so DMA is used to study changes in modulus at various temperatures or stress frequencies. [ 16 ]
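The quantities described above are related in a simple way: for a sinusoidal deformation the storage modulus E', the loss modulus E'' and the loss factor tan δ follow from the stress and strain amplitudes and the phase lag δ. The amplitudes and phase angle below are assumed values, used only to show the arithmetic.

```python
import math

# Assumed DMA readings for a single frequency/temperature point
sigma_0 = 2.0e6              # stress amplitude, Pa (assumed)
epsilon_0 = 0.001            # strain amplitude, dimensionless (assumed)
delta = math.radians(12.0)   # phase lag between stress and strain (assumed)

E_star = sigma_0 / epsilon_0          # magnitude of the complex (dynamic) modulus
E_storage = E_star * math.cos(delta)  # E', elastic (recovered) part
E_loss = E_star * math.sin(delta)     # E'', viscous (dissipated) part
tan_delta = E_loss / E_storage        # damping factor

print(f"E' = {E_storage/1e9:.2f} GPa, E'' = {E_loss/1e9:.3f} GPa, tan(delta) = {tan_delta:.3f}")
```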
Other techniques include viscometry , rheometry , and pendulum hardness . | https://en.wikipedia.org/wiki/Polymer_characterization |
Polymer chemistry is a sub-discipline of chemistry that focuses on the structures, chemical synthesis , and chemical and physical properties of polymers and macromolecules . The principles and methods used within polymer chemistry are also applicable through a wide range of other chemistry sub-disciplines like organic chemistry , analytical chemistry , and physical chemistry . Many materials have polymeric structures, from fully inorganic metals and ceramics to DNA and other biological molecules . However, polymer chemistry is typically related to synthetic and organic compositions . Synthetic polymers are ubiquitous in commercial materials and products in everyday use, such as plastics , and rubbers , and are major components of composite materials. Polymer chemistry can also be included in the broader fields of polymer science or even nanotechnology , both of which can be described as encompassing polymer physics and polymer engineering . [ 1 ] [ 2 ] [ 3 ] [ 4 ]
The work of Henri Braconnot in 1832 and the work of Christian Schönbein in 1846 led to the discovery of nitrocellulose , which, when treated with camphor , produced celluloid . Dissolved in ether or acetone , it becomes collodion , which has been used as a wound dressing since the U.S. Civil War . Cellulose acetate was first prepared in 1865. Between 1834 and 1844, the properties of rubber ( polyisoprene ) were found to be greatly improved by heating with sulfur , thus founding the vulcanization process.
In 1884 Hilaire de Chardonnet started the first artificial fiber plant based on regenerated cellulose , or viscose rayon , as a substitute for silk , but it was very flammable. [ 5 ] In 1907 Leo Baekeland invented the first polymer made independent of the products of organisms , a thermosetting phenol - formaldehyde resin called Bakelite . Around the same time, Hermann Leuchs reported the synthesis of amino acid N-carboxyanhydrides and their high molecular weight products upon reaction with nucleophiles, but stopped short of referring to these as polymers, possibly due to the strong views espoused by Emil Fischer , his direct supervisor, denying the possibility of any covalent molecule exceeding 6,000 daltons. [ 6 ] Cellophane was invented in 1908 by Jacques Brandenberger who treated sheets of viscose rayon with acid . [ 7 ]
The chemist Hermann Staudinger first proposed that polymers consisted of long chains of atoms held together by covalent bonds , which he called macromolecules . His work expanded the chemical understanding of polymers and was followed by an expansion of the field of polymer chemistry during which such polymeric materials as neoprene, nylon and polyester were invented. Before Staudinger, polymers were thought to be clusters of small molecules ( colloids ), without definite molecular weights , held together by an unknown force . Staudinger received the Nobel Prize in Chemistry in 1953. Wallace Carothers invented the first synthetic rubber called neoprene in 1931, the first polyester , and went on to invent nylon , a true silk replacement, in 1935. Paul Flory was awarded the Nobel Prize in Chemistry in 1974 for his work on polymer random coil configurations in solution in the 1950s. Stephanie Kwolek developed an aramid , or aromatic nylon named Kevlar , patented in 1966. Karl Ziegler and Giulio Natta received a Nobel Prize for their discovery of catalysts for the polymerization of alkenes . Alan J. Heeger , Alan MacDiarmid , and Hideki Shirakawa were awarded the 2000 Nobel Prize in Chemistry for the development of polyacetylene and related conductive polymers. [ 8 ] Polyacetylene itself did not find practical applications, but organic light-emitting diodes (OLEDs) emerged as one application of conducting polymers. [ 9 ]
Teaching and research programs in polymer chemistry were introduced in the 1940s. An Institute for Macromolecular Chemistry was founded in 1940 in Freiburg, Germany under the direction of Staudinger. In America, a Polymer Research Institute (PRI) was established in 1941 by Herman Mark at the Polytechnic Institute of Brooklyn (now Polytechnic Institute of NYU ).
Polymers are high molecular mass compounds formed by polymerization of monomers . They are synthesized by the polymerization process and can be modified by additives, which change the polymer's mechanical properties, processability, durability and so on. The simple reactive molecule from which the repeating structural units of a polymer are derived is called a monomer. A polymer can be described in many ways: its degree of polymerisation , molar mass distribution , tacticity , copolymer distribution, the degree of branching , by its end-groups , crosslinks , crystallinity and thermal properties such as its glass transition temperature and melting temperature. Polymers in solution have special characteristics with respect to solubility , viscosity , and gelation . Illustrative of the quantitative aspects of polymer chemistry, particular attention is paid to the number-average and weight-average molecular weights M n {\displaystyle M_{n}} and M w {\displaystyle M_{w}} , respectively.
The formation and properties of polymers have been rationalized by many theories including Scheutjens–Fleer theory , Flory–Huggins solution theory , Cossee–Arlman mechanism , Polymer field theory , Hoffman Nucleation Theory , Flory–Stockmayer theory , and many others.
The study of polymer thermodynamics helps improve the material properties of various polymer-based materials such as polystyrene (styrofoam) and polycarbonate . Common improvements include toughening , improving impact resistance , improving biodegradability , and altering a material's solubility . [ 10 ]
As polymers get longer and their molecular weight increases, their viscosity tends to increase. Thus, the measured viscosity of polymers can provide valuable information about the average length of the polymer, the progress of reactions, and the ways in which the polymer branches. [ 11 ]
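One common way this relationship is expressed is the Mark-Houwink equation, [η] = K·M^a, which links the intrinsic viscosity to the (viscosity-average) molecular weight. The constants K and a depend on the particular polymer-solvent-temperature system; the values used below are assumptions chosen only for illustration.

```python
# Mark-Houwink relation: intrinsic viscosity [eta] = K * M**a
# K and a are polymer/solvent/temperature specific; the values here are illustrative only.
K = 1.0e-4   # dL/g (assumed)
a = 0.70     # dimensionless exponent (assumed; roughly 0.5 in a theta solvent, up to ~0.8 in a good solvent)

for M in (1e4, 1e5, 1e6):   # molecular weights in g/mol
    intrinsic_viscosity = K * M ** a
    print(f"M = {M:.0e} g/mol  ->  [eta] ~ {intrinsic_viscosity:.3f} dL/g")
```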
Polymers can be classified in many ways. Polymers, strictly speaking, comprise most solid matter: minerals (i.e. most of the Earth's crust) are largely polymers; metals are 3-d polymers; organisms, living and dead, are composed largely of polymers and water. Often polymers are classified according to their origin:
Biopolymers are the structural and functional materials that comprise most of the organic matter in organisms. One major class of biopolymers are proteins , which are derived from amino acids . Polysaccharides , such as cellulose , chitin , and starch , are biopolymers derived from sugars. The poly nucleic acids DNA and RNA are derived from phosphorylated sugars with pendant nucleotides that carry genetic information.
Synthetic polymers are the structural materials manifested in plastics , synthetic fibers , paints , building materials , furniture , mechanical parts, and adhesives . Synthetic polymers may be divided into thermoplastic polymers and thermoset plastics . Thermoplastic polymers include polyethylene , teflon , polystyrene , polypropylene , polyester , polyurethane , Poly(methyl methacrylate) , polyvinyl chloride , nylons , and rayon . Thermoset plastics include vulcanized rubber , bakelite , Kevlar , and polyepoxide . Almost all synthetic polymers are derived from petrochemicals . | https://en.wikipedia.org/wiki/Polymer_chemistry |
Polymer degradation is the reduction in the physical properties of a polymer , such as strength, caused by changes in its chemical composition. Polymers and particularly plastics are subject to degradation at all stages of their product life cycle , including during their initial processing, use, disposal into the environment and recycling. [ 1 ] The rate of this degradation varies significantly; biodegradation can take decades, whereas some industrial processes can completely decompose a polymer in hours.
Technologies have been developed to both inhibit or promote degradation. For instance, polymer stabilizers ensure plastic items are produced with the desired properties, extend their useful lifespans, and facilitate their recycling. Conversely, biodegradable additives accelerate the degradation of plastic waste by improving its biodegradability . Some forms of plastic recycling can involve the complete degradation of a polymer back into monomers or other chemicals.
In general, the effects of heat, light, air and water are the most significant factors in the degradation of plastic polymers. The major chemical changes are oxidation and chain scission , leading to a reduction in the molecular weight and degree of polymerization of the polymer. These changes affect physical properties like strength, malleability , melt flow index , appearance and colour. The changes in properties are often termed "aging".
Plastics exist in huge variety, however several types of commodity polymer dominate global production: polyethylene (PE), polypropylene (PP), polyvinyl chloride (PVC), polyethylene terephthalate (PET, PETE), polystyrene (PS), polycarbonate (PC), and poly(methyl methacrylate) (PMMA). The degradation of these materials is of primary importance as they account for most plastic waste .
These plastics are all thermoplastics and are more susceptible to degradation than equivalent thermosets , as those are more thoroughly cross-linked . The majority (PP, PE, PVC, PS and PMMA) are addition polymers with all-carbon backbones that are more resistant to most types of degradation. PET and PC are condensation polymers which contain carbonyl groups more susceptible to hydrolysis and UV-attack .
Thermoplastic polymers (be they virgin or recycled) must be heated until molten to be formed into their final shapes, with processing temperatures anywhere between 150-320 °C (300–600 °F) depending on the polymer. [ 2 ] Polymers will oxidise under these conditions, but even in the absence of air, these temperatures are sufficient to cause thermal degradation in some materials. The molten polymer also experiences significant shear stress during extrusion and moulding, which is sufficient to snap the polymer chains. Unlike many other forms of degradation, the effects of melt-processing degrades the entire bulk of the polymer, rather than just the surface layers. This degradation introduces chemical weak points into the polymer, particularly in the form of hydroperoxides , which become initiation sites for further degradation during the object's lifetime.
Polymers are often subject to more than one round of melt-processing, which can cumulatively advance degradation. Virgin plastic typically undergoes compounding to introduce additives such as dyes, pigments and stabilisers. Pelletised material prepared in this way may also be pre-dried in an oven to remove trace moisture prior to its final melting and moulding into plastic items. Plastic which is recycled by simple re‑melting (mechanical recycling) will usually display more degradation than fresh material and may have poorer properties as a result. [ 3 ]
Although oxygen levels inside processing equipment are usually low, oxygen cannot be fully excluded, and thermal-oxidation will usually take place more readily than degradation that is exclusively thermal (i.e. without air). [ 4 ] Reactions follow the general autoxidation mechanism, leading to the formation of organic peroxides and carbonyls. The addition of antioxidants may inhibit such processes.
Heating polymers to a sufficiently high temperature can cause damaging chemical changes, even in the absence of oxygen. This usually starts with chain scission , generating free radicals , which primarily engage in disproportionation and crosslinking . PVC is the most thermally sensitive common polymer, with major degradation occurring from ~250 °C (480 °F) onwards; [ 5 ] other polymers degrade at higher temperatures. [ 6 ]
Molten polymers are non-Newtonian fluids with high viscosities, and the interaction between their thermal and mechanical degradation can be complex. At low temperatures, the polymer-melt is more viscous and more prone to mechanical degradation via shear stress . At higher temperatures, the viscosity is reduced, but thermal degradation is increased. Friction at points of high shear can also cause localised heating, leading to additional thermal degradation.
Mechanical degradation can be reduced by the addition of lubricants, also referred to as processing aids or flow aids. These can reduce friction against the processing machinery but also between polymer chains, resulting in a decrease in melt-viscosity. Common agents are high-molecular-weight waxes ( paraffin wax , wax esters , etc.) or metal stearates (e.g. zinc stearate ).
Most plastic items, like packaging materials, are used briefly and only once. These rarely experience polymer degradation during their service-lives. Other items experience only gradual degradation from the natural environment. Some plastic items, however, can experience long service-lives in aggressive environments, particularly those where they are subject to prolonged heat or chemical attack. Polymer degradation can be significant in these cases and, in practice, is often only held back by the use of advanced polymer stabilizers . Degradation arising from the effects of heat, light, air and water is the most common, but other means of degradation exist.
The in-service degradation of mechanical properties is an important aspect which limits the applications of these materials. Polymer degradation caused by in-service degradation can cause life threatening accidents. In 1996, a baby was fed via a Hickman line and suffered an infection, when new connectors were used by a hospital. The reason behind this infection was the cracking and erosion of the pipes from the inner side due to contact with liquid media. [ 7 ]
Drinking water which has been chlorinated to kill microbes may contain trace levels of chlorine. The World Health Organization recommends an upper limit of 5 ppm . [ 8 ] Although low, 5 ppm is enough to slowly attack certain types of plastic, particularly when the water is heated, as it is for washing.
Polyethylene, [ 9 ] [ 10 ] polybutylene [ 11 ] and acetal resin (polyoxymethylene) [ 12 ] pipework and fittings are all susceptible. Attack leads to hardening of pipework, which can leave it brittle and more susceptible to mechanical failure .
Plastics are used extensively in the manufacture of electrical items, such as circuit boards and electrical cables . These applications can be harsh, exposing the plastic to a mixture of thermal, chemical and electrochemical attack. Many electric items like transformers , microprocessors or high-voltage cables operate at elevated temperatures for years, or even decades, resulting in low-level but continuous thermal oxidation. This can be exacerbated by direct contact with metals, which can promote the formation of free-radicals, for instance, by the action of Fenton reactions on hydroperoxides. [ 13 ] High voltage loads can also damage insulating materials such as dielectrics , which degrade via electrical treeing caused by prolonged electrical field stress. [ 14 ] [ 15 ]
Polymer degradation by galvanic action was first described in the technical literature in 1990 by Michael C. Faudree, an employee at General Dynamics, Fort Worth Division. [ 16 ] [ 17 ] The phenomenon has been referred to as the "Faudree Effect", [ 18 ] and can possibly be used as a sustainable process to degrade non-recyclable thermoset plastics, and also has had implications for preventing corrosion on aircraft for safety such as changes in design. [ 19 ] [ 20 ] When carbon-fiber-reinforced polymer is attached to a metal surface, the carbon fiber can act as a cathode if exposed to water or sufficient humidity, resulting in galvanic corrosion . This has been seen in engineering when carbon-fiber polymers have been used to reinforce weakened steel structures. [ 21 ] [ 22 ] Reactions have also been seen in aluminium [ 23 ] and magnesium alloys, [ 24 ] polymers affected include bismaleimides (BMI), and polyimides . The mechanism of degradation is believed to involve the electrochemical generation of hydroxide ions, which then cleave the amide bonds. [ 25 ]
Most plastics do not biodegrade readily, [ 26 ] however, they do still degrade in the environment because of the effects of UV-light, oxygen, water and pollutants. This combination is often generalised as polymer weathering . [ 27 ] Chain breaking by weathering causes increasing embrittlement of plastic items, which eventually causes them to break apart. Fragmentation then continues until eventually microplastics are formed. As the particle sizes get smaller, so their combined surface area increases. This facilitates the leaching of additives out of plastic and into the environment. Many controversies associated with plastics actually relate to these additives. [ 28 ] [ 29 ]
Photo-oxidation is the combined action of UV-light and oxygen and is the most significant factor in the weathering of plastics. [ 27 ] Although many polymers do not absorb UV-light, they often contain impurities like hydroperoxide and carbonyl groups introduced during thermal processing, which do. These act as photoinitiators to give complex free radical chain reactions where the mechanisms of autoxidation and photodegradation combine. Photo-oxidation can be held back by light stabilizers such as hindered amine light stabilizers (HALS). [ 30 ]
Polymers with an all-carbon backbone, such as polyolefins , are usually resistant to hydrolysis. Condensation polymers like polyesters , [ 31 ] polyamides , polyurethanes and polycarbonates can be degraded by hydrolysis of their carbonyl groups, to give lower molecular weight molecules. Such reactions are exceedingly slow at ambient temperatures, however, they remain a significant source of degradation for these materials, particularly in the marine environment. [ 32 ] Swelling caused by the absorption of minute amounts of water can also cause environmental stress cracking , which accelerates degradation.
Polymers, which are not fully saturated , are vulnerable to attack by ozone . This gas exists naturally in the atmosphere but is also formed by nitrogen oxides released in vehicle exhaust pollution. Many common elastomers (rubbers) are affected, with natural rubber , polybutadiene , styrene-butadiene rubber and NBR being most sensitive to degradation. The ozonolysis reaction results in immediate chain scission. Ozone cracks in products under tension are always oriented at right angles to the strain axis, so will form around the circumference in a rubber tube bent over. Such cracks are dangerous when they occur in fuel pipes because the cracks will grow from the outside exposed surfaces into the bore of the pipe, and fuel leakage and fire may follow. The problem of ozone cracking can be prevented by adding antiozonants .
The major appeal of biodegradation is that, in theory, the polymer will be completely consumed in the environment without needing complex waste management and that the products of this will be non-toxic.
Most common plastics biodegrade very slowly, sometimes to the extent that they are considered non-biodegradable. [ 26 ] [ 33 ] As polymers are ordinarily too large to be absorbed by microbes, biodegradation initially relies on secreted extracellular enzymes to reduce the polymers to manageable chain-lengths. This requires the polymers bear functional groups the enzymes can 'recognise', such as ester or amide groups. Long-chain polymers with all-carbon backbones like polyolefins, polystyrene and PVC will not degrade by biological action alone [ 34 ] and must first be oxidised to create chemical groups which the enzymes can attack. [ 35 ] [ 36 ]
Oxidation can be caused by melt-processing or weathering in the environment. Oxidation may be intentionally accelerated by the addition of biodegradable additives . These are added to the polymer during compounding to improve the biodegradation of otherwise very resistant plastics. Similarly, biodegradable plastics have been designed which are intrinsically biodegradable, provided they are treated like compost and not just left in a landfill site where degradation is very difficult because of the lack of oxygen and moisture. [ 37 ]
The act of recycling plastic degrades its polymer chains, usually as a result of thermal damage similar to that seen during initial processing. In some cases, this is turned into an advantage by intentionally and completely depolymerising the plastic back into its starting monomers , which can then be used to generate fresh, un-degraded plastic. In theory, this chemical (or feedstock) recycling offers infinite recyclability, but it is also more expensive and can have a higher carbon footprint because of its energy costs. [ 3 ] Mechanical recycling, where the plastic is simply remelted and reformed, is more common, although this usually results in a lower-quality product. Alternatively, plastic may simply be burnt as a fuel in a waste-to-energy process. [ 38 ] [ 39 ]
Thermoplastic polymers like polyolefins can be remelted and reformed into new items. This approach is referred to as mechanical recycling and is usually the simplest and most economical form of recovery. [ 3 ] Post-consumer plastic will usually already bear a degree of degradation. Another round of melt-processing will exacerbate this, with the result being that mechanically recycled plastic will usually have poorer mechanical properties than virgin plastic. [ 40 ] Degradation can be enhanced by high concentrations of hydroperoxides, cross-contamination between different types of plastic and by additives present within the plastic. Technologies developed to enhance the biodegradation of plastic can also conflict with its recycling, with oxo-biodegradable additives, consisting of metallic salts of iron, magnesium, nickel, and cobalt, increasing the rate of thermal degradation. [ 41 ] [ 42 ] Depending on the polymer in question, an amount of virgin material may be added to maintain the quality of the product. [ 43 ]
As polymers approach their ceiling temperature , thermal degradation gives way to complete decomposition. Certain polymers like PTFE , polystyrene and PMMA [ 44 ] undergo depolymerization to give their starting monomers, whereas others like polyethylene undergo pyrolysis , with random chain scission giving a mixture of volatile products. Where monomers are obtained, they can be converted back into new plastic (chemical or feedstock recycling), [ 45 ] [ 46 ] [ 47 ] whereas pyrolysis products are used as a type of synthetic fuel (energy recycling). [ 48 ] In practice, even very efficient depolymerisation to monomers tends to see some competitive pyrolysis. Thermoset polymers may also be converted in this way, for instance, in tyre recycling .
Condensation polymers bearing cleavable groups such as esters and amides can also be completely depolymerised by hydrolysis or solvolysis . This can be a purely chemical process but may also be promoted by enzymes. [ 49 ] Such technologies are less well developed than those of thermal depolymerisation, but have the potential for lower energy costs. Thus far, polyethylene terephthalate has been the most heavily studied polymer. [ 50 ] Alternatively, waste plastic may be converted into other valuable chemicals (not necessarily monomers) by microbial action. [ 51 ] [ 52 ]
Hindered amine light stabilizers (HALS) stabilise against weathering by scavenging free radicals that are produced by photo-oxidation of the polymer matrix. UV-absorbers stabilise against weathering by absorbing ultraviolet light and converting it into heat. Antioxidants stabilise the polymer by terminating the radical chain reactions initiated when the polymer absorbs UV light from sunlight; left unchecked, these chain reactions lead to crosslinking or chain scission and degrade the polymer's properties. Antioxidants are also used to protect against thermal degradation.
Degradation can be detected before serious cracks are seen in a product using infrared spectroscopy . [ 53 ] In particular, peroxy-species and carbonyl groups formed by photo-oxidation have distinct absorption bands. | https://en.wikipedia.org/wiki/Polymer_degradation |
Polymer devolatilization , also known as polymer degassing, is the process of removing low-molecular-weight components such as residual monomers, solvents, reaction by-products and water from polymers. [ 1 ] : 1–12
When exiting a reactor after a polymerization reaction, many polymers still contain undesired low-molecular weight components. These components may make the product unusable for further processing (for example, a polymer solution cannot directly be used for plastics processing), may be toxic , may cause bad sensory properties such as an unpleasant smell or worsen the properties of the polymer. It may also be desirable to recycle monomers and solvents to the process. [ 1 ] : 1–12 Plastic recycling can also involve removal of water [ 2 ] [ 3 ] and volatile degradation products.
Devolatilization can be carried out when a polymer is in the solid or liquid phase, with the volatile components going into a liquid or gas phase. Examples are:
It is usual for different types of devolatilization steps to be combined to overcome limitations in the individual steps.
The thermodynamic activity of volatiles needs to be higher in the polymer than in the other phase for them to leave the polymer. [ 7 ] In order to design such a process, the activity needs to be calculated. This is usually done via the Flory–Huggins solution theory . [ 1 ] : 14–34 This effect can be enhanced via higher temperatures or lower partial pressure of the volatile component by applying an inert gas or lower pressure.
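A minimal Python sketch, assuming the standard Flory–Huggins expression for the activity of a volatile component in a high polymer (the interaction parameter and concentrations below are illustrative values, not taken from the source), shows how the driving force for devolatilization can be estimated:

import math

def solvent_activity(phi_volatile, chi, chain_length_ratio=1e6):
    # Flory-Huggins activity of a volatile component dissolved in a polymer:
    # ln a1 = ln(phi1) + (1 - 1/r) * phi2 + chi * phi2**2
    phi_polymer = 1.0 - phi_volatile
    ln_a = (math.log(phi_volatile)
            + (1.0 - 1.0 / chain_length_ratio) * phi_polymer
            + chi * phi_polymer ** 2)
    return math.exp(ln_a)

# The volatile component leaves the melt only while its activity exceeds the
# activity imposed by the gas phase (partial pressure over vapour pressure),
# so lowering the pressure or adding inert gas widens this driving force.
for fraction in (1e-2, 1e-3, 1e-4):
    print(f"phi = {fraction:.0e} -> activity = {solvent_activity(fraction, chi=0.4):.2e}")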
In order to be removed from the polymer, the volatile components need to travel to a phase boundary via diffusion . Because of the low diffusion coefficients of volatiles in polymers, this can be the rate-determining step. [ 1 ] : 35–65 [ 8 ] This effect can be enhanced by higher temperatures or by shorter diffusion lengths, which correspond to a higher Fourier number .
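The role of the diffusion length can be made concrete with the Fourier number Fo = D·t/L²; the short Python sketch below uses assumed, illustrative values for the diffusivity and residence time rather than data from the source:

def fourier_number(diffusivity, residence_time, diffusion_length):
    # Dimensionless Fourier number Fo = D*t / L^2; larger values mean the
    # volatile component has enough time to diffuse out of the layer.
    return diffusivity * residence_time / diffusion_length ** 2

D = 1e-10   # m^2/s, assumed diffusivity of a small molecule in a polymer melt
t = 60.0    # s, assumed residence time in the devolatilizer
for L in (1e-2, 1e-3, 1e-4):   # film or strand thickness in metres
    print(f"L = {L:.0e} m -> Fo = {fourier_number(D, t, L):.3g}")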
Because polymers and polymer solutions often have a very high viscosity, the flow in devolatilizers is laminar , leading to low heat transfer coefficients , which can also be a limiting factor. [ 8 ]
Higher temperatures can also affect the chemical stability of the polymer and thus its use properties. If a polymer's ceiling temperature is exceeded, it will partially revert to its monomers, destroying its usability. [ 1 ] More generally, polymer degradation also occurs during devolatilization, limiting the temperature and residence time available for the process.
There are two basic forms of devolatilization to a vacuum. In foam devolatilization, bubbles inside the polymer solution nucleate and grow, finally bursting and releasing their volatile content to the surroundings. This requires sufficient vapor pressure. [ 1 ] : 67–190 [ 9 ] If possible, this is a very efficient method because the volatiles only need to diffuse a short way. [ 8 ]
Film devolatilization occurs when there is no longer sufficient vapor pressure to generate bubbles, [ 9 ] and relies on sufficient surface area and good mixing. In this case, a stripping agent such as nitrogen may be added to the polymer to induce improved mass transfer through bubbles. [ 8 ] [ 10 ]
Devolatilizers for polymer melts are classified as static or moving, also called "still" and "rotating" in the literature.
Static devolatilizers include:
Removal of monomers and solvents from latex and suspensions, for example in the production of synthetic rubber , is usually done via stirred vessels. [ 1 ] : 507–560 | https://en.wikipedia.org/wiki/Polymer_devolatilization |
A polymer electrolyte is a polymer matrix capable of ion conduction . [ 1 ] Much like other types of electrolyte —liquid and solid-state —polymer electrolytes aid in movement of charge between the anode and cathode of a cell. [ 1 ] [ 2 ] [ 3 ] The use of polymers as an electrolyte was first demonstrated using dye-sensitized solar cells . [ 4 ] The field has expanded since and is now primarily focused on the development of polymer electrolytes with applications in batteries , fuel cells, and membranes . [ 4 ] [ 5 ] [ 6 ]
Generally, polymer electrolytes comprise a polymer which incorporates a highly polar motif capable of electron donation . [ 2 ] Performance parameters impact selection of homogeneous or heterogeneous electrolytes. [ 1 ] [ 2 ] There exist four major types of polymer electrolyte: (1) gel polymer electrolyte, (2) solid-state polymer electrolyte, (3) plasticized polymer electrolyte, and (4) composite polymer electrolyte. [ 1 ] [ 2 ] The degree of crystallinity of a polymer electrolyte matrix impacts ion mobility and the transport rate. Amorphous regions promote greater percolation of charge in gel and plasticized polymer electrolytes. [ 2 ] [ 3 ] [ 7 ] Crystal defects promote weaker chain-ion interactions.
Another key transport parameter is the temperature dependence of polymer morphology, and hence of the transport mechanism, around the glass transition temperature. [ 1 ] [ 10 ] These electrolytes differ from one another in their processing methods and applications where they are to be used. Their properties and morphology can be tuned to suit the intended application. A shared structural feature of these polymers is the presence of a heteroatom , namely nitrogen or oxygen , although sulfur has also been demonstrated. [ 1 ] [ 2 ] [ 10 ]
Many of these polymers have other applications. The structures of several of these polymers are shown in the adjacent image. Other types of polymers capable of ion conduction include polymeric ions, which incorporate either an oxidized (for anion transport) or reduced element of the polymer main chain through a process called chemical doping. [ 10 ] Chemical doping makes these polymers behave as either n-type or p-type semiconductors .
The mechanical strength of a polymer electrolyte is an important parameter for its dendrite suppression capabilities. It is theorized that a polymer electrolyte with a shear modulus twice that of metallic lithium should be able to physically suppress dendrite formation. [ 11 ] High elastic moduli or yield strengths can similarly decrease the uneven lithium deposition that leads to dendrite formation. Higher shear moduli polymer electrolytes have lower ionic conductivity due to their increased stiffness impeding polymer chain mobility and ion movement. [ 12 ] The contrasting relationship between tensile strength and ionic conductivity inspires research into plasticized and composite polymer electrolytes.
Gel polymer electrolytes capture solvent constituents and aid in ion transport across the polymer matrix. The gel supports the polymer scaffold. [ 1 ] [ 3 ] It is noted that amorphous domains of these polymers absorb larger amounts of solvent (and swell accordingly) than do crystalline domains. As a result, ion conduction, which is primarily a diffusion-controlled process, is typically greater across regions of amorphous character than through crystalline domains. The adjacent image illustrates this process. An important aspect of gel electrolytes is the choice of solvent, primarily based on its dielectric constant, which is noted to impact ion conductivity. [ 3 ] Percolation of charge does occur in highly ordered polymer electrolytes, but the number and proximity of amorphous domains is correlated with increased percolation of charge. [ 3 ]
Gel polymer electrolytes using poly(ethylene oxide) (PEO) are the most studied due to its compatibility with lithium electrodes. However, the plasticizing of PEO decreases the mechanical strength of these electrolytes. Gel polymer electrolytes that combine PEO with mechanically strong polymers such as poly(vinylidene fluoride) (PVDF) can benefit from improved mechanical strength while maintaining the good electrochemical properties of PEO. [ 13 ] A typical tensile strength for a gel polymer electrolyte is around 0.5 MPa, while typical yield strength and shear strength measurements are around 1 MPa. A typical elastic modulus for a gel polymer electrolyte is 10 MPa, which is two orders of magnitude below that of a typical liquid electrolyte. [ 14 ]
Gel polymer electrolytes have also shown specific applications in lithium-ion batteries as replacements for current organic liquid electrolytes. This type of electrolyte can also be prepared from renewable and degradable polymers while remaining capable of mitigating current issues at the cathode-electrolyte interface. [ 15 ]
Solid-state polymer electrolyte (also known as solid polymer electrolyte [ 16 ] or solvent-free polymer electrolyte [ 17 ] ) arises from coordination of an inorganic salt to the polymer matrix. Application of a potential results in ion exchange through coordination , decoordination, and recoordination along the polymer. [ 2 ] Performance of the electrochemical cell is influenced by the activity of the salt. The potential between the phases and charge transport through the electrolyte is impacted. [ 1 ] [ 2 ] Solid-state polymer electrolytes have also been employed in processing of gallium nitride wafers by providing a liquid- and radiation-free method of oxidizing the surface of the gallium nitride wafer to enable easier polishing of the wafer than previous methods. [ 18 ]
Recent research has focused on characterizing the dynamics of solid polymer electrolytes (SPEs), including transference number, coordination strength, and conductivity. [ 19 ] In SPEs, cations migrate through the electrolyte medium, driven by the electric field between the positive and negative electrodes. This migration is associated with the formation of polymer–salt complexes and is followed by localized motion of polymer segments, as well as inter- and intra-chain ion hopping between coordinating sites. [ 20 ] Specifically, ion transport in SPEs can be described as a ligand exchange process within the coordination structure of the cations. [ 21 ] Consequently, the coordination structure has a significant impact on the cation’s contribution to the total conductivity.
Plasticized polymer electrolyte is a polymer matrix with incorporated plasticizers that enhance their ion conductivity by weakening intra- and interchain interactions that compete with ion-polymer interactions. [ 2 ] A similar phenomenon to that previously discussed with polymer gel electrolytes is observed with plasticized polymer electrolytes. The addition of plasticizer lowers the glass transition temperature of the polymer and effectively enhances salt dissociation into the polymer matrix which increases the ability of the polymer electrolyte to transport ions. [ 1 ] [ 2 ] One limitation of plasticizer incorporation is the alteration of the polymer's mechanical properties. Reduction in the crystallinity of the polymer weakens its mechanical strength at room temperature. [ 2 ] Plasticizers also modulate properties of polymer electrolytes other than conductivity such as affecting charge/discharge times and enhanced capacity. [ 22 ]
Composite polymer electrolyte is a polymer matrix that incorporates inorganic fillers that are chemically inert, but with a high dielectric constant to enhance ion conductivity by inhibiting the formation of ion pairs in the polymer matrix. [ 2 ] It has been demonstrated that the blending of polymer electrolytes with an inorganic filler affords a composite material with properties exceeding the sum of those of the individual components. [ 2 ] [ 5 ] In particular, ion conduction in polymer electrolytes is low (compared to liquid and solid-state electrolytes), but blending with inorganic materials has been shown to enhance the ion mobility and conductivity of the polymer electrolyte. The additional benefit is that the desirable properties of the polymer are maintained, particularly its mechanical strength. [ 2 ]
Ceramic materials such as SiO 2 , Al 2 O 3 , and TiO 2 are popular filler materials that will improve the mechanical properties of the composite electrolyte, increase the lithium-ion transference number, and improve ionic conductivity. The improved conductivity comes from the decreased crystallinity of the material. On their own, these ceramic fillers are brittle and of low dielectric permittivity. Metal-organic framework (MOF) particles can also be used as a filler material with high surface area and high chemical and thermal stability. 2D boron nitride is a potential filler material due to its high mechanical strength arising from modulation of the electrolyte membrane. [ 23 ]
Discussion of ion transport mechanisms here focuses primarily on the transport of cations, as cation-conductive polymers are a greater area of academic focus owing to the widespread use of lithium-ion batteries and to efforts aimed at developing multivalent metal ion batteries such as magnesium . [ 1 ] [ 10 ] Ion conductivity largely depends on the effective concentration of mobile ions (free ions), electric charge , and ion mobility. Ion mobility is defined as the ability of an ion to move between polar groups along the length of the main chain of a polymer. [ 3 ] [ 10 ]
There exist two transport methods: by chemical potential ( diffusion ) and by electric potential . Ions partition between different phases of the electrolyte, and diffuse based on ionic conductivity, the salt diffusion coefficient of the electrolyte, and the cationic transference number. [ 1 ] Ionic transport is also controlled by the electrical potential gradient across the cell. [ 2 ]
The temperature dependence of the electrolyte impacts performance over a range of temperatures, with the glass transition temperature as the key reference point. [ 1 ] At or above the glass transition temperature, chain motions are believed to generate free volume through which the ions are able to move, aided by weak, labile coordination between the ion and parts of the polymer chain. [ 3 ] In certain applications thin films of polymer electrolytes are needed, which necessitates careful control of morphology and properties due to deviations in the glass transition temperature and other mechanical properties associated with increasingly thin films of amorphous polymer electrolytes. [ 24 ]
Ion transport is impacted by the concentration of the counterion and the ability of polymer chains to remain mobile. [ 1 ] It is commonly believed that the greater the ability of a polymer matrix to move, the better the ion conductivity will be; however, this is not well understood as crystalline polymer electrolytes have been shown to be more conductive than an amorphous version of the same electrolyte. It is believed there are multiple modes of ion transport. In crystalline polymer electrolytes, the organization of the chains promotes the formation of interchain "tunnels" in which the ion of interest is able to hop between coordination sites, while the counterion moves along the polymer chain. [ 3 ] These tunnels allow control over anion and cation flow in crystalline polymer electrolytes because the highly ordered crystalline domains are selective for one ion and exclude its counterion, allowing for their separation. [ 25 ] This can increase conductivity in crystalline polymer electrolytes. In amorphous polymers that show enhanced conductivity, it is proposed that the amorphous character enables greater movement of chains, and this increases the mobility of ions as their coordination is transient. [ 3 ] [ 4 ] The adjacent image illustrates a possible mechanism for ion transport through short range chain ordering and motions in amorphous regions of polymer electrolytes.
There are several factors to be optimized in the design of polymer electrolytes such as ion conductivity, mechanical strength, and being chemically inert. [ 1 ] [ 2 ] [ 3 ] These properties are typically characterized using a variety of techniques that exist and are already employed in the characterization of conductive polymers.
Complex impedance spectroscopy , also known as dielectric spectroscopy, enables characterization of the conductivity and permittivity of both heterogeneous and homogenous polymer electrolytes. [ 1 ] The technique is useful for characterizing the electrical properties of bulk material and is capable of differentiating between the electrical properties of the bulk electrolyte and the electrical properties at the interface of the electrolyte with the electrode(s). [ 1 ] [ 2 ] Several important characteristics can be measured including impedance, admittance, modulus, and permittivity (dielectric constant and loss). Complex impedance spectroscopy has also been used to gain insight into how dopants and electrode parameters affect permittivity. Recent research has focused on probing the conducting relaxation of polymer electrolytes based on their conductance and electrode parameters. [ 2 ]
Determination of the glass transition temperature, and methods for characterizing the mechanical properties of polymer electrolytes are also useful. Related to the glass transition are some of the proposed mechanisms for ion conduction. [ 1 ] Other methods of thermal characterization include differential scanning calorimetry , thermogravimetric analysis , and methods used to characterize the specific electronic devices that these materials may be incorporated into. [ 26 ]
Polymer electrolytes are distinct from solid inorganic and liquid electrolytes and offer several advantages including flexibility , processability, robustness, and safety. Conventional inorganic and liquid electrolytes are rigid or fail to perform in situations requiring high strain or bending forces, which can fracture the electrolyte or the vessel containing the electrolyte. Polymers, typically mixed with a plasticizer, do not have this problem, which increases their desirability. [ 2 ] [ 3 ] Additionally, the high processability of compatible polymers results in simpler design and construction of the chemical cell. Polymer electrolytes also resist electrode volume changes associated with the charge and discharge of a cell. As a part of this, polymer electrolytes have been demonstrated to better resist the development of destructive dendrites in lithium-ion batteries. [ 1 ] [ 9 ] The shear moduli of polymer electrolytes can exceed those of lithium metal, which aids in preventing dendrite growth. Blended polymer electrolytes prepared from glassy and rubbery polymers have been demonstrated to all but halt dendrite formation, but they are limited by issues with conductivity. [ 27 ] Finally, polymer electrolytes are relatively safe compared to liquid and solid-state electrolytes, [ 2 ] [ 3 ] which are typically highly reactive in air and flammable. Generally, it has been demonstrated that several polymer electrolytes resist degradation in air and resist combustion. [ 2 ] [ 3 ]
Much of the interest in polymer electrolytes stems from their flexibility and enhanced safety over the inorganic and liquid electrolytes alternatively used in batteries. [ 1 ] Solid-state and composite electrolytes enable development of solid-state lithium-ion batteries. Dendrite formation is also noted to be limited by polymer electrolytes due to their ability to aid in halting growth of lithium crystals precipitating from the electrolyte. [ 1 ] [ 9 ] The performance of different polymers contributes to some polymer electrolytes being better candidates than others for integration into a particular cell. [ 1 ] [ 2 ] [ 3 ] [ 6 ]
Conductive polymer membranes are a growing area of application for polymer electrolytes. These membranes generally require high ionic conductivity, low permeability, thermal and hydrolytic stability, and morphological and mechanical stability. [ 7 ] [ 10 ] An example of membranes made from conductive polymers is their use as selective barriers in multifunctional micelles . [ 10 ] Fuel cell applications of polymer electrolytes typically employ perfluorosulfonic acid membranes capable of selective proton conduction from the anode to the cathode. Such fuel cells are able to generate electrical energy from hydrogen or methanol fuels. [ 7 ] However, current conductive polymer membranes are limited by requiring humidification , and they face durability issues related to their mechanical properties. [ 7 ] [ 10 ] The presence of a polymer electrolyte, particularly one that is solid-state, enables a reduction in device thickness and shorter mass transport distances, which contribute to an overall enhanced cell efficiency over devices with other electrolytes. [ 28 ]
Polymer electrolytes have also seen widespread use in capacitors . All-plastic capacitors can be prepared either by sandwiching a solid-state polymer electrolyte between two plastic electrodes, or by connecting the electrodes through a polymeric ionic liquid electrolyte. [ 29 ] Blends of polymer electrolytes such as poly(vinyl alcohol) and poly(chitosan) show high capacitance and stability and are an advantageous alternative to capacitors prepared with more resource-sensitive materials. [ 30 ] | https://en.wikipedia.org/wiki/Polymer_electrolytes |
Polymer engineering is generally an engineering field that designs, analyses, and modifies polymer materials. Polymer engineering covers aspects of the petrochemical industry , polymerization , structure and characterization of polymers, properties of polymers, compounding and processing of polymers and description of major polymers, structure property relations and applications.
The word “polymer” was introduced by the Swedish chemist J. J. Berzelius. He considered, for example, benzene (C 6 H 6 ) to be a polymer of ethyne (C 2 H 2 ). Later, this definition underwent a subtle modification. [ 1 ]
The human use of polymers has a long history; chemical modification of natural polymers began in the mid-19th century. In 1839, Charles Goodyear made a critical advance in the research of rubber vulcanization , which turned natural rubber into a practical engineering material. [ 2 ] In 1870, J. W. Hyatt used camphor to plasticize nitrocellulose, making nitrocellulose plastics industrially viable. In 1907, L. Baekeland reported the synthesis of the first thermosetting phenolic resin, which was industrialized in the 1920s as the first synthetic plastic product. [ 3 ] In 1920, H. Staudinger proposed that polymers are long-chain molecules in which structural units are connected by ordinary covalent bonds. [ 4 ] This conclusion laid the foundation for modern polymer science. Subsequently, Carothers divided synthetic polymers into two broad categories: polycondensates obtained by polycondensation reactions and addition polymers obtained by polyaddition reactions. In the 1950s, K. Ziegler and G. Natta discovered coordination polymerization catalysts and pioneered the era of synthesis of stereoregular polymers. In the decades after the establishment of the concept of macromolecules, the synthesis of high polymers developed rapidly, and many important polymers were industrialized one after another.
The basics of division of polymers into thermoplastics , elastomers and thermosets helps define their areas of application.
Thermoplastic refers to a plastic that has heat softening and cooling hardening properties. Most of the plastics we use in our daily lives fall into this category. It becomes soft and even flows when heated, and the cooling becomes hard. This process is reversible and can be repeated. Thermoplastics have relatively low tensile moduli , but also have lower densities and properties such as transparency which make them ideal for consumer products and medical products . They include polyethylene , polypropylene , nylon , acetal resin , polycarbonate and PET , all of which are widely used materials. [ 5 ]
An elastomer generally refers to a material that can be restored to its original state after removal of an external force, whereas a material having elasticity is not necessarily an elastomer. An elastomer deforms under weak stress and, once the stress is removed, quickly recovers to a state close to its original shape and size. Elastomers are polymers which have very low moduli and show reversible extension when strained, a valuable property for vibration absorption and damping. They may either be thermoplastic (in which case they are known as Thermoplastic elastomers ) or crosslinked, as in most conventional rubber products such as tyres . Typical rubbers used conventionally include natural rubber , nitrile rubber , polychloroprene , polybutadiene , styrene-butadiene and fluorinated rubbers.
A thermosetting plastic uses a thermosetting resin as its main component and is formed into a product by a cross-linking curing process in combination with various necessary additives. It is liquid in the early stage of the manufacturing or molding process, becomes insoluble and infusible after curing, and cannot be melted or softened again. Common thermosetting plastics are phenolic plastics, epoxy plastics, aminoplasts, unsaturated polyesters, alkyd plastics, and the like. Thermoset plastics and thermoplastics together constitute the two major categories of synthetic plastics. Thermosetting plastics are divided into two types: formaldehyde cross-linking types and other cross-linking types.
Thermosets includes phenolic resins , polyesters and epoxy resins , all of which are used widely in composite materials when reinforced with stiff fibers such as fiberglass and aramids . Since crosslinking stabilises the thermoset polymer matrix of these materials, they have physical properties more similar to traditional engineering materials like steel . However, their very much lower densities compared with metals makes them ideal for lightweight structures. In addition, they suffer less from fatigue , so are ideal for safety-critical parts which are stressed regularly in service.
Plastic is a polymer compound produced by polyaddition or polycondensation reactions; its composition and shape can be varied freely. It is made up of synthetic resins together with fillers, plasticizers, stabilizers, lubricants, colorants and other additives. [ 6 ] The main component of plastic is resin . Here, resin refers to the polymer compound before any additives have been mixed in. The term resin was originally applied to oily secretions of plants and animals, such as rosin and shellac . Resin accounts for approximately 40% - 100% of the total weight of the plastic. The basic properties of plastics are mainly determined by the nature of the resin, but additives also play an important role. Some plastics consist essentially of synthetic resin alone, with few or no additives, such as plexiglass and polystyrene . [ 7 ]
Fiber refers to a continuous or discontinuous filament of a substance. Animal and plant fibers play an important role in maintaining tissue. Fibers are widely used: they can be spun into threads, yarns and ropes, formed into fibrous layers when making paper or felt, and combined with other materials to form composites. The term covers both natural and synthetic filamentous materials. In modern life the applications of fiber are ubiquitous, including many high-tech products. [ 8 ]
Rubber refers to highly elastic polymer materials with reversible deformability. It is elastic at room temperature and can be deformed with a small external force; after the external force is removed, it can return to its original state. Rubber is a completely amorphous polymer with a low glass transition temperature and a large molecular weight, often greater than several hundred thousand. Highly elastic polymer compounds can be classified into natural rubber and synthetic rubber. Natural rubber is obtained by processing gum rubber and grass rubber extracted from plants; synthetic rubber is polymerized from various monomers. Rubber can be used as an elastic, insulating, water- and air-impermeable material.
Commonly used polyethylenes can be classified into low density polyethylene (LDPE), high density polyethylene (HDPE), and linear low density polyethylene (LLDPE). Among them, HDPE has better thermal, electrical and mechanical properties, while LDPE and LLDPE have better flexibility, impact properties and film forming properties. LDPE and LLDPE are mainly used for plastic bags, plastic wraps, bottles, pipes and containers; HDPE is widely used in various fields such as film, pipelines and daily necessities because of its resistance to many different solvents. [ 9 ]
Polypropylene is widely used in various applications due to its good chemical resistance and weldability. It has the lowest density among commodity plastics. It is commonly used in packaging, consumer goods, automotive applications and medical applications. Polypropylene sheets are widely used in the industrial sector to produce acid and chemical tanks, sheets, pipes, Returnable Transport Packaging (RTP), etc., because of properties such as high tensile strength, resistance to high temperatures and corrosion resistance. [ 10 ]
Typical uses of composites are monocoque structures for aerospace and automobiles , as well as more mundane products like fishing rods and bicycles . The stealth bomber was the first all-composite aircraft, but many passenger aircraft like the Airbus and the Boeing 787 use an increasing proportion of composites in their fuselages, such as hydrophobic melamine foam . [ 11 ] The quite different physical properties of composites gives designers much greater freedom in shaping parts, which is why composite products often look different from conventional products. On the other hand, some products such as drive shafts , helicopter rotor blades, and propellers look identical to metal precursors owing to the basic functional needs of such components.
Biodegradable polymers are widely used materials for many biomedical and pharmaceutical applications. These polymers are considered very promising for controlled drug delivery devices. Biodegradable polymers also offer great potential for wound management, orthopaedic devices, dental applications and tissue engineering . Unlike non-biodegradable polymers, they do not require a second procedure for removal from the body. Biodegradable polymers break down and are absorbed by the body after they have served their purpose. Since 1960, polymers prepared from glycolic acid and lactic acid have found a multitude of uses in the medical industry. Polylactates (PLAs) are popular for drug delivery systems due to their fast and adjustable degradation rates. [ 12 ]
Membrane techniques have been successfully used for years in separations in liquid and gas systems, and polymeric membranes are used most commonly because they are cheaper to produce and their surfaces are easy to modify, which makes them suitable for different separation processes. Polymeric membranes help in many fields, including the separation of biologically active compounds, proton exchange membranes for fuel cells and membrane contactors for the carbon dioxide capture process. | https://en.wikipedia.org/wiki/Polymer_engineering |
Polymers are chainlike molecules that are made of the same repetition unit. With a few exceptions such as proteins , a polymer consists of a mix of molecules with different chain lengths. Therefore, average values are given for the molecular weight like the number average , the weight average or the viscosity average molar mass. A measure for the width of the molecular weight distribution is the polydispersity index . The targeted manipulation of the molecular weight distribution of a polymer by removing short and/or long chain material is called polymer fractionation .
The molecular weight of polymers has a large influence on their properties and therefore determines the applications. Among others the flow behavior, the solubility , the mechanical properties but also the lifetime are influenced by the molecular weight. For high duty polymers – polymers that have to fulfill elevated demands – not only the molecular weight but also the molecular weight distribution is important. This especially holds true if low and/or high molecular material disturbs a given task.
Polymers can be fractionated on an analytical scale by size exclusion chromatography (SEC), Matrix-assisted laser desorption/ionization (MALDI) or field flow fractionation (FFF). These methods are used to determine the molecular weight distribution.
In most cases the fractionation of polymers on a preparative scale is based on chromatographic methods (e.g. preparative SEC or Baker-Williams fractionation ). Therefore, the production is normally limited to a few grams only. For larger scales of several grams up to kilograms or even tons, “continuous spin fractionation” can be used. F. Francuskiewicz gives an overview of preparative polymer fractionation. | https://en.wikipedia.org/wiki/Polymer_fractionation |
Polymer fume fever or fluoropolymer fever , also informally called Teflon flu , is an inhalation fever caused by the fumes released when polytetrafluoroethylene (PTFE, known under the trade name Teflon ) reaches temperatures of 300 °C (572 °F) to 450 °C (842 °F). [ 1 ]
When PTFE is heated above 450 °C the pyrolysis products are different and inhalation may cause acute lung injury . [ 2 ] Symptoms are flu-like (chills, headaches and fevers) with chest tightness and mild cough. Onset occurs about 4 to 8 hours after exposure to the pyrolysis products of PTFE. [ 3 ] A high white blood cell count may be seen and chest x-ray findings are usually minimal.
The polymer fumes are especially harmful to certain animals whose breathing , optimized for rapidity, allows in toxins which are excluded by human lungs . Fumes from Teflon in very high heat are fatal to parrots , [ 4 ] as well as some other birds (PTFE toxicosis). [ 5 ] | https://en.wikipedia.org/wiki/Polymer_fume_fever |
Polymer physics is the field of physics that studies polymers , their fluctuations, mechanical properties , as well as the kinetics of reactions involving degradation of polymers and polymerisation of monomers . [ 1 ] [ 2 ] [ 3 ] [ 4 ]
While it focuses on the perspective of condensed matter physics , polymer physics was originally a branch of statistical physics . Polymer physics and polymer chemistry are also related to the field of polymer science , which is considered to be the applicative part of polymers.
Polymers are large molecules and thus too complicated to treat with deterministic methods. Yet, statistical approaches can yield results and are often pertinent, since large polymers (i.e., polymers with many monomers ) are described efficiently in the thermodynamic limit of infinitely many monomers (although the actual size is clearly finite).
Thermal fluctuations continuously affect the shape of polymers in liquid solutions, and modeling their effect requires the use of principles from statistical mechanics and dynamics. As a corollary, temperature strongly affects the physical behavior of polymers in solution, causing phase transitions, melts, and so on.
The statistical approach to polymer physics is based on an analogy between polymer behavior and either Brownian motion or another type of a random walk , the self-avoiding walk . The simplest possible polymer model is presented by the ideal chain , corresponding to a simple random walk. Experimental approaches for characterizing polymers are also common, using polymer characterization methods, such as size exclusion chromatography , viscometry , dynamic light scattering , and Automatic Continuous Online Monitoring of Polymerization Reactions (ACOMP) [ 5 ] [ 6 ] for determining the chemical, physical, and material properties of polymers. These experimental methods help the mathematical modeling of polymers and give a better understanding of the properties of polymers.
Models of polymer chains are split into two types: "ideal" models, and "real" models. Ideal chain models assume that there are no interactions between chain monomers. This assumption is valid for certain polymeric systems, where the positive and negative interactions between the monomer effectively cancel out. Ideal chain models provide a good starting point for the investigation of more complex systems and are better suited for equations with more parameters.
Interactions between chain monomers can be modelled as excluded volume . This causes a reduction in the conformational possibilities of the chain, and leads to a self-avoiding random walk. Self-avoiding random walks have different statistics to simple random walks.
The statistics of a single polymer chain depends upon the solubility of the polymer in the solvent. For a solvent in which the polymer is very soluble (a "good" solvent), the chain is more expanded, while for a solvent in which the polymer is insoluble or barely soluble (a "bad" solvent), the chain segments stay close to each other. In the limit of a very bad solvent the polymer chain merely collapses to form a hard sphere, while in a good solvent the chain swells in order to maximize the number of polymer-fluid contacts. [ 12 ] For this case the radius of gyration is approximated using Flory's mean field approach, which yields the scaling R g ∝ N ν {\displaystyle R_{g}\propto N^{\nu }} ,
where R g {\displaystyle R_{g}} is the radius of gyration of the polymer, N {\displaystyle N} is the number of bond segments (equal to the degree of polymerization) of the chain and ν {\displaystyle \nu } is the Flory exponent .
For good solvent, ν ≈ 3 / 5 {\displaystyle \nu \approx 3/5} ; for poor solvent, ν = 1 / 3 {\displaystyle \nu =1/3} . Therefore, polymer in good solvent has larger size and behaves like a fractal object. In bad solvent it behaves like a solid sphere.
In the so-called θ {\displaystyle \theta } solvent, ν = 1 / 2 {\displaystyle \nu =1/2} , which is the result of simple random walk. The chain behaves as if it were an ideal chain.
The quality of a solvent depends also on temperature. For a flexible polymer, a low temperature may correspond to poor quality and a high temperature can make the same solvent good. At a particular temperature, called the theta (θ) temperature, the chain behaves as an ideal chain .
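A short Python sketch (illustrative only; the numerical prefactor of order unity is omitted, and the chain parameters are assumed) compares the Flory-type size estimates for the three solvent regimes:

def radius_of_gyration_estimate(n_segments, segment_length, nu):
    # Flory-type scaling R_g ~ b * N**nu, ignoring the prefactor of order one.
    return segment_length * n_segments ** nu

N, b = 1000, 1.0   # assumed chain of 1000 segments of unit length
for regime, nu in (("good solvent", 3 / 5), ("theta solvent", 1 / 2), ("poor solvent", 1 / 3)):
    print(f"{regime}: R_g ~ {radius_of_gyration_estimate(N, b, nu):.1f} segment lengths")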
The ideal chain model assumes that polymer segments can overlap with each other as if the chain were a phantom chain. In reality, two segments cannot occupy the same space at the same time. This interaction between segments is called the excluded volume interaction.
The simplest formulation of excluded volume is the self-avoiding random walk, a random walk that cannot repeat its previous path. A path of this walk of N steps in three dimensions represents a conformation of a polymer with excluded volume interaction. Because of the self-avoiding nature of this model, the number of possible conformations is significantly reduced. The radius of gyration is generally larger than that of the ideal chain.
Whether a polymer is flexible or not depends on the scale of interest. For example, the persistence length of double-stranded DNA is about 50 nm. Looking at length scale smaller than 50 nm, it behaves more or less like a rigid rod. [ 13 ] At length scale much larger than 50 nm, it behaves like a flexible chain.
Reptation is the thermal motion of very long, linear, entangled macromolecules in polymer melts or concentrated polymer solutions. Derived from the word reptile , reptation suggests the movement of entangled polymer chains as being analogous to snakes slithering through one another. [ 14 ] Pierre-Gilles de Gennes introduced (and named) the concept of reptation into polymer physics in 1971 to explain the dependence of the mobility of a macromolecule on its length. Reptation is used as a mechanism to explain viscous flow in an amorphous polymer. [ 15 ] [ 16 ] Sir Sam Edwards and Masao Doi later refined reptation theory. [ 17 ] [ 18 ] The consistent theory of thermal motion of polymers was given by Vladimir Pokrovskii. [ 19 ] [ 20 ] [ 21 ] Similar phenomena also occur in proteins. [ 22 ]
The study of long chain polymers has been a source of problems within the realms of statistical mechanics since about the 1950s. One of the reasons however that scientists were interested in their study is that the equations governing the behavior of a polymer chain were independent of the chain chemistry. What is more, the governing equation turns out to be a random walk , or diffusive walk, in space. Indeed, the Schrödinger equation is itself a diffusion equation in imaginary time, t' = it .
The first example of a random walk is one in space, whereby a particle undergoes a random motion due to external forces in its surrounding medium. A typical example would be a pollen grain in a beaker of water. If one could somehow "dye" the path the pollen grain has taken, the path observed is defined as a random walk.
Consider a toy problem of a train moving along a 1D track in the x-direction. Suppose that the train moves either a distance of + b or − b ( b is the same for each step), depending on whether a coin lands heads or tails when flipped. Let's start by considering the statistics of the steps the toy train takes (where S i is the ith step taken):
The second quantity is known as the correlation function . The delta is the Kronecker delta , which tells us that if the indices i and j are different, then the result is 0, but if i = j then the Kronecker delta is 1, so the correlation function returns a value of b 2 . This makes sense, because if i = j then we are considering the same step. Rather trivially then it can be shown that the average displacement of the train on the x-axis is 0;
As stated ⟨ S i ⟩ = 0 {\displaystyle \langle S_{i}\rangle =0} , so the sum is still 0.
The same method demonstrated above can also be used to calculate the root mean square displacement for this problem. The result of this calculation is ⟨ x 2 ⟩ = N b 2 {\displaystyle \langle x^{2}\rangle =Nb^{2}} , so that the root mean square displacement is x r m s = b N {\displaystyle x_{\mathrm {rms} }=b{\sqrt {N}}} .
From the diffusion equation it can be shown that the distance a diffusing particle moves in a medium is proportional to the root of the time the system has been diffusing for, where the proportionality constant is the root of the diffusion constant. The above relation, although cosmetically different reveals similar physics, where N is simply the number of steps moved (is loosely connected with time) and b is the characteristic step length. As a consequence we can consider diffusion as a random walk process.
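The scaling ⟨ x 2 ⟩ = N b 2 can also be checked numerically; the following Python sketch simulates the coin-flipping train described above (the step size and number of sampled walks are arbitrary illustrative choices):

import random

def mean_square_displacement(n_steps, b=1.0, n_walks=5000):
    # Average of x^2 over many independent 1D random walks of +/- b steps.
    total = 0.0
    for _ in range(n_walks):
        x = sum(random.choice((-b, b)) for _ in range(n_steps))
        total += x * x
    return total / n_walks

for N in (10, 100, 1000):
    print(N, mean_square_displacement(N))   # approaches N * b**2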
Random walks in space can be thought of as snapshots of the path taken by a random walker in time. One such example is the spatial configuration of long chain polymers.
There are two types of random walk in space: self-avoiding random walks , where the links of the polymer chain interact and do not overlap in space, and pure random walks, where the links of the polymer chain are non-interacting and links are free to lie on top of one another. The former type is most applicable to physical systems, but their solutions are harder to get at from first principles.
By considering a freely jointed, non-interacting polymer chain, the end-to-end vector is
where r i is the vector position of the i -th link in the chain.
As a result of the central limit theorem , if N ≫ 1 then we expect a Gaussian distribution for the end-to-end vector. We can also make statements of the statistics of the links themselves;
Using the statistics of the individual links, it is easily shown that ⟨ R → ⋅ R → ⟩ = N b 2 {\displaystyle \langle {\vec {R}}\cdot {\vec {R}}\rangle =Nb^{2}} .
Notice this last result is the same as that found for random walks in time.
Assuming, as stated, that the distribution of end-to-end vectors for a very large number of identical polymer chains is Gaussian, the probability distribution has the following form: P ( R → ) = ( 3 2 π N b 2 ) 3 / 2 exp ( − 3 R → ⋅ R → 2 N b 2 ) {\displaystyle P({\vec {R}})=\left({\frac {3}{2\pi Nb^{2}}}\right)^{3/2}\exp \left(-{\frac {3{\vec {R}}\cdot {\vec {R}}}{2Nb^{2}}}\right)}
What use is this to us? Recall that according to the principle of equally likely a priori probabilities, the number of microstates, Ω, at some physical value is directly proportional to the probability distribution at that physical value, viz ;
where c is an arbitrary proportionality constant. Given our distribution function, there is a maximum corresponding to R = 0 . Physically this amounts to there being more microstates which have an end-to-end vector of 0 than any other microstate. Now by considering F ( R → ) = − k B T ln ⁡ Ω ( R → ) + c o n s t a n t {\displaystyle F({\vec {R}})=-k_{B}T\ln \Omega ({\vec {R}})+\mathrm {constant} }
where F is the Helmholtz free energy , and it can be shown that F ( R → ) = 3 k B T 2 N b 2 R → ⋅ R → + c o n s t a n t {\displaystyle F({\vec {R}})={\frac {3k_{B}T}{2Nb^{2}}}\,{\vec {R}}\cdot {\vec {R}}+\mathrm {constant} }
which has the same form as the potential energy of a spring, obeying Hooke's law .
This result is known as the entropic spring result and amounts to saying that upon stretching a polymer chain you are doing work on the system to drag it away from its (preferred) equilibrium state. An example of this is a common elastic band, composed of long chain (rubber) polymers. By stretching the elastic band you are doing work on the system and the band behaves like a conventional spring, except that unlike the case with a metal spring, all of the work done appears immediately as thermal energy, much as in the thermodynamically similar case of compressing an ideal gas in a piston.
It might at first be astonishing that the work done in stretching the polymer chain can be related entirely to the change in entropy of the system as a result of the stretching. However, this is typical of systems that do not store any energy as potential energy, such as ideal gases. That such systems are entirely driven by entropy changes at a given temperature can be seen whenever they are allowed to do work on the surroundings (such as when an elastic band does work on the environment by contracting, or an ideal gas does work on the environment by expanding). Because the free energy change in such cases derives entirely from entropy change rather than internal (potential) energy conversion, in both cases the work done can be drawn entirely from thermal energy in the polymer, with 100% efficiency of conversion of thermal energy to work. In both the ideal gas and the polymer, this is made possible by a material entropy increase from contraction that makes up for the loss of entropy from absorption of the thermal energy, and cooling of the material.
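A small Python sketch of the Gaussian-chain (entropic spring) result, using the effective spring constant 3 k B T / ( N b 2 ) derived above and illustrative chain parameters that are not taken from the source:

kB = 1.380649e-23   # Boltzmann constant, J/K

def entropic_spring_constant(n_segments, segment_length, temperature):
    # Effective Hookean spring constant of a Gaussian chain: k = 3*kB*T / (N*b^2)
    return 3.0 * kB * temperature / (n_segments * segment_length ** 2)

def retraction_force(extension, n_segments, segment_length, temperature):
    # Entropic restoring force F = k * R for a chain held at end-to-end distance R.
    return entropic_spring_constant(n_segments, segment_length, temperature) * extension

# Assumed example: 1000 segments of 0.5 nm each, stretched to 50 nm at 300 K
print(retraction_force(50e-9, 1000, 0.5e-9, 300.0), "N")   # on the order of piconewtons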
Polymer scattering experiments are one of the main scientific methods used in chemistry , physics and other sciences to study the characteristics of polymeric systems: solutions , gels, compounds and more. As in most scattering experiments, it involves subjecting a polymeric sample to incident particles (with defined wavelengths), and studying the characteristics of the scattered particles: angular distribution, intensity polarization and so on. This method is quite simple and straightforward, and does not require special manipulations of the samples which may alter their properties, and hence compromise exact results.
As opposed to crystallographic scattering experiments, where the scatterer or "target" has very distinct order, which leads to well defined patterns (presenting Bragg peaks for example), the stochastic nature of polymer configurations and deformations (especially in a solution), gives rise to quite different results.
We consider a polymer as a chain of monomers , each with its position vector R i → {\displaystyle {\vec {R_{i}}}} and scattering amplitude a i {\displaystyle a_{i}} . For simplicity, it is worthwhile considering identical monomers in the chain, such that all a i = a {\displaystyle a_{i}=a} .
An incoming ray (of light / neutrons / X-ray etc.) has a wave vector (or momentum) k → i n c i d e n t {\textstyle {\vec {k}}_{incident}} , and is scattered by the polymer to the vector k → f i n a l {\textstyle {\vec {k}}_{final}} . This enables us to define the scattering vector k → ≡ k → f i n a l − k → i n c i d e n t {\textstyle {\vec {k}}\equiv {\vec {k}}_{final}-{\vec {k}}_{incident}} .
By coherently summing the contributions of all N {\displaystyle N} monomers, we get the scattering intensity from a single polymer , as a function of k → {\textstyle {\vec {k}}} : [ 1 ]
I ( k → ) = | a | 2 ∑ i , j = 1 N e i k → ⋅ ( R → i − R → j ) {\displaystyle I({\vec {k}})=|a|^{2}\sum _{i,j=1}^{N}e^{i{\vec {k}}\cdot ({\vec {R}}_{i}-{\vec {R}}_{j})}}
A dilute solution of a certain polymer has a unique feature: all polymers are considered independent from each other, so that interactions between polymers may be neglected. By illuminating such a solution with a ray of considerable width, a macroscopic number of chain conformations are being sampled simultaneously. In this situation the accessible observables are all ensemble averages , i.e. averages over all possible configurations and deformations of the polymer.
In such a solution, where the polymer density is low (dilute) enough, homogenous and isotropic (on average), intermolecular contributions to the structure factor are averaged out, and only the single-molecule/polymer structure factor is preserved:
S ( k → ) = 1 N 2 ⟨ ∑ i , j = 1 N e i k → ⋅ ( R → i − R → j ) ⟩ {\displaystyle S({\vec {k}})={\frac {1}{N^{2}}}\langle \sum _{i,j=1}^{N}e^{i{\vec {k}}\cdot ({\vec {R}}_{i}-{\vec {R}}_{j})}\rangle }
with ⟨ ⋅ ⟩ {\displaystyle \langle \cdot \rangle } representing the ensemble average. This reduces to the following for an isotropic system (which is typically the case):
S ( k → ) = 1 N 2 ⟨ ∑ i , j = 1 N sin k R i j k R i j ⟩ {\displaystyle S({\vec {k}})={\frac {1}{N^{2}}}\langle \sum _{i,j=1}^{N}{\frac {\sin kR_{ij}}{kR_{ij}}}\rangle }
where two more definitions were made: k ≡ | k → | {\displaystyle k\equiv |{\vec {k}}|} and R i j ≡ | R → i − R → j | {\displaystyle R_{ij}\equiv |{\vec {R}}_{i}-{\vec {R}}_{j}|} .
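The ensemble average above can be estimated numerically; the Python sketch below samples freely jointed chains and evaluates the isotropic single-chain structure factor (the chain length, step length and number of samples are arbitrary illustrative choices):

import numpy as np

def freely_jointed_chain(n_monomers, b=1.0, rng=np.random.default_rng(0)):
    # Monomer positions of a freely jointed chain: unit-length steps in random directions.
    steps = rng.normal(size=(n_monomers - 1, 3))
    steps *= b / np.linalg.norm(steps, axis=1, keepdims=True)
    return np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])

def structure_factor(k, n_monomers=100, n_samples=200):
    # S(k) = (1/N^2) * < sum_ij sin(k R_ij) / (k R_ij) >, averaged over conformations.
    rng = np.random.default_rng(42)
    s_total = 0.0
    for _ in range(n_samples):
        r = freely_jointed_chain(n_monomers, rng=rng)
        d = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)
        s_total += np.sinc(k * d / np.pi).sum() / n_monomers ** 2   # np.sinc(x) = sin(pi*x)/(pi*x)
    return s_total / n_samples

print(structure_factor(k=0.1))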
If the polymers of interest are ideal gaussian chains (or freely-jointed chains) , in the limit of very long chains (allows performing a sort of " continuum transition"), the calculation of the structure can be carried out explicitly and result in a sort of Debye function:
S D ( k → ) = 2 ( k R g ) 4 [ ( k R g ) 2 − 1 + e − ( k R g ) 2 ] {\displaystyle S_{D}({\vec {k}})={\frac {2}{(kR_{g})^{4}}}[(kR_{g})^{2}-1+e^{-(kR_{g})^{2}}]}
With R g {\displaystyle R_{g}} being the polymer's radius of gyration .
In many practical scenarios, the above formula is approximated by the (much more convenient) Lorentzian :
S D ( k → ) ≈ 1 1 + 1 2 ( k R g ) 2 {\displaystyle S_{D}({\vec {k}})\approx {\frac {1}{1+{\frac {1}{2}}(kR_{g})^{2}}}}
which has a relative error of no more than 15% compared to the exact expression. [ 1 ]
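The quoted error bound can be checked directly; a short Python sketch comparing the Debye function with its Lorentzian approximation over a range of k R g :

import numpy as np

def debye(x):
    # Debye structure factor as a function of x = k * R_g
    x2 = x ** 2
    return 2.0 / x2 ** 2 * (x2 - 1.0 + np.exp(-x2))

def lorentzian(x):
    # Common Lorentzian approximation 1 / (1 + x^2 / 2)
    return 1.0 / (1.0 + 0.5 * x ** 2)

x = np.linspace(0.01, 10.0, 2000)
relative_error = np.abs(lorentzian(x) - debye(x)) / debye(x)
print("maximum relative error:", relative_error.max())   # stays comfortably below 15 %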
The calculation of the structure factor for cases differing from ideal polymer chains can be quite cumbersome, and sometimes impossible to complete analytically. However, when the small-angle scattering condition is met, k R g ≪ 1 {\displaystyle kR_{g}\ll 1} , the sinc term can be expanded so one gets:
S ( k → ) ≈ 1 N 2 ⟨ ∑ i , j = 1 N 1 − 1 6 ( k R i j ) 2 ⟩ {\displaystyle S({\vec {k}})\approx {\frac {1}{N^{2}}}\langle \sum _{i,j=1}^{N}1-{\frac {1}{6}}(kR_{ij})^{2}\rangle }
and by utilising the definition of the radius of gyration:
S ( k → ) ≈ 1 − 1 3 ( k R g ) 2 ≈ e − 1 3 ( k R g ) 2 {\displaystyle S({\vec {k}})\approx 1-{\frac {1}{3}}(kR_{g})^{2}\approx e^{-{\frac {1}{3}}(kR_{g})^{2}}}
where the final transition utilises once again the small-angle approximation .
We can thus approximate the scattering intensity in the small-angle regime as:
log I ( k ) = − 1 3 k 2 R g 2 + c o n s t . {\displaystyle \log I(k)=-{\frac {1}{3}}k^{2}R_{g}^{2}+const.}
and by plotting log I ( k ) {\displaystyle \log I(k)} vs. k 2 {\displaystyle k^{2}} , a so-called "Guinier plot", we may determine the radius of gyration from the slope of this linear curve. This measure is one of many examples of how scattering experiments of polymers can reveal basic properties of those polymer chains.
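The Guinier analysis can be illustrated with synthetic data; the Python sketch below generates intensities from the Guinier law itself (the chosen R g and k range are arbitrary, assumed values) and recovers R g from the slope of log I versus k 2 :

import numpy as np

rg_true = 30.0                       # assumed radius of gyration, arbitrary length units
k = np.linspace(0.001, 0.01, 50)     # scattering vectors chosen so that k * R_g << 1
intensity = 100.0 * np.exp(-(k * rg_true) ** 2 / 3.0)

# Guinier plot: log I(k) versus k^2 is a straight line with slope -R_g^2 / 3
slope, intercept = np.polyfit(k ** 2, np.log(intensity), 1)
rg_fitted = np.sqrt(-3.0 * slope)
print("fitted R_g:", rg_fitted)      # recovers the assumed value of 30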
In order to reap the benefits of working in this small-angle regime, one must take into consideration:
The ratio R g λ {\displaystyle {\frac {R_{g}}{\lambda }}} will determine the available angular spectrum of this regime. To see this one may consider the case of elastic scattering ( even approximately elastic ). If the scattering angle is θ {\displaystyle \theta } , we may express k {\displaystyle k} as:
k = 4 π λ sin θ 2 {\displaystyle k={\frac {4\pi }{\lambda }}\sin {\frac {\theta }{2}}}
so the small-angle condition becomes R g λ sin θ 2 ≪ 1 {\displaystyle {\frac {R_{g}}{\lambda }}\sin {\frac {\theta }{2}}\ll 1} , determining the relevant angles.
- For visible light, λ ∼ 5000 A o {\displaystyle \lambda \sim 5000{\overset {o}{A}}}
- For neutrons, λ ∼ 3 A o {\displaystyle \lambda \sim 3{\overset {o}{A}}}
- For "hard" X-rays, λ ∼ 1 A o {\displaystyle \lambda \sim 1{\overset {o}{A}}}
while typical R g {\displaystyle R_{g}} values for polymers range over 10 − 100 A o {\displaystyle 10-100{\overset {o}{A}}} . This makes small-angle measurements with neutrons and X-rays a bit more tedious, as very small angles are needed, and the data at those angles is often "overpowered" by the θ = 0 {\displaystyle \theta =0} spot emerging in usual scattering experiments. The problem is mitigated by conducting longer experiments with more exposure time, which allows the required data to "intensify". One must take care, though, not to let the prolonged exposure to high levels of radiation damage the polymers (which might be a real problem when considering biological polymer samples – proteins , for example).
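The practical consequence of the small-angle condition can be illustrated numerically; assuming a chain with R g = 50 Å (an illustrative value), the Python sketch below estimates the scattering angle at which k R g reaches unity for each probe:

import math

def angle_where_kRg_reaches(limit, wavelength, rg):
    # Solve k * R_g = limit with k = (4*pi / lambda) * sin(theta / 2); returns theta in degrees.
    s = limit * wavelength / (4.0 * math.pi * rg)
    return math.degrees(2.0 * math.asin(min(s, 1.0)))

rg = 50.0   # angstroms, assumed typical polymer radius of gyration
for probe, wavelength in (("visible light", 5000.0), ("neutrons", 3.0), ("hard X-rays", 1.0)):
    # For visible light the condition holds at every angle (the result saturates at 180 degrees).
    print(f"{probe}: k*R_g < 1 requires theta below about {angle_where_kRg_reaches(1.0, wavelength, rg):.2f} degrees")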
On the other hand, to resolve smaller polymers and structural subtleties, one cannot always resort to using long-wavelength rays, as the diffraction limit comes into play.
The main purpose of such scattering experiments involving polymers is to study unique properties of the sample of interest: | https://en.wikipedia.org/wiki/Polymer_scattering |
Polymer science or macromolecular science is a subfield of materials science concerned with polymers , primarily synthetic polymers such as plastics and elastomers . The field of polymer science includes researchers in multiple disciplines including chemistry , physics , and engineering .
This science comprises three main sub-disciplines:
The first modern example of polymer science is Henri Braconnot 's work in the 1830s. Henri, along with Christian Schönbein and others, developed derivatives of the natural polymer cellulose , producing new, semi-synthetic materials, such as celluloid and cellulose acetate . The term "polymer" was coined in 1833 by Jöns Jakob Berzelius , though Berzelius did little that would be considered polymer science in the modern sense. In the 1840s, Friedrich Ludersdorf and Nathaniel Hayward independently discovered that adding sulfur to raw natural rubber ( polyisoprene ) helped prevent the material from becoming sticky. In 1844 Charles Goodyear received a U.S. patent for vulcanizing natural rubber with sulfur and heat. Thomas Hancock had received a patent for the same process in the UK the year before. This process strengthened natural rubber and prevented it from melting with heat without losing flexibility. This made practical products such as waterproofed articles possible. It also facilitated practical manufacture of such rubberized materials. Vulcanized rubber represents the first commercially successful product of polymer research. In 1884 Hilaire de Chardonnet started the first artificial fiber plant based on regenerated cellulose , or viscose rayon , as a substitute for silk , but it was very flammable. [ 2 ] In 1907 Leo Baekeland invented the first synthetic plastic, a thermosetting phenol – formaldehyde resin called Bakelite . [ 3 ]
Despite significant advances in polymer synthesis, the molecular nature of polymers was not understood until the work of Hermann Staudinger in 1922. [ 4 ] Prior to Staudinger's work, polymers were understood in terms of the association theory or aggregate theory, which originated with Thomas Graham in 1861. Graham proposed that cellulose and other polymers were colloids , aggregates of molecules having small molecular mass connected by an unknown intermolecular force. Hermann Staudinger was the first to propose that polymers consisted of long chains of atoms held together by covalent bonds . It took over a decade for Staudinger's work to gain wide acceptance in the scientific community, work for which he was awarded the Nobel Prize in 1953.
The World War II era marked the emergence of a strong commercial polymer industry. The limited or restricted supply of natural materials such as silk and rubber necessitated the increased production of synthetic substitutes, such as nylon [ 5 ] and synthetic rubber . [ 6 ] In the intervening years, the development of advanced polymers such as Kevlar and Teflon have continued to fuel a strong and growing polymer industry.
The growth in industrial applications was mirrored by the establishment of strong academic programs and research institutes. In 1946, Herman Mark established the Polymer Research Institute at Brooklyn Polytechnic , the first research facility in the United States dedicated to polymer research. Mark is also recognized as a pioneer in establishing curriculum and pedagogy for the field of polymer science. [ 7 ] In 1950, the POLY division of the American Chemical Society was formed, and has since grown to the second-largest division in this association with nearly 8,000 members. Fred W. Billmeyer, Jr., a professor of analytical chemistry, once said that "although the scarcity of education in polymer science is slowly diminishing but it is still evident in many areas. What is most unfortunate is that it appears to exist, not because of a lack of awareness but, rather, a lack of interest." [ 8 ]
2005 (Chemistry) Robert Grubbs , Richard Schrock , Yves Chauvin for olefin metathesis. [ 9 ]
2002 (Chemistry) John Bennett Fenn , Koichi Tanaka , and Kurt Wüthrich for the development of methods for identification and structure analyses of biological macromolecules . [ 10 ]
2000 (Chemistry) Alan G. MacDiarmid , Alan J. Heeger , and Hideki Shirakawa for work on conductive polymers , contributing to the advent of molecular electronics . [ 11 ]
1991 (Physics) Pierre-Gilles de Gennes for developing a generalized theory of phase transitions with particular applications to describing ordering and phase transitions in polymers. [ 12 ]
1974 (Chemistry) Paul J. Flory for contributions to theoretical polymer chemistry. [ 13 ]
1963 (Chemistry) Giulio Natta and Karl Ziegler for contributions in polymer synthesis. ( Ziegler-Natta catalysis ). [ 14 ]
1953 (Chemistry) Hermann Staudinger for contributions to the understanding of macromolecular chemistry. [ 15 ] | https://en.wikipedia.org/wiki/Polymer_science |
Polymer soil stabilization refers to the addition of polymers to improve the physical properties of soils, most often for geotechnical engineering , construction, or agricultural projects. [ 1 ] Even at very small concentrations within soils, various polymers have been shown to increase water retention and reduce erosion, increase soil shear strength , and support soil structure. [ 2 ] A wide range of polymers have been used to address problems ranging from the prevention of desertification to the reinforcement of roadbeds . [ 3 ] [ 1 ] [ 4 ]
Polymers that have been tested for soil stabilization effects include a range of synthetic polymers and biopolymers . [ 1 ] [ 5 ] Biopolymers in particular offer a more eco-friendly alternative to traditional chemical additives, such as ordinary cement , which may generate a large amount of carbon dioxide during production or cause lasting environmental damage . [ 1 ] [ 6 ]
Polymers mainly affect the aggregation and strength of soils through their interactions with fine clay particles. Coatings of adsorbed polymers on clays can increase their steric stabilization by preventing clay particles from approaching each other as closely. Alternatively, polymer molecules that bond with multiple clay particles promote flocculation . [ 2 ] Hydrogel networks can result in more indirect strengthening within soils by creating a scaffolding for soil particles. Additional strength can be imparted to polymer networks within soils through chemical cross-linking and curing . [ 1 ] [ 5 ]
Synthetic polymers began replacing other chemical binders for soil stabilization in agriculture in the late 20th century. [ 1 ] Compared to traditional chemical binders, polymer soil additives can achieve the same amount of strengthening at much lower concentrations – for example, mixtures of 0.5-1% of various biopolymers have strength levels that match or exceed those of 10% cement mixtures in soils. [ 1 ] Synthetic polymers, including geopolymers , and biopolymers , have been tested for their beneficial interactions with soils. Methods for introducing polymers into soils include mixing, injecting, spraying, and grouting. [ 1 ] Liquid polymers, sold as concentrated solutions, can be applied deep within the soil through pressure injection or applied directly to uncompacted soil. [ 5 ]
Alumino-silicate based, synthetic geopolymers provide many of the same binding properties as Portland cement . Compared to other polymer additives, many geopolymers are quite durable, with high mechanical strength and thermal stability. They react readily with calcium hydroxide in water, which allows them to act as cementitious binders. Geopolymers offer the advantage of being more environmentally friendly and energy-efficient to produce than traditional chemical additives, and can be synthesized from waste products such as mine tailings or fly ash . [ 7 ] When these waste products are treated with an alkaline reagent, the aluminosilicate rapidly depolymerizes and polycondenses into a rigid three dimensional polymeric structure that coats and strengthens soil pores . [ 8 ] Geopolymers have been applied to stabilize gypseous soils because of their resistance to sulfur and other chemical attacks, which weaken traditional cement. [ 9 ]
Biopolymers are synthesized as a result of biological processes, and are often less harmful to the landscape and its biota because of their natural origins. Of the three types of biopolymers, polysaccharides have proven more useful as soil binders than polynucleotides or polypeptides . Biopolymers that have been tested for use in soil stabilization include cellulose , starch , chitosan , xanthan , curdlan , and beta-glucan . [ 1 ] Some biopolymers are sensitive to water, and wetter soils exhibit weaker biopolymer-clay cohesion. When wetted, gel-type biopolymers form hydrogels, which have decreased tensile strength but significantly higher compressive strength compared to the original soil. Protein -based biopolymers, though less common, have been used as an alternative to polysaccharides for projects requiring greater water resistance. [ 1 ]
Biopolymers may increasingly replace synthetic polymers for soil stabilization projects. They are more environmentally friendly than many other chemical soil additives, and can achieve the same amount of strengthening at much lower concentrations. Increasing use of biopolymers could offset the carbon dioxide emissions associated with cement production, which can be as high as 1.25 tons of carbon dioxide per ton of cement. [ 1 ]
Polymer treatments modify the size, shape, and cohesion of soil aggregates by changing the interactions between soil particles. Because polymer-soil interactions occur on the surfaces of soil particles, the amount of surface area in the soil (in other words, its dominant particle size ) is of great importance. [ 5 ] Polymers have only weak interactions with the large sand - and silt-sized particles of soil, while they bond directly to finer clays. [ 1 ] Although polymers mainly interact with the clay fraction of soils, they do change the properties of sandy soils to a lesser degree. [ 2 ] Polymer structure dictates how they will interact with clay particles. For example, block copolymers result in very different soil properties than homopolymers , as do ionic and nonionic polymers. Additionally, the mechanisms by which different polymers adsorb onto clay particle surfaces result in different soil properties and responses. [ 2 ]
Polymers on the surfaces of the colloidal fraction of soils promote steric stabilization of those particles by preventing them from approaching each other and aggregating. This effect is seen in a variety of aqueous and nonaqueous environments, and is not affected by electrolytes in solution. [ 2 ] The degree of steric stabilization depends on the amount of clay surface covered by adsorbed polymers, the strength of the polymer bond, the thickness of the polymer layer, and the favorability of the solvent for the polymer loops and tails. Block and graft copolymers, made up of two different homopolymers with differing solubilities in the suspension medium, are most often used for steric stabilization. When synthesized to have alternating regions of hydrophobic and hydrophilic monomers, copolymers can stabilize the suspension because their hydrophobic group adsorbs strongly to the colloid surface while the hydrophilic group is attracted to the solvent. In general, the adsorption of polymers to clay surfaces is entropically favored because one polymer molecule displaces many water molecules which were previously bound to the soil particle. [ 2 ]
Polymer and clay particle suspensions have been used to understand the mechanism of this steric stabilization in soils. Consider a homopolymer adsorbed to the surfaces of clay particles in suspension. As the clay particles approach each other to within two times the thickness of the polymer layers, the loops and tails of the polymers on one surface will start to block those on the other surface, leading to a decrease in configurational entropy . This is unfavorable because it increases the Gibbs free energy of the system, and it will be more energetically favorable for the colloid particles to remain farther apart. [ 2 ]
Overall, the free energy of steric interactions (Δ G s ) can be expressed as a function of both elastic repulsive energy (Δ G el ) and the free energy of mixing (Δ G mix ): Δ G s = Δ G el + Δ G mix .
The elastic repulsive energy (Δ G el ) increases as more polymers adsorb to the surfaces of clay particles. This can be modeled as:
where k B is the Boltzmann constant , T is the temperature, Γ is the number of adsorbed polymers per unit surface area, and Ω( h ) and Ω(∞) are the number of available conformations at h and infinite distances. Δ G s due to steric interactions is also a function of the free energy of mixing (Δ G mix ) . Most commonly, this will favor greater distances between polymer molecules in solution. [ 2 ]
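A minimal numerical sketch of the elastic term is given below. It assumes the commonly quoted form Δ G el = 2Γ k B T ln[Ω(∞)/Ω( h )]; the prefactor of 2 and the illustrative numbers are assumptions made here for demonstration, not values taken from this article.

import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def delta_g_elastic(gamma, temperature, omega_h, omega_inf):
    """Elastic repulsion (per unit area) when adsorbed polymer layers overlap.
    gamma: adsorbed chains per m^2; omega_h, omega_inf: numbers of available
    conformations at separation h and at infinite separation (omega_h <= omega_inf).
    The prefactor of 2 (one contribution per approaching surface) is an assumption."""
    return 2.0 * gamma * K_B * temperature * np.log(omega_inf / omega_h)

def delta_g_steric(dg_elastic, dg_mix):
    """Total steric free energy as the sum of the elastic and mixing terms."""
    return dg_elastic + dg_mix

# Hypothetical numbers: halving the available conformations on close approach.
dg_el = delta_g_elastic(gamma=1e17, temperature=298.0, omega_h=1.0, omega_inf=2.0)
print(f"elastic repulsion ~ {dg_el:.1e} J/m^2")  # positive, i.e. repulsive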
Alternatively, under different conditions, polymers can enhance flocculation . Particle aggregates are held together more strongly by polymers than by electrolytes. Such interactions are called bridging flocculation because a single polymer chain is linked to multiple soil particles. Examples of common bridging polymers include polyacrylamide (PAM) and polyethylene oxide . In one study, PAM was found to increase the size of kaolinite flocs in suspension experiments from 10 μm to several millimeters. [ 10 ] The maximum strength benefits of flocculation are achieved when polymers cover a surface area equivalent to half the polymer saturation capacity. [ 2 ] Addition of polymer beyond this point causes the polymer to act as a lubricant, allowing the soil particles to slip across each other. [ 5 ]
Biopolymers have been shown to strengthen soils both by cohesion with clay particles to form polymer-clay matrices and by promoting the aggregation of coarser soil particles with each other within the polymer-clay matrix. The hydroxyl groups on polysaccharide biopolymers allow them to form hydrogen bonds directly with charged clay particles (in dry soils), as well as with soil pore water itself (in moist soil). These interactions are promoted by the high surface area of both the biopolymers themselves and the clay particles they bond with. [ 1 ] When ionized polymers (such as many biopolymers) with the same charge as clay particles adsorb to their surface, they increase the electrical double layer repulsion. [ 2 ]
The strength of polymer chains can be enhanced by cross-linking , which increases the interactions between chains through bonding with another reactant. [ 11 ] The high mechanical strength of soil/polymer mixtures after cross-linking can make many polymers more suited for soil stabilization projects. [ 1 ] Curing time after polymer addition can also affect the strength of the polymer-soil structures formed. [ 12 ] After seven days of curing, the liquid polymer SS299 resulted in soil with two times the compressive strength of untreated soil. Some polymers can also acquire strength much more rapidly during curing than traditional, non-polymeric chemical additives. [ 5 ]
Soil characteristics that have been altered by addition of polymers include compressive strength, volume stability, hydraulic durability, and conductivity. [ 5 ] Polymers can help prevent soil erosion and increase infiltration of water by strengthening soil aggregates and supporting soil structure. The properties of the soil itself are a dominant control on the ability of polymers to interact with it. A study of the cationic, alkaline polymer SS299 (a commercially produced additive) found that the properties of treated soils depend on the plasticity index of the original soil, which reflects its clay content. [ 5 ]
Hydrogel swelling of biopolymers reduces the amount of soil pore space, restricting the flow of water and suiting polymer hydrogels for construction projects seeking to minimize water seepage and support vegetation growth. [ 13 ] Biopolymers can be added to soils along with synthetic polymers to utilize the properties of both polymers. By increasing the water retention and infiltration rates in soils, the addition of biopolymers increases the availability of water for plants. [ 1 ] This is particularly applicable in arid regions like deserts where droughts leave soils susceptible to high rates of erosion during precipitation events. By retaining water, the enhanced soils reduce runoff and its accompanying erosion. [ 3 ] PAM has been widely applied as a soil stabilizer for agriculture, both to retain water in fields and to improve run-off water quality by reducing the amount of sediment entering rivers and streams. [ 14 ] | https://en.wikipedia.org/wiki/Polymer_soil_stabilization |
Polymer solutions are solutions containing dissolved polymers . [ 1 ] These may be liquid solutions (e.g. in aqueous solution ), or solid solutions (e.g. a substance which has been plasticized). [ 2 ]
The introduction into the polymer of small amounts of a solvent ( plasticizer ) reduces the temperature of glass transition , the yield temperature , and the viscosity of a melt . [ 3 ] An understanding of the thermodynamics of a polymer solution is critical to prediction of its behavior in manufacturing processes — for example, its shrinkage or expansion in injection molding processes, or whether pigments and solvents will mix evenly with a polymer in the manufacture of paints and coatings. [ 4 ] A recent theory on the viscosity of polymer solutions gives a physical explanation for various well-known empirical relations and numerical values including the Huggins constant, but reveals also novel simple concentration and molar mass dependence. [ 5 ]
Polymer solutions are used in producing fibers , films , glues , lacquers , paints , and other items made of polymer materials . Thin layers of polymer solution can be used to produce light-emitting devices . [ 6 ] Guar polymer solution gels can be used in hydraulic fracturing ("fracking"). [ 7 ]
| https://en.wikipedia.org/wiki/Polymer_solution |
Taking clues from spongy toddler toys that absorb water and inflate to larger sizes, [ 1 ] scientists at the Mayo Clinical Research Centre, Rochester, Minnesota, United States have developed biodegradable polymer grafts that, when surgically placed in damaged vertebrae, are intended to grow to just the right size and shape to fix the spinal column. [ 2 ]
Any problem with the backbone of a vertebrate is often considered a potential disability, as it can limit a person's ability to move around their surroundings, cause significant pain, and be responsible for mental distress . This has been researched by Lichun Lu and Xifeng Liu, scientists from Mayo Clinic's college of medicine, who have developed a novel spinal graft that, once surgically placed in the body, grows to just the right size and shape to fix the spinal column. They presented their work at the 251st National Meeting & Exposition of the non-profit organization American Chemical Society (ACS). [ 3 ]
Current treatments for spinal tumours are considered prohibitively expensive and invasive. When cancer metastasizes, it frequently settles in the spinal column. A different approach to replacing damaged vertebrae has therefore been investigated. The polymer sponge researchers were reported to be preparing to present their work in March 2016 at a meeting of the American Chemical Society (ACS). [ 4 ]
Doctors can cut out the affected bone tissue (or replace it outright, as in the Sydney case), but that leaves large gaps in the spine. Normally, doctors would either have to open the chest cavity and access the spine from the far side (which entails a lengthy recovery and a high probability of complications), or they would make a small incision in the neck or back and inject expandable titanium rods into the bone gap (which is very expensive because of the titanium). This new technique combines the easy access and short recovery of the titanium rod method with the low cost of the open chest operation. [ 2 ] The use of sponges for the treatment of such problems has long been suggested. [ citation needed ]
Doctors cut a small hole in the patient's neck or back and inject a hydrogel polymer into the bone gap, much as they would a titanium rod. This polymer absorbs fluids from within the wound and grows to fill the gap. Doctors control how far the polymer expands in any given direction by first inserting a "cage"—essentially a pre-expanded shell that the polymer fills as it spreads, much as a wooden form keeps freshly poured concrete in place until it hardens. Once the polymer fills the cage, which takes 5 to 10 minutes on average, it sets and hardens into a viable prosthetic. From there, surrounding bone tissue grows into and through the polymer, reinforcing and cementing it in place. [ 2 ]
The sponge-like polymer, polycaprolactone (PCL) shows promise as a medical material that can be used to fill gaps in human bones and serve as a scaffold to promote new bone growth. [ citation needed ] Injuries, birth defects (such as cleft lip and palates ), or the removal of tumors in the case of bone cancer can create gaps in bone that are too large to heal naturally. The gaps may dramatically alter a person's phenotypic appearance when they occur in the head, face, or jaw.
While there might be a strong possibility that a transplant is rejected , various complications may be averted by the use of techniques such as bone marrow transplantation , blood transfusion , T lymphocyte modification [ 5 ] and similar techniques.
The use of polymer sponges in this field is still in its infancy, and research into the biotechnological applications needed to make the concept available to humans and animals may require more substantial funding. [ 6 ]
Polymers and plastics known as polymer substrates are used for banknotes and other everyday products. Polymer banknotes are more durable than paper, do not become soaked in liquids, and are harder (though not impossible) to counterfeit. Countries whose entire banknote production is in polymer are: Australia , Romania , Vietnam , the United Kingdom and New Zealand . Other countries with a partial polymer and paper issue include Papua New Guinea , Samoa , Solomon Islands , Mexico , Zambia , Brunei , Malaysia , Singapore , Nigeria , Chile , and Nepal . [ 1 ] The material is also used in commemorative notes in some other countries. The process of polymer substrate creation was developed by Australia's CSIRO .
Some countries, such as Bulgaria, have issued banknotes combining paper and polymer, for example the 200 lev banknote.
The polymerase chain reaction ( PCR ) is a method widely used to make millions to billions of copies of a specific DNA sample rapidly, allowing scientists to amplify a very small sample of DNA (or a part of it) sufficiently to enable detailed study. PCR was invented in 1983 by American biochemist Kary Mullis at Cetus Corporation . Mullis and biochemist Michael Smith , who had developed other essential ways of manipulating DNA, were jointly awarded the Nobel Prize in Chemistry in 1993. [ 1 ]
PCR is fundamental to many of the procedures used in genetic testing and research, including analysis of ancient samples of DNA and identification of infectious agents. Using PCR, copies of very small amounts of DNA sequences are exponentially amplified in a series of cycles of temperature changes. PCR is now a common and often indispensable technique used in medical laboratory research for a broad variety of applications including biomedical research and forensic science . [ 2 ] [ 3 ]
The majority of PCR methods rely on thermal cycling . Thermal cycling exposes reagents to repeated cycles of heating and cooling to permit different temperature-dependent reactions—specifically, DNA melting and enzyme -driven DNA replication . PCR employs two main reagents— primers (which are short single strand DNA fragments known as oligonucleotides that are a complementary sequence to the target DNA region) and a thermostable DNA polymerase . In the first step of PCR, the two strands of the DNA double helix are physically separated at a high temperature in a process called nucleic acid denaturation . In the second step, the temperature is lowered and the primers bind to the complementary sequences of DNA. The two DNA strands then become templates for DNA polymerase to enzymatically assemble a new DNA strand from free nucleotides , the building blocks of DNA. As PCR progresses, the DNA generated is itself used as a template for replication, setting in motion a chain reaction in which the original DNA template is exponentially amplified.
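The exponential character of the reaction can be illustrated with a minimal model in which each thermal cycle multiplies the number of template copies by (1 + efficiency); the efficiency value and the plateau caveat in the sketch below are illustrative assumptions, not parameters of any particular protocol.

def pcr_copies(initial_copies, cycles, efficiency=0.95):
    """Idealized chain reaction: each cycle multiplies the template count by
    (1 + efficiency), where efficiency = 1.0 would be perfect doubling.
    Real reactions eventually plateau as primers and dNTPs are consumed,
    which this simple model ignores."""
    copies = float(initial_copies)
    for _ in range(cycles):
        copies *= 1.0 + efficiency
    return copies

# A handful of template molecules becomes billions of copies in ~30 cycles.
print(f"{pcr_copies(initial_copies=10, cycles=30):.2e}")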
Almost all PCR applications employ a heat-stable DNA polymerase , such as Taq polymerase , an enzyme originally isolated from the thermophilic bacterium Thermus aquaticus . If the polymerase used was heat-susceptible, it would denature under the high temperatures of the denaturation step. Before the use of Taq polymerase, DNA polymerase had to be manually added every cycle, which was a tedious and costly process. [ 4 ]
Applications of the technique include DNA cloning for sequencing , gene cloning and manipulation, gene mutagenesis; construction of DNA-based phylogenies , or functional analysis of genes ; diagnosis and monitoring of genetic disorders ; amplification of ancient DNA; [ 5 ] analysis of genetic fingerprints for DNA profiling (for example, in forensic science and parentage testing ); and detection of pathogens in nucleic acid tests for the diagnosis of infectious diseases .
PCR amplifies a specific region of a DNA strand (the DNA target). Most PCR methods amplify DNA fragments of between 0.1 and 10 kilo-base pairs (kbp) in length, although some techniques allow for amplification of fragments up to 40 kbp. [ 6 ] The amount of amplified product is determined by the available substrates in the reaction, which becomes limiting as the reaction progresses. [ 7 ]
A basic PCR set-up requires several components and reagents, [ 8 ] including:
The reaction is commonly carried out in a volume of 10–200 μL in small reaction tubes (0.2–0.5 mL volumes) in a thermal cycler . The thermal cycler heats and cools the reaction tubes to achieve the temperatures required at each step of the reaction (see below). Many modern thermal cyclers make use of a Peltier device , which permits both heating and cooling of the block holding the PCR tubes simply by reversing the device's electric current. Thin-walled reaction tubes permit favorable thermal conductivity to allow for rapid thermal equilibrium. Most thermal cyclers have heated lids to prevent condensation at the top of the reaction tube. Older thermal cyclers lacking a heated lid require a layer of oil on top of the reaction mixture or a ball of wax inside the tube. [ citation needed ]
Typically, PCR consists of a series of 20–40 repeated temperature changes, called thermal cycles, with each cycle commonly consisting of two or three discrete temperature steps (see figure below). The cycling is often preceded by a single temperature step at a very high temperature (>90 °C (194 °F)), and followed by one hold at the end for final product extension or brief storage. The temperatures used and the length of time they are applied in each cycle depend on a variety of parameters, including the enzyme used for DNA synthesis, the concentration of bivalent ions and dNTPs in the reaction, and the melting temperature ( T m ) of the primers. [ 12 ] The individual steps common to most PCR methods are as follows:
To check whether the PCR successfully generated the anticipated DNA target region (also sometimes referred to as the amplimer or amplicon ), agarose gel electrophoresis may be employed for size separation of the PCR products. The size of the PCR products is determined by comparison with a DNA ladder , a molecular weight marker which contains DNA fragments of known sizes, which runs on the gel alongside the PCR products.
As with other chemical reactions, the reaction rate and efficiency of PCR are affected by limiting factors. Thus, the entire PCR process can further be divided into three stages based on reaction progress:
In practice, PCR can fail for various reasons, such as insufficient sensitivity or contamination. [ 17 ] [ 18 ] Contamination with extraneous DNA can lead to spurious products and is addressed with lab protocols and procedures that separate pre-PCR mixtures from potential DNA contaminants. [ 8 ] For instance, if DNA from a crime scene is analyzed, a single DNA molecule from lab personnel could be amplified and misguide the investigation. Hence, PCR-setup areas are separated from areas for the analysis or purification of PCR products, disposable plasticware is used, and work surfaces are thoroughly cleaned between reaction setups.
Specificity can be adjusted by experimental conditions so that no spurious products are generated. Primer-design techniques are important in improving PCR product yield and in avoiding the formation of unspecific products. The usage of alternate buffer components or polymerase enzymes can help with amplification of long or otherwise problematic regions of DNA. For instance, Q5 polymerase is said to be ≈280 times less error-prone than Taq polymerase. [ 19 ] [ 20 ] Both the running parameters (e.g. temperature and duration of cycles), or the addition of reagents, such as formamide , may increase the specificity and yield of PCR. [ 21 ] Computer simulations of theoretical PCR results ( Electronic PCR ) may be performed to assist in primer design. [ 22 ]
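As a toy illustration of one routine primer-design check, the sketch below estimates a primer's melting temperature with the simple Wallace rule (2 °C per A/T, 4 °C per G/C). Real primer-design software normally uses nearest-neighbor thermodynamics, and the example sequence here is hypothetical.

def wallace_tm(primer):
    """Rough melting-temperature estimate for short oligonucleotides
    (Wallace rule: 2 degrees C per A/T, 4 degrees C per G/C). Primer-design
    software normally uses nearest-neighbor thermodynamics instead."""
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

def gc_fraction(primer):
    """Fraction of G/C bases, another routine primer-design check."""
    primer = primer.upper()
    return (primer.count("G") + primer.count("C")) / len(primer)

forward = "AGCGGATAACAATTTCACACAGGA"   # hypothetical example primer
print(wallace_tm(forward), f"{gc_fraction(forward):.0%}")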
PCR allows isolation of DNA fragments from genomic DNA by selective amplification of a specific region of DNA. This use of PCR augments many methods, such as generating hybridization probes for Southern or northern hybridization and DNA cloning , which require larger amounts of DNA representing a specific DNA region. PCR supplies these techniques with large amounts of pure DNA, enabling analysis of DNA samples even from very small amounts of starting material. [ citation needed ]
Other applications of PCR include DNA sequencing to determine unknown PCR-amplified sequences in which one of the amplification primers may be used in Sanger sequencing , isolation of a DNA sequence to expedite recombinant DNA technologies involving the insertion of a DNA sequence into a plasmid , phage , or cosmid (depending on size) or the genetic material of another organism. Bacterial colonies (such as E. coli ) can be rapidly screened by PCR for correct DNA vector constructs. [ 23 ] PCR may also be used for genetic fingerprinting ; a forensic technique used to identify a person or organism by comparing experimental DNAs through different PCR-based methods. [ citation needed ]
Some PCR fingerprint methods have high discriminative power and can be used to identify genetic relationships between individuals, such as parent-child or between siblings, and are used in paternity testing (Fig. 4). This technique may also be used to determine evolutionary relationships among organisms when certain molecular clocks are used (i.e. the 16S rRNA and recA genes of microorganisms). [ 24 ]
Because PCR amplifies the regions of DNA that it targets, PCR can be used to analyze extremely small amounts of sample. This is often critical for forensic analysis , when only a trace amount of DNA is available as evidence. PCR may also be used in the analysis of ancient DNA that is tens of thousands of years old. These PCR-based techniques have been successfully used on animals, such as a forty-thousand-year-old mammoth , and also on human DNA, in applications ranging from the analysis of Egyptian mummies to the identification of a Russian tsar and the body of English king Richard III . [ 25 ]
Quantitative PCR or Real Time PCR (qPCR, [ 26 ] not to be confused with RT-PCR ) methods allow the estimation of the amount of a given sequence present in a sample—a technique often applied to quantitatively determine levels of gene expression . Quantitative PCR is an established tool for DNA quantification that measures the accumulation of DNA product after each round of PCR amplification.
qPCR allows the quantification and detection of a specific DNA sequence in real time since it measures concentration while the synthesis process is taking place. There are two methods for simultaneous detection and quantification. The first method consists of using fluorescent dyes that are retained nonspecifically in between the double strands. The second method involves probes that code for specific sequences and are fluorescently labeled. Detection of DNA using these methods can only be seen after the hybridization of probes with its complementary DNA (cDNA) takes place. An interesting technique combination is real-time PCR and reverse transcription. This sophisticated technique, called RT-qPCR, allows for the quantification of a small quantity of RNA. Through this combined technique, mRNA is converted to cDNA, which is further quantified using qPCR. This technique lowers the possibility of error at the end point of PCR, [ 27 ] increasing chances for detection of genes associated with genetic diseases such as cancer. [ 5 ] Laboratories use RT-qPCR for the purpose of sensitively measuring gene regulation. The mathematical foundations for the reliable quantification of the PCR [ 28 ] and RT-qPCR [ 29 ] facilitate the implementation of accurate fitting procedures of experimental data in research, medical, diagnostic and infectious disease applications. [ 30 ] [ 31 ] [ 32 ] [ 33 ]
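A common way such cycle-threshold (Ct, or Cq) values are turned into relative expression levels is the 2^−ΔΔCt (Livak) method, sketched below with hypothetical values; it assumes amplification efficiencies close to 100% for both the target and the reference assay.

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method: normalize the target gene to a
    reference gene within each sample, then compare the treated sample with the
    control. Assumes amplification efficiencies close to 100% for both assays."""
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** (-(delta_ct_treated - delta_ct_control))

# Hypothetical Ct values: the target crosses threshold two cycles earlier in the
# treated sample, i.e. roughly four-fold higher expression.
print(fold_change_ddct(24.0, 18.0, 26.0, 18.0))   # -> 4.0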
Prospective parents can be tested for being genetic carriers , or their children might be tested for actually being affected by a disease . [ 2 ] DNA samples for prenatal testing can be obtained by amniocentesis , chorionic villus sampling , or even by the analysis of rare fetal cells circulating in the mother's bloodstream. PCR analysis is also essential to preimplantation genetic diagnosis , where individual cells of a developing embryo are tested for mutations.
PCR allows for rapid and highly specific diagnosis of infectious diseases, including those caused by bacteria or viruses. [ 36 ] PCR also permits identification of non-cultivatable or slow-growing microorganisms such as mycobacteria , anaerobic bacteria , or viruses from tissue culture assays and animal models . The basis for PCR diagnostic applications in microbiology is the detection of infectious agents and the discrimination of non-pathogenic from pathogenic strains by virtue of specific genes. [ 36 ] [ 37 ]
Characterization and detection of infectious disease organisms have been revolutionized by PCR in the following ways:
The development of PCR-based genetic (or DNA ) fingerprinting protocols has seen widespread application in forensics :
PCR has been applied to many areas of research in molecular genetics:
PCR has a number of advantages. It is fairly simple to understand and to use, and produces results rapidly. The technique is highly sensitive with the potential to produce millions to billions of copies of a specific product for sequencing, cloning, and analysis. qRT-PCR shares the same advantages as the PCR, with an added advantage of quantification of the synthesized product. Therefore, it has its uses to analyze alterations of gene expression levels in tumors, microbes, or other disease states. [ 27 ]
PCR is a very powerful and practical research tool. The sequences of the previously unknown etiologic agents of many diseases are being determined with the help of PCR. The technique can help identify the sequence of previously unknown viruses related to those already known and thus give us a better understanding of the disease itself. If the procedure can be further simplified and sensitive non-radiometric detection systems can be developed, the PCR will assume a prominent place in the clinical laboratory for years to come. [ 16 ]
One major limitation of PCR is that prior information about the target sequence is necessary in order to generate the primers that will allow its selective amplification. [ 27 ] This means that, typically, PCR users must know the precise sequence(s) upstream of the target region on each of the two single-stranded templates in order to ensure that the DNA polymerase properly binds to the primer-template hybrids and subsequently generates the entire target region during DNA synthesis. [ 44 ] [ 45 ]
Like all enzymes, DNA polymerases are also prone to error, which in turn causes mutations in the PCR fragments that are generated. [ 46 ]
Another limitation of PCR is that even the smallest amount of contaminating DNA can be amplified, resulting in misleading or ambiguous results. To minimize the chance of contamination, investigators should reserve separate rooms for reagent preparation, the PCR, and analysis of product. Reagents should be dispensed into single-use aliquots . Pipettors with disposable plungers and extra-long pipette tips should be routinely used. [ 16 ] It is moreover recommended to ensure that the lab set-up follows a unidirectional workflow. No materials or reagents used in the PCR and analysis rooms should ever be taken into the PCR preparation room without thorough decontamination. [ 47 ]
Environmental samples that contain humic acids may inhibit PCR amplification and lead to inaccurate results. [ citation needed ]
The heat-resistant enzymes that are a key component in polymerase chain reaction were discovered in the 1960s as a product of a microbial life form that lived in the superheated waters of Yellowstone 's Mushroom Spring. [ 84 ]
A 1971 paper in the Journal of Molecular Biology by Kjell Kleppe and co-workers in the laboratory of H. Gobind Khorana first described a method of using an enzymatic assay to replicate a short DNA template with primers in vitro . [ 85 ] However, this early manifestation of the basic PCR principle did not receive much attention at the time and the invention of the polymerase chain reaction in 1983 is generally credited to Kary Mullis . [ 86 ] [ page needed ]
When Mullis developed the PCR in 1983, he was working in Emeryville , California for Cetus Corporation , one of the first biotechnology companies, where he was responsible for synthesizing short chains of DNA. Mullis has written that he conceived the idea for PCR while cruising along the Pacific Coast Highway one night in his car. [ 87 ] He was playing in his mind with a new way of analyzing changes (mutations) in DNA when he realized that he had instead invented a method of amplifying any DNA region through repeated cycles of duplication driven by DNA polymerase. In Scientific American , Mullis summarized the procedure: "Beginning with a single molecule of the genetic material DNA, the PCR can generate 100 billion similar molecules in an afternoon. The reaction is easy to execute. It requires no more than a test tube, a few simple reagents, and a source of heat." [ 88 ] DNA fingerprinting was first used for paternity testing in 1988. [ 89 ]
Mullis has credited his use of LSD as integral to his development of PCR: "Would I have invented PCR if I hadn't taken LSD? I seriously doubt it. I could sit on a DNA molecule and watch the polymers go by. I learnt that partly on psychedelic drugs." [ 90 ]
Mullis and biochemist Michael Smith , who had developed other essential ways of manipulating DNA, [ 1 ] were jointly awarded the Nobel Prize in Chemistry in 1993, seven years after Mullis and his colleagues at Cetus first put his proposal to practice. [ 91 ] Mullis's 1985 paper with R. K. Saiki and H. A. Erlich, "Enzymatic Amplification of β-globin Genomic Sequences and Restriction Site Analysis for Diagnosis of Sickle Cell Anemia"—the polymerase chain reaction invention (PCR)—was honored by a Citation for Chemical Breakthrough Award from the Division of History of Chemistry of the American Chemical Society in 2017. [ 92 ] [ 2 ]
At the core of the PCR method is the use of a suitable DNA polymerase able to withstand the high temperatures of >90 °C (194 °F) required for separation of the two DNA strands in the DNA double helix after each replication cycle. The DNA polymerases initially employed for in vitro experiments presaging PCR were unable to withstand these high temperatures. [ 2 ] So the early procedures for DNA replication were very inefficient and time-consuming, and required large amounts of DNA polymerase and continuous handling throughout the process.
The discovery in 1976 of Taq polymerase —a DNA polymerase purified from the thermophilic bacterium , Thermus aquaticus , which naturally lives in hot (50 to 80 °C (122 to 176 °F)) environments [ 14 ] such as hot springs—paved the way for dramatic improvements of the PCR method. The DNA polymerase isolated from T. aquaticus is stable at high temperatures remaining active even after DNA denaturation, [ 15 ] thus obviating the need to add new DNA polymerase after each cycle. [ 3 ] This allowed an automated thermocycler-based process for DNA amplification.
The PCR technique was patented by Kary Mullis and assigned to Cetus Corporation , where Mullis worked when he invented the technique in 1983. The Taq polymerase enzyme was also covered by patents. There have been several high-profile lawsuits related to the technique, including an unsuccessful lawsuit [ 93 ] brought by DuPont . The Swiss pharmaceutical company Hoffmann-La Roche purchased the rights to the patents in 1992. The last of the commercial PCR patents expired in 2017. [ 94 ]
A related patent battle over the Taq polymerase enzyme is still ongoing [ as of? ] in several jurisdictions around the world between Roche and Promega . The legal arguments have extended beyond the lives of the original PCR and Taq polymerase patents, which expired on 28 March 2005. [ 95 ] | https://en.wikipedia.org/wiki/Polymerase_chain_reaction |
PCR inhibitors are any factor which prevent the amplification of nucleic acids through the polymerase chain reaction (PCR). [ 1 ] PCR inhibition is the most common cause of amplification failure when sufficient copies of DNA are present. [ 2 ] PCR inhibitors usually affect PCR through interaction with DNA or interference with the DNA polymerase . Inhibitors can escape removal during the DNA purification procedure by binding directly to single or double-stranded DNA. [ 3 ] Alternatively, by reducing the availability of cofactors (such as Mg 2+ ) or otherwise interfering with their interaction with the DNA polymerase, PCR is inhibited. [ 3 ]
In a multiplex PCR reaction, it is possible for the different sequences to suffer from different inhibition effects to different extents, leading to disparity in their relative amplifications. [ 3 ]
Inhibitors may be present in the original sample, such as blood, fabrics, tissues and soil, but may also be added as a result of the sample processing and DNA extraction techniques used. [ 3 ] Excess salts including KCl and NaCl, ionic detergents such as sodium deoxycholate , sarkosyl and SDS , ethanol, isopropanol and phenol, among others, all contribute via various inhibitory mechanisms to the reduction of PCR efficiency. [ 3 ]
In order to try to assess the extent of inhibition that occurs in a reaction, a control can be performed by adding a known amount of a template to the investigated reaction mixture (based on the sample under analysis). By comparing the amplification of this template in the mixture to the amplification observed in a separate experiment in which the same template is used in the absence of inhibitors, the extent of inhibition in the investigated reaction mixture can be inferred. [ 4 ] [ 3 ] Of course, if any part of the inhibition occurring in the sample-derived reaction mixture is sequence-specific, then this method will yield an underestimate of the inhibition as it applies to the investigated sequence(s).
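When quantitative (real-time) PCR is used for such a control, the delay in the control template's quantification cycle can be converted into an apparent fold-inhibition, as in the illustrative sketch below; the assumption of a fixed per-cycle efficiency and the example Cq values are hypothetical, not taken from this article.

def apparent_inhibition(cq_control_in_sample, cq_control_clean, efficiency=1.0):
    """Apparent fold-inhibition of a spiked control template, inferred from how
    many cycles later it crosses threshold in the sample-derived reaction than in
    a clean reaction. Assumes equal amounts of control template in both reactions
    and a constant per-cycle efficiency (1.0 = perfect doubling)."""
    delta_cq = cq_control_in_sample - cq_control_clean
    return (1.0 + efficiency) ** delta_cq

# The spiked control comes up three cycles later in the sample-derived reaction:
print(f"~{apparent_inhibition(27.0, 24.0):.0f}-fold apparent inhibition")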
The method of sample acquisition can be refined to avoid unnecessary collection of inhibitors. For example, in forensics, swab-transfer of blood on fabric or saliva on food, may prevent or reduce contamination with inhibitors present in the fabric or food. [ 3 ]
Techniques exist and kits are commercially available to enable extraction of DNA to the exclusion of some inhibitors. [ 3 ]
As well as methods for the removal of inhibitors from samples before PCR, some DNA polymerases offer varying resistance to different inhibitors and increasing the concentration of the chosen DNA polymerase also confers some resistance to polymerase-targeted inhibitors. [ 3 ]
For PCR based on blood samples, the addition of bovine serum albumin reduces the effect of some inhibitors on PCR. [ 3 ] | https://en.wikipedia.org/wiki/Polymerase_chain_reaction_inhibitors |
The polymerase chain reaction (PCR) is a commonly used molecular biology tool for amplifying DNA, and various techniques for PCR optimization have been developed by molecular biologists to improve PCR performance and minimize failure.
The PCR method is extremely sensitive, requiring only a few DNA molecules in a single reaction for amplification across several orders of magnitude. Therefore, adequate measures to avoid contamination from any DNA present in the lab environment ( bacteria , viruses , or human sources) are required. Because products from previous PCR amplifications are a common source of contamination, many molecular biology labs have implemented procedures that involve dividing the lab into separate areas. [ 1 ] One lab area is dedicated to preparation and handling of pre-PCR reagents and the setup of the PCR reaction, and another area to post-PCR processing, such as gel electrophoresis or PCR product purification. For the setup of PCR reactions, many standard operating procedures involve using pipettes with filter tips and wearing fresh laboratory gloves , and in some cases a laminar flow cabinet with UV lamp as a work station (to destroy any extraneous DNA). PCR is routinely assessed against a negative control reaction that is set up identically to the experimental PCR, but without template DNA, and performed alongside the experimental PCR.
Secondary structures in the DNA can result in folding or knotting of DNA template or primers, leading to decreased product yield or failure of the reaction. Hairpins , which consist of internal folds caused by base-pairing between nucleotides in inverted repeats within single-stranded DNA, are common secondary structures and may result in failed PCRs.
Typically, primer design that includes a check for potential secondary structures in the primers, or addition of DMSO or glycerol to the PCR to minimize secondary structures in the DNA template, [ 2 ] are used in the optimization of PCRs that have a history of failure due to suspected DNA hairpins.
Taq polymerase lacks a 3′ to 5′ exonuclease activity . Thus, Taq has no proofreading activity , which would consist of excision of any newly misincorporated nucleotide base from the nascent (i.e., extending) DNA strand that does not match its opposite base in the complementary DNA strand. The lack of 3′ to 5′ proofreading in the Taq enzyme results in a high error rate (mutations per nucleotide per cycle) of approximately 1 in 10,000 bases, which affects the fidelity of the PCR, especially if errors occur early in the PCR with low amounts of starting material, causing accumulation of a large proportion of amplified DNA with an incorrect sequence in the final product. [ 3 ]
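A rough sense of how this error rate propagates can be obtained by treating every copied base in every cycle as an independent opportunity for misincorporation, as in the sketch below; this is a simplified upper-bound estimate, not an exact model of how errors are inherited across cycles.

def fraction_with_errors(error_rate, amplicon_length, cycles):
    """Rough upper-bound estimate of the fraction of product molecules carrying at
    least one misincorporation, treating every base in every cycle as an
    independent Bernoulli trial (this ignores when in the run an error arises)."""
    p_error_free = (1.0 - error_rate) ** (amplicon_length * cycles)
    return 1.0 - p_error_free

# Taq-like error rate of 1e-4 per base per cycle, 500 bp amplicon, 30 cycles:
print(f"{fraction_with_errors(1e-4, 500, 30):.0%}")   # roughly three molecules in four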
Several "high-fidelity" thermostable DNA polymerases , having engineered 3′ to 5′ exonuclease activity, have become available that permit more accurate amplification for use in PCRs for sequencing or cloning of products. Examples of polymerases with 3′ to 5′ exonuclease activity include: KOD DNA polymerase, a recombinant form of Thermococcus kodakaraensis KOD1; Vent, which is extracted from Thermococcus litoralis ; Pfu DNA polymerase , which is extracted from Pyrococcus furiosus ; Pwo, which is extracted from Pyrococcus woesii ; [ 4 ] Q5 polymerase, with 280x higher fidelity amplification compared with Taq . [ 5 ]
Magnesium is required as a co-factor for thermostable DNA polymerase. Taq polymerase is a magnesium-dependent enzyme and determining the optimum concentration to use is critical to the success of the PCR reaction. [ 6 ] Some of the components of the reaction mixture such as template concentration, dNTPs and the presence of chelating agents ( EDTA ) or proteins can reduce the amount of free magnesium present thus reducing the activity of the enzyme. [ 7 ] Primers which bind to incorrect template sites are stabilized in the presence of excessive magnesium concentrations and so results in decreased specificity of the reaction. Excessive magnesium concentrations also stabilize double stranded DNA and prevent complete denaturation of the DNA during PCR reducing the product yield. [ 6 ] [ 7 ] Inadequate thawing of MgCl 2 may result in the formation of concentration gradients within the magnesium chloride solution supplied with the DNA polymerase and also contributes to many failed experiments . [ 7 ]
PCR works readily with a DNA template of up to two to three thousand base pairs in length. However, above this size, product yields often decrease, as with increasing length stochastic effects such as premature termination by the polymerase begin to affect the efficiency of the PCR. It is possible to amplify larger pieces of up to 50,000 base pairs with a slower heating cycle and special polymerases. These are polymerases fused to a processivity-enhancing DNA-binding protein, enhancing adherence of the polymerase to the DNA. [ 8 ] [ 9 ]
Other valuable properties of the chimeric polymerases TopoTaq and PfuC2 include enhanced thermostability, specificity and resistance to contaminants and inhibitors . [ 10 ] [ 11 ] They were engineered using the unique helix-hairpin-helix (HhH) DNA binding domains of topoisomerase V [ 12 ] from hyperthermophile Methanopyrus kandleri . Chimeric polymerases overcome many limitations of native enzymes and are used in direct PCR amplification from cell cultures and even food samples , thus by-passing laborious DNA isolation steps. A robust strand-displacement activity of the hybrid TopoTaq polymerase helps solve PCR problems that can be caused by hairpins and G-loaded double helices. Helices with a high G-C content possess a higher melting temperature, often impairing PCR, depending on the conditions. [ 13 ]
Non-specific binding of primers frequently occurs and may occur for several reasons. These include repeat sequences in the DNA template, non-specific binding between primer and template, high or low G-C content in the template, or incomplete primer binding, leaving the 5' end of the primer unattached to the template. Non-specific binding of degenerate primers is also common. Manipulation of annealing temperature and magnesium ion concentration may be used to increase specificity. For example, lower concentrations of magnesium or other cations may prevent non-specific primer interactions, thus enabling successful PCR. A "hot-start" polymerase enzyme whose activity is blocked unless it is heated to high temperature (e.g., 90–98˚C) during the denaturation step of the first cycle, is commonly used to prevent non-specific priming during reaction preparation at lower temperatures. Chemically mediated hot-start PCRs require higher temperatures and longer incubation times for polymerase activation, compared with antibody or aptamer-based hot-start PCRs. [ citation needed ]
Other methods to increase specificity include Nested PCR and Touchdown PCR .
Computer simulations of theoretical PCR results ( Electronic PCR ) may be performed to assist in primer design. [ 14 ]
Touchdown polymerase chain reaction or touchdown style polymerase chain reaction is a method of polymerase chain reaction by which primers will avoid amplifying nonspecific sequences. The annealing temperature during a polymerase chain reaction determines the specificity of primer annealing. The melting point of the primer sets the upper limit on annealing temperature. At temperatures just below this point, only very specific base pairing between the primer and the template will occur. At lower temperatures, the primers bind less specifically. Nonspecific primer binding obscures polymerase chain reaction results, as the nonspecific sequences to which primers anneal in early steps of amplification will "swamp out" any specific sequences because of the exponential nature of polymerase amplification.
The earliest steps of a touchdown polymerase chain reaction cycle have high annealing temperatures. The annealing temperature is decreased in increments for every subsequent set of cycles (the number of individual cycles and increments of temperature decrease is chosen by the experimenter). The primer will anneal at the highest temperature it is able to tolerate, which is the temperature least permissive of nonspecific binding. Thus, the first sequence amplified is the one between the regions of greatest primer specificity; it is most likely that this is the sequence of interest. These fragments will be further amplified during subsequent rounds at lower temperatures, and will outcompete the nonspecific sequences to which the primers may bind at those lower temperatures. If the primer initially (during the higher-temperature phases) binds to the sequence of interest, subsequent rounds of polymerase chain reaction can be performed upon the product to further amplify those fragments.
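The cycling program itself is easy to express as a list of per-cycle annealing temperatures; the sketch below generates such a touchdown schedule, with the step size and the start and end temperatures chosen purely for illustration.

def touchdown_schedule(start_temp, final_temp, step=0.5, cycles_per_step=1,
                       final_cycles=20):
    """Build a list of per-cycle annealing temperatures for a touchdown PCR:
    start just below the primer melting temperature, drop by `step` degrees every
    `cycles_per_step` cycles, then hold `final_temp` for `final_cycles` cycles."""
    schedule = []
    temp = start_temp
    while temp > final_temp:
        schedule.extend([temp] * cycles_per_step)
        temp -= step
    schedule.extend([final_temp] * final_cycles)
    return schedule

# Ten touchdown cycles stepping from 65 C down towards 60 C, then 20 cycles at 60 C.
print(touchdown_schedule(65.0, 60.0))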
Annealing of the 3' end of one primer to itself or the second primer may cause primer extension, resulting in the formation of so-called primer dimers, visible as low-molecular-weight bands on PCR gels . [ 15 ] Primer dimer formation often competes with formation of the DNA fragment of interest, and may be avoided by using primers that are designed to lack complementarity —especially at the 3' ends—to themselves or to the other primer used in the reaction. If primer design is constrained by other factors and if primer-dimers do occur, methods to limit their formation may include optimisation of the MgCl 2 concentration or increasing the annealing temperature in the PCR. [ 15 ]
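A very crude version of the 3'-complementarity check that primer-design tools perform is sketched below; real tools evaluate many more alignments using thermodynamic parameters, and both primer sequences here are hypothetical.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def three_prime_dimer_risk(primer_a, primer_b, window=5):
    """Crude primer-dimer check: do the last `window` bases of primer A base-pair
    (antiparallel) with the last `window` bases of primer B? Real design tools
    score many more alignments and use thermodynamic parameters."""
    tail_a = primer_a.upper()[-window:]
    tail_b = primer_b.upper()[-window:]
    # Reverse primer B's tail so the two 3' ends are read antiparallel.
    return all(COMPLEMENT[a] == b for a, b in zip(tail_a, reversed(tail_b)))

# Hypothetical primer pair whose 3' ends happen to be mutually complementary:
print(three_prime_dimer_risk("ACGTGGATCCTTAGCGC", "TTCACCAGGTGCGCT"))   # True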
Deoxynucleotides (dNTPs) may bind Mg 2+ ions and thus affect the concentration of free magnesium ions in the reaction. In addition, excessive amounts of dNTPs can increase the error rate of DNA polymerase and even inhibit the reaction. [ 6 ] [ 7 ] An imbalance in the proportion of the four dNTPs can result in misincorporation into the newly formed DNA strand and contribute to a decrease in the fidelity of DNA polymerase. [ 16 ] | https://en.wikipedia.org/wiki/Polymerase_chain_reaction_optimization |
Polymerase cycling assembly (or PCA , also known as Assembly PCR ) is a method for the assembly of large DNA oligonucleotides from shorter fragments. The process uses the same technology as PCR , but takes advantage of DNA hybridization and annealing as well as DNA polymerase to amplify a complete sequence of DNA in a precise order based on the single stranded oligonucleotides used in the process. It thus allows for the production of synthetic genes and even entire synthetic genomes .
Much like how primers are designed such that there is a forward primer and a reverse primer capable of allowing DNA polymerase to fill the entire template sequence, PCA uses the same technology but with multiple oligonucleotides. While in PCR the customary size of oligonucleotides used is 18 base pairs, in PCA lengths of up to 50 are used to ensure uniqueness and correct hybridization.
Each oligonucleotide is designed to be either part of the top or bottom strand of the target sequence. As well as the basic requirement of having to be able to tile the entire target sequence, these oligonucleotides must also have the usual properties of similar melting temperatures, hairpin free, and not too GC rich to avoid the same complications as PCR.
During the polymerase cycles, the oligonucleotides anneal to complementary fragments and then are filled in by polymerase. Each cycle thus increases the length of various fragments randomly depending on which oligonucleotides find each other. It is critical that there is complementarity between all the fragments in some way or a final complete sequence will not be produced as polymerase requires a template to follow.
After this initial construction phase, additional primers encompassing both ends are added to perform a regular PCR reaction, amplifying the target sequence away from all the shorter incomplete fragments. A gel purification can then be used to identify and isolate the complete sequence.
A typical reaction consists of oligonucleotides ~50 base pairs long each overlapping by about 20 base pairs. The reaction with all the oligonucleotides is then carried out for ~30 cycles followed by an additional 23 cycles with the end primers. [ 1 ]
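A simplified sketch of how a target sequence might be tiled into alternating, overlapping top- and bottom-strand oligonucleotides for such an assembly is shown below. The oligo length, overlap, and target sequence are toy values, and a real design would also equalize melting temperatures and screen for hairpins, as described above.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def design_pca_oligos(target, oligo_length=50, overlap=20):
    """Tile a target sequence with alternating top- and bottom-strand
    oligonucleotides for polymerase cycling assembly. Each oligo is
    `oligo_length` bases long and overlaps each neighbour by `overlap` bases;
    bottom-strand oligos are returned as reverse complements."""
    step = oligo_length - overlap
    oligos = []
    for i, start in enumerate(range(0, len(target) - overlap, step)):
        piece = target[start:start + oligo_length]
        if i % 2:   # alternate strands so neighbouring fragments can anneal
            piece = "".join(COMPLEMENT[base] for base in reversed(piece))
        oligos.append(piece)
    return oligos

# Toy example with short oligos:
target = "ATGGCTAGCTAGGACCTGATCGGATCCAAGCTTGGCACTGGCCGTCGTTTTACA"
print(design_pca_oligos(target, oligo_length=20, overlap=10))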
A modification of this method, Gibson assembly , described by Gibson et al., allows for single-step isothermal assembly of DNA fragments up to several hundred kb . By using T5 exonuclease to 'chew back' complementary ends, an overlap of about 40 bp can be created. The reaction takes place at 50 °C, a temperature at which the T5 exonuclease is unstable. After a short time it is degraded, and the overlaps can anneal and be ligated. [ 2 ] The Cambridge University iGEM team made a video describing the process. [ 3 ] Ligation-independent cloning (LIC) is a newer variant of the method for assembling several DNA pieces together, requiring only an exonuclease enzyme for the reaction.
Polymerase stuttering is the process by which a polymerase transcribes a nucleotide several times without progressing further along the mRNA chain. It is often used to add poly(A) tails to mRNA chains or to cap them in less complex organisms such as viruses.
A polymerase may undergo stuttering as a probability-controlled event; it is not explicitly controlled by any mechanism in the transcription process. Generally, it is a result of many short repeated frameshifts on a slippery sequence of nucleotides on the mRNA strand. [ 1 ] However, the frameshift is restricted to one (in some cases two [ 2 ] ) nucleotides with a pseudoknot or choke points on both sides of the sequence.
A polymerase that exhibits this behavior is RNA-dependent RNA polymerase , present in many RNA viruses . Reverse transcriptase has also been observed to undergo this polymerase stuttering. [ 3 ] | https://en.wikipedia.org/wiki/Polymerase_stuttering |
A polymeric foam is a special foam , in liquid or solidified form, formed from polymers . [ 1 ]
Examples include: | https://en.wikipedia.org/wiki/Polymeric_foam |
Polymeric materials have widespread application due to their versatile characteristics, cost-effectiveness, and highly tailored production. The science of polymer synthesis allows for excellent control over the properties of a bulk polymer sample. However, surface interactions of polymer substrates are an essential area of study in biotechnology , nanotechnology , and in all forms of coating applications. In these cases, the surface characteristics of the polymer and material, and the resulting forces between them, largely determine its utility and reliability. In biomedical applications for example, the bodily response to foreign material, and thus biocompatibility , is governed by surface interactions. In addition, surface science is an integral part of the formulation, manufacturing, and application of coatings. [ 1 ]
A polymeric material can be functionalized by the addition of small moieties , oligomers , and even other polymers (grafting copolymers) onto the surface or interface.
Grafting, in the context of polymer chemistry , refers to the addition of polymer chains onto a surface. In the so-called 'grafting onto' mechanism, a polymer chain adsorbs onto a surface out of solution. In the more extensive 'grafting from' mechanism, a polymer chain is initiated and propagated at the surface. Because pre-polymerized chains used in the 'grafting onto' method have a thermodynamically favored conformation in solution (an equilibrium hydrodynamic volume), their adsorption density is self-limiting. The radius of gyration of the polymer therefore is the limiting factor in the number of polymer chains that can reach the surface and adhere . The 'grafting from' technique circumvents this phenomenon and allows for greater grafting densities.
The processes of grafting "onto", "from", and "through" are all different ways to alter the chemical reactivity of the surface they attach to. Grafting onto allows a preformed polymer, generally in a "mushroom regime", to adhere to the surface of either a droplet or bead in solution. Due to the larger volume of the coiled polymer and the steric hindrance this causes, the grafting density is lower for 'grafting onto' than for 'grafting from'. The surface of the bead is wetted by the polymer, and the interactions in solution cause the polymer to become more flexible. The 'extended conformation' of a polymer grafted, or polymerized, from the surface of the bead means that the monomer must be in the solution and therefore lyophilic. This results in a polymer that has favorable interactions with the solution, allowing the polymer to form more linearly. Grafting from therefore has a higher grafting density since there is greater access to chain ends.
Peptide synthesis can provide one example of a 'grafting from' synthetic process. In this process, an amino acid chain is grown by a series of condensation reaction from a polymer bead surface. This grafting technique allows for excellent control over the peptide composition as the bonded chain can be washed without desorption from the polymer.
Polymeric coatings are another area of applied grafting techniques. In the formulation of water-borne paint, latex particles are often surface modified to control particle dispersion and thus coating characteristics such as viscosity , film formation, and environmental stability ( UV exposure and temperature variations).
Plasma processing, corona treatment, and flame treatment can all be classified as surface oxidation mechanisms. These methods all involve cleavage of polymer chains in the material and the incorporation of carbonyl , and hydroxyl functional groups . [ 2 ] The incorporation of oxygen into the surface creates a higher surface energy allowing the substrate to be coated.
Corona treatment is a surface modification method using a low temperature corona discharge to increase the surface energy of a material, often polymers and natural fibers. Most commonly, a thin polymer sheet is rolled through an array of high-voltage electrodes, using the plasma created to functionalize the surface. The limited penetration depth of such treatment provides vastly improved adhesion while preserving bulk mechanical properties.
Commercially, corona treatment has been used widely for improved dye adhesion before printing text and images on plastic packaging materials. The hazardous nature of remnant ozone after corona treatment necessitates careful filtration and ventilation during processing, restricting its implementation to applications with strict catalytic filtration systems. This limitation prevents widespread use within open-line manufacturing processes.
Several factors influence the efficiency of the flame treatment, such as the air-to-gas ratio, thermal output, surface distance, and oxidation zone dwell time. When the process was first conceived, corona treatment immediately followed film extrusion, but the development of careful transportation techniques now allows treatment at an optimized location. Conversely, in-line corona treatments have been implemented into full-scale production lines such as those in the newspaper industry. These in-line solutions are developed to counteract the decrease in wetting characteristics caused by excessive solvent use. [ 3 ]
Plasma processing provides interfacial energies and injected monomer fragments larger than comparable processes. However, limited fluxes prevent high process rates. In addition, plasmas are thermodynamically unfavorable and therefore plasma-processed surfaces lack uniformity, consistency, and permanence. These obstacles with plasma processing preclude it from being a competitive surface modification method within industry.
The process begins with production of plasma via ionization either by deposition on monomer mixtures or gaseous carrier ions. The power required to produce the necessary plasma flux can be derived from the active volume mass/energy balance: [ 4 ]
∫ V o l I k i o n n e n 0 d V o l I = n e τ n V o l I {\displaystyle \textstyle \int \limits _{{V\!ol}_{I}}{k^{ion}}{n_{e}}{n_{0}}\,d{{V\!ol}_{I}}={\frac {n_{e}}{\tau _{n}}}{V\!ol_{I}}}
where
V o l I {\displaystyle {{V\!ol}_{I}}} is the active volume
k i o n {\displaystyle k^{ion}} is the ionization rate
n 0 {\displaystyle n_{0}} is the neutral density
n e {\displaystyle n_{e}} is the electron density
τ n {\displaystyle \tau _{n}} is the ion loss by diffusion, convection, attachment, and recombination
Dissipation is generally initiated via direct current (DC), radio frequency (RF), or microwave power. Gas ionization efficiency can decrease the power efficiency more than tenfold depending on the carrier plasma and substrate.
Flame treatment is a controlled, rapid, cost-effective method of increasing surface energy and wettability of polyolefins and metallic components. This high-temperature plasma treatment uses ionized gaseous oxygen via jet flames across a surface to add polar functional groups while melting the surface molecules, locking them into place upon cooling.
Thermoplastic polyethylene and polypropylene treated with brief oxygen plasma exposure have seen contact angles as low as 22°, and the resulting surface modification can last years with proper packaging. Flame plasma treatment has become increasingly popular with intravascular devices such as balloon catheters due to the precision and cost-effectiveness demanded in the medical industry. [ 5 ]
Grafting copolymers to a surface can be envisioned as fixing polymeric chains to a structurally different polymer substrate with the intention of changing surface functionality while preserving bulk mechanical properties. The nature and degree of surface functionalization is determined by both the choice of copolymer and the type and extent of grafting.
The modification of inert surfaces of polyolefins, polyesters, and polyamides by grafting functional vinyl monomers has been used to increase hydrophobicity, dye absorption, and polymer adhesion. This photografting method is generally used during continuous filament or thin film processing. On a bulk commercial scale, the grafting technique is referred to as photoinitiated lamination, where desired surfaces are joined by grafting a polymeric adhesion network between the two films. The low adhesion and absorption of polyolefins, polyesters, and polyamides are improved by UV-irradiation of an initiator and monomer transferred through the vapor phase to the substrate. Functionalization of porous surfaces has seen great success with high-temperature photografting techniques.
In microfluidic chips, functionalizing channels allows directed flow to preserve lamellar behavior between and within junctions. [ 6 ] The adverse turbulent flow in microfluidic applications can compound component failure modes due to the increased level of channel interdependency and network complexity. In addition, the imprinted design of microfluidic channels can be reproduced for photografting the corresponding channels with a high degree of accuracy. [ 7 ]
In industrial corona and plasma processes, cost-efficient and rapid analytical methods are required for confirming adequate surface functionality on a given substrate. Measuring the surface energy is an indirect method for confirming the presence of surface functional groups without the need for microscopy or spectroscopy, often expensive and demanding tools. Contact angle measurement (goniometry) can be used to find the surface energy of the treated and non-treated surface. Young's relation can be used to find surface energy assuming the simplification of experimental conditions to a three phase equilibrium (i.e. liquid drop applied to flat rigid solid surface in a controlled atmosphere), yielding
γ S G = γ S L + γ L G cos θ c {\displaystyle {\boldsymbol {\gamma }}_{SG}={\boldsymbol {\gamma }}_{SL}+{\boldsymbol {\gamma }}_{LG}~{\cos {{\boldsymbol {\theta }}_{c}}}}
where
γ i j {\displaystyle {\boldsymbol {\gamma }}_{ij}} denotes the surface energy of the solid–liquid, liquid–gas, or solid–gas interface
θ c {\displaystyle {{\boldsymbol {\theta }}_{c}}} is the measured contact angle
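As a small numerical illustration of Young's relation, the following Python sketch evaluates the solid–gas surface energy from a measured contact angle. All of the numbers are hypothetical illustration values, not measurements from any treatment described here.

from math import cos, radians

# Young's relation, gamma_SG = gamma_SL + gamma_LG * cos(theta_c),
# evaluated for hypothetical illustration values (mJ/m^2 and degrees).
gamma_LG = 72.8          # liquid-gas surface tension (a water-like probe liquid)
gamma_SL = 30.0          # assumed solid-liquid interfacial energy
theta_c = 65.0           # measured contact angle in degrees

gamma_SG = gamma_SL + gamma_LG * cos(radians(theta_c))
print(f"estimated solid-gas surface energy: {gamma_SG:.1f} mJ/m^2")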
A series of solutions with known surface tension (e.g., Dyne solutions) can be used to estimate the surface energy of the polymer substrate qualitatively by observing the wettability of each. These methods are applicable to macroscopic surface oxidation, as in industrial processing.
In the case of oxidizing treatments, spectra taken from treated surfaces will indicate the presence of functionalities in carbonyl and hydroxyl regions according to the Infrared spectroscopy correlation table .
X-ray photoelectron spectroscopy (XPS) and Energy-dispersive X-ray spectroscopy (EDS/EDX) are composition characterization techniques that use x-ray excitation of electrons to discrete energy levels to quantify chemical composition. These techniques provide characterization at surface depths of 1–10 nanometers, approximately the range of oxidation in plasma and corona treatments. In addition, these processes offer the benefit of characterizing microscopic variations in surface composition.
In the context of plasma processed polymer surfaces, oxidized surfaces will obviously show a greater oxygen content. Elemental analysis allows for quantitative data to be obtained and used in the analysis of process efficiency.
Atomic force microscopy (AFM), a type of scanning force microscopy , was developed for mapping three-dimensional topographical variations in atomic surfaces with high resolution (on the order of fractions of a nanometer). AFM was developed to overcome the material conduction limitations of electron transmission and scanning microscopy methods (SEM & STM). Invented by Binnig, Quate, and Gerber in 1985, atomic force microscopy uses laser beam deflection to measure the variations in atomic surfaces. The method does not rely on the variation in electron conduction through the material, as the scanning tunneling microscope (STM) does, and therefore allows microscopy on nearly all materials, including polymers.
The application of AFM to polymeric surfaces is especially favorable because the general lack of crystallinity in polymers leads to large variations in surface topography. Surface functionalization techniques such as grafting, corona treatment, and plasma processing increase the surface roughness greatly (compared to the unprocessed substrate surface) and are therefore accurately measured by AFM. [ 8 ]
Biomaterial surfaces are often modified using light-activated mechanisms (such as photografting ) to functionalize the surface without compromising bulk mechanical properties.
The modification of surfaces to keep polymers biologically inert has found wide uses in biomedical applications such as cardiovascular stents and in many skeletal prostheses. Functionalizing polymer surfaces can inhibit protein adsorption, which may otherwise initiate cellular interrogation upon the implant, a predominant failure mode of medical prostheses.
Narrow biocompatibility requirements within the medical industry have over the past ten years driven surface modification techniques to reach an unprecedented level of accuracy.
In water-borne coatings, an aqueous polymer dispersion creates a film on the substrate once the solvent has evaporated. Surface functionalization of the polymer particles is a key component of a coating formulation, allowing control over such properties as dispersion, film formation temperature, and the coating rheology. Dispersing aids often involve steric or electrostatic repulsion of the polymer particles, providing colloidal stability. The dispersing aids adsorb (as in a grafting-onto scheme) onto latex particles, giving them functionality. The association of other additives, such as thickeners, with the adsorbed polymer material gives rise to complex rheological behavior and excellent control over a coating's flow properties. [ 12 ] | https://en.wikipedia.org/wiki/Polymeric_surface |
Polymerization-induced phase separation ( PIPS ) is the occurrence of phase separation in a multicomponent mixture induced by the polymerization of one or more components. [ 1 ] [ 2 ] The increase in molecular weight of the reactive component renders one or more components to be mutually immiscible in one another, resulting in spontaneous phase segregation.
Polymerization-induced phase separation can be initiated either through thermally induced polymerization or photopolymerization. [ 3 ] [ 4 ] [ 5 ] The process generally occurs through spinodal decomposition , commonly resulting in the formation of co-continuous phases. [ 6 ]
The morphology of the final phase-separated structures is generally random owing to the stochastic nature of the onset and process of phase separation. Several approaches have been investigated to control morphology. Tran-Cong-Miyata and co-workers used periodic irradiation in photoreactive polymer blends to control morphology, specifically the width of the resultant spinodal modes in the phase-separated morphology. [ 7 ] Li and co-workers employed holography, in a process of holographic polymerization, in order to direct the phase-separated structure to have the same patterns as the holographic field. [ 8 ] Recently, Hosein and co-workers demonstrated that the nonlinear optical pattern formations that occur in photopolymer systems may be used to direct the organization of blends to have the same morphology as the light pattern. [ 9 ]
The process is commonly used in control of the morphology of polymer blends, for applications in thermoelectrics, solid-state lighting, polymer electrolytes, composites, membrane formation, and surface pattern formations. [ 10 ] | https://en.wikipedia.org/wiki/Polymerization-induced_phase_separation |
Polymers of intrinsic microporosity ( PIMs ) are a unique class of microporous material developed by research efforts led by Neil McKeown , Peter Budd , et al. [ 1 ] PIMs contain a continuous network of interconnected intermolecular voids less than 2 nm in width. Classified as porous organic polymers, PIMs generate porosity from their rigid and contorted macromolecular chains that do not pack efficiently in the solid state. [ 2 ] PIMs are composed of fused ring sequences interrupted by spiro-centers or other sites of contortion along the backbone. Due to their fused ring structure, PIMs cannot rotate freely along the polymer backbone, ensuring that the conformation of the macromolecular components cannot rearrange and that the highly contorted shape is fixed during synthesis.
PIMs require that the non-network macromolecular structure is rigid and non-linear. In order to maintain permanent microporosity, rotation along the polymer chain must be prohibited through the use of a fused ring structure, or strongly hindered by steric inhibition, to avoid conformational changes that would allow the polymer to pack efficiently. This results in the use of a conformationally locked monomer and a polymerization reaction that provides a linkage about which rotation is prohibited. [ 3 ] Three main types of polymerization reactions have been successfully used to prepare PIMs of sufficient mass to form self-standing films. These involve a polymerization reaction based on a double aromatic nucleophilic substitution mechanism to form the dibenzodioxin linkage, a polymerization using Troger's base formation, and the formation of amide linkages between monomeric units. [ 3 ] It is also possible to modify the structure of PIMs by post-synthesis reactions. [ 4 ] However, this can result in a reduction in intrinsic microporosity due to the additional interchain cohesive interactions.
Due to the presence of intrinsic microporosity, these polymers have high free volume, a high internal surface area, and a high affinity for gases. A novel property of PIMs is that they do not possess a network structure and are often freely soluble in organic solvents. [ 5 ] This allows PIMs to be precipitated or cast from solution to give microporous powders or self-standing films that are useful for a variety of applications. For example, the first commercial application of PIMs was in a sensor developed by 3M . [ 6 ] Additionally, due to PIMs' affinity for small gases and ability to form self-standing films, they are actively being investigated as membrane materials and adsorbents for industrial separation processes such as gas separation and carbon dioxide capture . PIM membranes are also heavily investigated due to their contribution to the revision of the 2008 upper bounds of performance by Robeson, [ 7 ] an important benchmark in membrane gas separation stating that permeability must be sacrificed for selectivity. Specific active areas of PIM membrane research include enhancing permeability, decreasing aging, and tailoring selectivity. PIMs are also used to create mixed matrix membranes with a variety of materials such as inorganic materials, metal-organic frameworks, and carbons. | https://en.wikipedia.org/wiki/Polymers_of_intrinsic_microporosity |
In biotechnology , polymersomes [ 1 ] are a class of artificial vesicles , tiny hollow spheres that enclose a solution. Polymersomes are made using amphiphilic synthetic block copolymers to form the vesicle membrane, and have radii ranging from 50 nm to 5 μm or more. [ 2 ] Most reported polymersomes contain an aqueous solution in their core and are useful for encapsulating and protecting sensitive molecules, such as drugs, enzymes, other proteins and peptides, and DNA and RNA fragments. The polymersome membrane provides a physical barrier that isolates the encapsulated material from external materials, such as those found in biological systems.
Synthosomes are polymersomes engineered to contain channels ( transmembrane proteins ) that allow certain chemicals to pass through the membrane, into or out of the vesicle. This allows for the collection or enzymatic modification of these substances. [ 3 ]
The term "polymersome" for vesicles made from block copolymers was coined in 1999. [ 1 ] Polymersomes are similar to liposomes , which are vesicles formed from naturally occurring lipids . While having many of the properties of natural liposomes, polymersomes exhibit increased stability and reduced permeability. Furthermore, the use of synthetic polymers enables designers to manipulate the characteristics of the membrane and thus control permeability, release rates, stability and other properties of the polymersome.
Several different morphologies of the block copolymer used to create the polymersome have been used. The most frequently used are the linear diblock or triblock copolymers. In these cases, the block copolymer has one block that is hydrophobic ; the other block or blocks are hydrophilic . Other morphologies used include comb copolymers, [ 4 ] [ 5 ] where the backbone block is hydrophilic and the comb branches are hydrophobic, and dendronized block copolymers , [ 6 ] where the dendrimer portion is hydrophilic.
In the case of diblock, comb and dendronized copolymers the polymersome membrane has the same bilayer morphology of a liposome, with the hydrophobic blocks of the two layers facing each other in the interior of the membrane. In the case of triblock copolymers the membrane is a monolayer that mimics a bilayer, the central block filling the role of the two facing hydrophobic blocks of a bilayer. [ 7 ]
In general, they can be prepared by the methods used in the preparation of liposomes: film rehydration, the direct injection method, or the dissolution method.
Polymersomes that contain active enzymes and that provide a way to selectively transport substrates for conversion by those enzymes have been described as nanoreactors. [ 8 ]
Polymersomes have been used to create controlled release drug delivery systems. [ 9 ] Similar to coating liposomes with polyethylene glycol , polymersomes can be made invisible to the immune system if the hydrophilic block consists of polyethylene glycol. [ 10 ] Thus, polymersomes are useful carriers for targeted medication.
For in vivo applications, polymersomes are de facto limited to the use of FDA -approved polymers, as most pharmaceutical firms are unlikely to develop novel polymers due to cost issues. Fortunately, there are a number of such polymers available, with varying properties, including:
Hydrophilic blocks
Hydrophobic blocks
If enough of the block copolymer molecules that make up a polymersome are cross-linked , the polymersome can be made into a transportable powder. [ 2 ]
Polymersomes can be used to make an artificial cell if hemoglobin and other components are added. [ 13 ] [ 14 ] The first artificial cell was made by Thomas Chang . [ 15 ] | https://en.wikipedia.org/wiki/Polymersome |
Polymethylhydrosiloxane ( PMHS ) is a polymer with the general structure [−CH 3 (H)Si−O−] . It is used in organic chemistry as a mild and stable reducing agent easily transferring hydrides to metal centers and a number of other reducible functional groups. [ 1 ] [ 2 ] A variety of related materials are available under the following CAS registry numbers 9004-73-3, 16066-09-4, 63148-57-2, 178873-19-3. These include the tetramer ( (MeSiHO) 4 ), copolymers of dimethylsiloxane and methylhydrosiloxane, and trimethylsilyl terminated materials.
This material is prepared by the hydrolysis of monomethyldichlorosilane CAS#: 75-54-7:
The related polymer polydimethylsiloxane (PDMS) is made similarly, but lacking Si−H bonds, it exhibits no reducing properties. Dimethyldichlorosilane CAS#: 75-78-5 is then used instead of monomethyldichlorosilane CAS#: 75-54-7.
Illustrative of its use, PMHS is used for in situ conversion of tributyltin oxide to tributyltin hydride : [ 3 ]
| https://en.wikipedia.org/wiki/Polymethylhydrosiloxane |
Many compound materials exhibit polymorphism , that is they can exist in different structures called polymorphs. Silicon carbide (SiC) is unique in this regard as more than 250 polymorphs of silicon carbide had been identified by 2006, [ 1 ] with some of them having a lattice constant as long as 301.5 nm, about one thousand times the usual SiC lattice spacings. [ 2 ]
The polymorphs of SiC include various amorphous phases observed in thin films and fibers, [ 3 ] as well as a large family of similar crystalline structures called polytypes . They are variations of the same chemical compound that are identical in two dimensions and differ in the third. Thus, they can be viewed as layers stacked in a certain sequence. The atoms of those layers can be arranged in three configurations, A, B or C, to achieve closest packing. The stacking sequence of those configurations defines the crystal structure, where the unit cell is the shortest periodically repeated sequence of the stacking sequence. This description is not unique to SiC, but also applies to other binary tetrahedral materials, such as zinc oxide and cadmium sulfide .
A shorthand has been developed to catalogue the vast number of possible polytype crystal structures: Let us define three SiC bilayer structures (that is, 3 atoms with two bonds in between) and label them as A, B and C. Elements A and B do not change the orientation of the bilayer (except for a possible rotation by 120°, which does not change the lattice and is ignored hereafter); the only difference between A and B is a shift of the lattice. Element C, however, twists the lattice by 60°.
Using those A, B, C elements, we can construct any SiC polytype. Examples are the hexagonal polytypes 2H, 4H and 6H, as they would be written in the Ramsdell notation, where the number indicates the number of layers in the repeating unit and the letter indicates the Bravais lattice. [ 4 ] The 2H-SiC structure is equivalent to that of wurtzite and is composed of only elements A and B stacked as ABABAB. The 4H-SiC unit cell is two times longer, and the second half is twisted compared to 2H-SiC, resulting in ABCB stacking. The 6H-SiC cell is three times longer than that of 2H, and the stacking sequence is ABCACB. The cubic 3C-SiC, also called β-SiC, has ABC stacking. [ 5 ]
The different polytypes have widely ranging physical properties. 3C-SiC has the highest electron mobility and saturation velocity because of reduced phonon scattering resulting from the higher symmetry . The band gaps differ widely among the polytypes ranging from 2.3 eV for 3C-SiC to 3 eV in 6H SiC to 3.3 eV for 2H-SiC. In general, the greater the wurtzite component, the larger the band gap. Among the SiC polytypes, 6H is most easily prepared and best studied, while the 3C and 4H polytypes are attracting more attention for their superior electronic properties. The polytypism of SiC makes it nontrivial to grow single-phase material, but it also offers some potential advantages - if crystal growth methods can be developed sufficiently then heterojunctions of different SiC polytypes can be prepared and applied in electronic devices. [ 5 ]
All symbols in the SiC structures have a specific meaning: The number 3 in 3C-SiC refers to the three-bilayer periodicity of the stacking (ABC) and the letter C denotes the cubic symmetry of the crystal. 3C-SiC is the only possible cubic polytype. The wurtzite ABAB... stacking sequence is denoted as 2H-SiC, indicating its two-bilayer stacking periodicity and hexagonal symmetry . This periodicity doubles and triples in 4H- and 6H-SiC polytypes. The family of rhombohedral polytypes is labeled R, for example, 15R-SiC. | https://en.wikipedia.org/wiki/Polymorphs_of_silicon_carbide |
In mathematics , a polynomial Diophantine equation is an indeterminate polynomial equation for which one seeks solutions restricted to be polynomials in the indeterminate. A Diophantine equation , in general, is one where the solutions are restricted to some algebraic system, typically integers. In another usage, Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria , who made initial studies of integer Diophantine equations.
An important type of polynomial Diophantine equation takes the form s a + t b = c , {\displaystyle sa+tb=c,}
where a , b , and c are known polynomials, and we wish to solve for s and t .
A simple example (and a solution) is:
A necessary and sufficient condition for a polynomial Diophantine equation to have a solution is for c to be a multiple of the GCD of a and b . In the example above, the GCD of a and b was 1, so solutions would exist for any value of c.
Solutions to polynomial Diophantine equations are not unique. Any multiple of a b {\displaystyle ab} (say r a b {\displaystyle rab} ) can be used to transform s {\displaystyle s} and t {\displaystyle t} into another solution s ′ = s + r b {\displaystyle s'=s+rb} , t ′ = t − r a {\displaystyle t'=t-ra} .
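A particular solution can be computed with the extended Euclidean algorithm for polynomials, described below. The following is a minimal SymPy sketch of that approach; the polynomials a, b and c are hypothetical choices made only for illustration, and the gcdex and div helpers are assumed to behave as in SymPy's polynomial module, with gcdex returning Bezout coefficients s, t and the monic GCD h such that s*a + t*b = h.

from sympy import symbols, gcdex, div, expand

x = symbols('x')
a = x**2 + 1          # known polynomials (illustrative choices)
b = x**3 + 1
c = x                 # right-hand side we want to reach

# Extended Euclidean algorithm: s0*a + t0*b = g = gcd(a, b)
s0, t0, g = gcdex(a, b, x)

# A polynomial solution exists iff g divides c; scale the Bezout pair by c/g.
q, r = div(c, g, x)
assert r == 0, "no solution: c is not a multiple of gcd(a, b)"
s, t = expand(s0 * q), expand(t0 * q)

print(s, t)                        # one particular solution
print(expand(s*a + t*b - c))       # 0, confirming s*a + t*b = c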
Some polynomial Diophantine equations can be solved using the extended Euclidean algorithm , which works as well with polynomials as it does with integers. | https://en.wikipedia.org/wiki/Polynomial_Diophantine_equation |
In signal processing, the polynomial Wigner–Ville distribution is a quasiprobability distribution that generalizes the Wigner distribution function . It was proposed by Boualem Boashash and Peter O'Shea in 1994.
Many signals in nature and in engineering applications can be modeled as z ( t ) = e j 2 π ϕ ( t ) {\displaystyle z(t)=e^{j2\pi \phi (t)}} , where ϕ ( t ) {\displaystyle \phi (t)} is a polynomial phase and j = − 1 {\displaystyle j={\sqrt {-1}}} .
For example, it is important to detect signals of an arbitrary high-order polynomial phase. However, the conventional Wigner–Ville distribution has the limitation of being based on second-order statistics. Hence, the polynomial Wigner–Ville distribution was proposed as a generalized form of the conventional Wigner–Ville distribution, which is able to deal with signals with nonlinear phase.
The polynomial Wigner–Ville distribution W z g ( t , f ) {\displaystyle W_{z}^{g}(t,f)} is defined as
where F τ → f {\displaystyle {\mathcal {F}}_{\tau \to f}} denotes the Fourier transform with respect to τ {\displaystyle \tau } , and K z g ( t , τ ) {\displaystyle K_{z}^{g}(t,\tau )} is the polynomial kernel given by
where z ( t ) {\displaystyle z(t)} is the input signal and q {\displaystyle q} is an even number.
The above expression for the kernel may be rewritten in symmetric form as
The discrete-time version of the polynomial Wigner–Ville distribution is given by the discrete Fourier transform of
where n = t f s , m = τ f s , {\displaystyle n=t{f}_{s},m={\tau }{f}_{s},} and f s {\displaystyle f_{s}} is the sampling frequency.
The conventional Wigner–Ville distribution is a special case of the polynomial Wigner–Ville distribution with q = 2 , b − 1 = − 1 , b 1 = 1 , b 0 = 0 , c − 1 = − 1 2 , c 0 = 0 , c 1 = 1 2 {\displaystyle q=2,b_{-1}=-1,b_{1}=1,b_{0}=0,c_{-1}=-{\frac {1}{2}},c_{0}=0,c_{1}={\frac {1}{2}}}
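As an illustration of this q = 2 special case, the following NumPy sketch computes a discrete Wigner–Ville distribution by forming the bilinear kernel z[n+m] z*[n−m] and Fourier-transforming it along the lag axis. The function name, the zero-padding of lags that run off the record, and the frequency normalisation are implementation choices for the sketch, not conventions fixed by the definition above.

import numpy as np

def wigner_ville(z, n_freq=None):
    """Discrete Wigner-Ville distribution of a (preferably analytic) signal z.

    Builds the q = 2 kernel K[n, m] = z[n+m] * conj(z[n-m]) and takes an FFT
    over the lag index m for every time index n.
    """
    z = np.asarray(z, dtype=complex)
    N = len(z)
    n_freq = n_freq or N
    W = np.empty((N, n_freq))
    for n in range(N):
        max_lag = min(n, N - 1 - n, n_freq // 2 - 1)
        kernel = np.zeros(n_freq, dtype=complex)
        for m in range(-max_lag, max_lag + 1):
            kernel[m % n_freq] = z[n + m] * np.conj(z[n - m])
        W[n] = np.fft.fft(kernel).real   # the kernel is Hermitian in m, so this is real
    return W

# Linear FM (chirp) test signal: the distribution concentrates along its
# instantaneous frequency (parameters are arbitrary illustration values).
t = np.arange(256)
z = np.exp(2j * np.pi * (0.05 * t + 0.0005 * t ** 2))
W = wigner_ville(z)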
One of the simplest generalizations of the usual Wigner–Ville distribution kernel can be achieved by taking q = 4 {\displaystyle q=4} . The set of coefficients b k {\displaystyle b_{k}} and c k {\displaystyle c_{k}} must be found to completely specify the new kernel. For example, we set
The resulting discrete-time kernel is then given by
Given a signal z ( t ) = e j 2 π ϕ ( t ) {\displaystyle z(t)=e^{j2\pi \phi (t)}} , where ϕ ( t ) = ∑ i = 0 p a i t i {\displaystyle \phi (t)=\sum _{i=0}^{p}a_{i}t^{i}} is a polynomial function, its instantaneous frequency (IF) is ϕ ′ ( t ) = ∑ i = 1 p i a i t i − 1 {\displaystyle \phi '(t)=\sum _{i=1}^{p}ia_{i}t^{i-1}} .
For a practical polynomial kernel K z g ( t , τ ) {\displaystyle K_{z}^{g}(t,\tau )} , the set of coefficients q , b k {\displaystyle q,b_{k}} and c k {\displaystyle c_{k}} should be chosen properly such that
Nonlinear FM signals are common both in nature and in engineering applications. For example, the sonar systems of some bats use hyperbolic FM and quadratic FM signals for echo location. In radar, certain pulse-compression schemes employ linear FM and quadratic FM signals. The Wigner–Ville distribution has optimal concentration in the time-frequency plane for linear frequency modulated signals. However, for nonlinear frequency modulated signals, optimal concentration is not obtained, and smeared spectral representations result. The polynomial Wigner–Ville distribution can be designed to cope with this problem. | https://en.wikipedia.org/wiki/Polynomial_Wigner–Ville_distribution |
In coding theory , a polynomial code is a type of linear code whose set of valid code words consists of those polynomials (usually of some fixed length) that are divisible by a given fixed polynomial (of shorter length, called the generator polynomial ).
Fix a finite field G F ( q ) {\displaystyle GF(q)} , whose elements we call symbols . For the purposes of constructing polynomial codes, we identify a string of n {\displaystyle n} symbols a n − 1 … a 0 {\displaystyle a_{n-1}\ldots a_{0}} with the polynomial a n − 1 x n − 1 + ⋯ + a 1 x + a 0 . {\displaystyle a_{n-1}x^{n-1}+\cdots +a_{1}x+a_{0}.}
Fix integers m ≤ n {\displaystyle m\leq n} and let g ( x ) {\displaystyle g(x)} be some fixed polynomial of degree m {\displaystyle m} , called the generator polynomial . The polynomial code generated by g ( x ) {\displaystyle g(x)} is the code whose code words are precisely the polynomials of degree less than n {\displaystyle n} that are divisible (without remainder) by g ( x ) {\displaystyle g(x)} .
Consider the polynomial code over G F ( 2 ) = { 0 , 1 } {\displaystyle GF(2)=\{0,1\}} with n = 5 {\displaystyle n=5} , m = 2 {\displaystyle m=2} , and generator polynomial g ( x ) = x 2 + x + 1 {\displaystyle g(x)=x^{2}+x+1} . This code consists of the following code words:
Or written explicitly:
Since the polynomial code is defined over the Binary Galois Field G F ( 2 ) = { 0 , 1 } {\displaystyle GF(2)=\{0,1\}} , polynomial elements are represented as a modulo -2 sum and the final polynomials are:
Equivalently, expressed as strings of binary digits, the codewords are: 00000, 00111, 01001, 01110, 10010, 10101, 11011, 11100.
This, as every polynomial code, is indeed a linear code , i.e., linear combinations of code words are again code words. In a case like this where the field is GF(2), linear combinations are found by taking the XOR of the codewords expressed in binary form (e.g. 00111 XOR 10010 = 10101).
In a polynomial code over G F ( q ) {\displaystyle GF(q)} with code length n {\displaystyle n} and generator polynomial g ( x ) {\displaystyle g(x)} of degree m {\displaystyle m} ,
there will be exactly q n − m {\displaystyle q^{n-m}} code words. Indeed, by definition, p ( x ) {\displaystyle p(x)} is a code word if and only if it is of the form p ( x ) = g ( x ) ⋅ q ( x ) {\displaystyle p(x)=g(x)\cdot q(x)} , where q ( x ) {\displaystyle q(x)} (the quotient ) is of degree less than n − m {\displaystyle n-m} . Since there are q n − m {\displaystyle q^{n-m}} such quotients available, there are the same number of possible code words.
Plain (unencoded) data words should therefore be of length n − m {\displaystyle n-m}
Some authors, such as (Lidl & Pilz, 1999), only discuss the mapping q ( x ) ↦ g ( x ) ⋅ q ( x ) {\displaystyle q(x)\mapsto g(x)\cdot q(x)} as the assignment from data words to code words. However, this has the disadvantage that the data word does not appear as part of the code word.
Instead, the following method is often used to create a systematic code : given a data word d ( x ) {\displaystyle d(x)} of length n − m {\displaystyle n-m} , first multiply d ( x ) {\displaystyle d(x)} by x m {\displaystyle x^{m}} , which has the effect of shifting d ( x ) {\displaystyle d(x)} by m {\displaystyle m} places to the left. In general, x m d ( x ) {\displaystyle x^{m}d(x)} will not be divisible by g ( x ) {\displaystyle g(x)} , i.e., it will not be a valid code word. However, there is a unique code word that can be obtained by adjusting the rightmost m {\displaystyle m} symbols of x m d ( x ) {\displaystyle x^{m}d(x)} .
To calculate it, compute the remainder of dividing x m d ( x ) {\displaystyle x^{m}d(x)} by g ( x ) {\displaystyle g(x)} :
where r ( x ) {\displaystyle r(x)} is of degree less than m {\displaystyle m} . The code word corresponding to the data word d ( x ) {\displaystyle d(x)} is then defined to be
Note the following properties:
For the above code with n = 5 {\displaystyle n=5} , m = 2 {\displaystyle m=2} , and generator polynomial g ( x ) = x 2 + x + 1 {\displaystyle g(x)=x^{2}+x+1} , we obtain the following assignment from data words to codewords:
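The assignment can also be generated with a small Python sketch of the systematic encoding procedure for this running example ( n = 5, m = 2, g(x) = x² + x + 1 ). Polynomials over GF(2) are packed into integer bitmasks (bit i holds the coefficient of x^i); the function names are illustrative only.

def gf2_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division (polynomials packed as integers)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def systematic_encode(data, g, m):
    """Shift the data word by x^m and append the remainder as check symbols."""
    shifted = data << m                    # x^m * d(x)
    return shifted ^ gf2_mod(shifted, g)   # subtraction is XOR over GF(2)

g, m = 0b111, 2                            # g(x) = x^2 + x + 1
for d in range(8):                         # all data words of length n - m = 3
    c = systematic_encode(d, g, m)
    assert gf2_mod(c, g) == 0              # every codeword is divisible by g(x)
    print(f"{d:03b} -> {c:05b}")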
An erroneous message can be detected in a straightforward way through polynomial division by the generator polynomial resulting in a non-zero remainder.
Assuming that the code word is free of errors, a systematic code can be decoded simply by stripping away the m {\displaystyle m} checksum digits.
If there are errors, then error correction should be performed before decoding. Efficient decoding algorithms exist for specific polynomial codes, such as BCH codes .
As for all digital codes, the error detection and correction abilities of polynomial codes are determined by the minimum Hamming distance of the code. Since polynomial codes are linear codes, the minimum Hamming distance is equal to the minimum weight of any non-zero codeword. In the example above, the minimum Hamming distance is 2, since 01001 is a codeword, and there is no nonzero codeword with only one bit set.
More specific properties of a polynomial code often depend on particular algebraic properties of its generator polynomial. Here are some examples of such properties:
The algebraic nature of polynomial codes, with cleverly chosen generator polynomials, can also often be exploited to find efficient error correction algorithms. This is the case for BCH codes . | https://en.wikipedia.org/wiki/Polynomial_code |
In algebra , the ring of polynomial differential forms on the standard n -simplex is the differential graded algebra : [ 1 ] Ω poly ∗ ( [ n ] ) = Q [ t 0 , … , t n , d t 0 , … , d t n ] / ( ∑ t i − 1 , ∑ d t i ) . {\displaystyle \Omega _{\text{poly}}^{*}([n])=\mathbb {Q} [t_{0},\dots ,t_{n},dt_{0},\dots ,dt_{n}]/\left(\textstyle \sum t_{i}-1,\,\sum dt_{i}\right).}
Varying n , it determines the simplicial commutative dg algebra :
(each u : [ n ] → [ m ] {\displaystyle u:[n]\to [m]} induces the map Ω poly ∗ ( [ m ] ) → Ω poly ∗ ( [ n ] ) , t i ↦ ∑ u ( j ) = i t j {\displaystyle \Omega _{\text{poly}}^{*}([m])\to \Omega _{\text{poly}}^{*}([n]),t_{i}\mapsto \sum _{u(j)=i}t_{j}} ).
| https://en.wikipedia.org/wiki/Polynomial_differential_form |
In mathematics and computer science , polynomial evaluation refers to computation of the value of a polynomial when some values are substituted for its indeterminates. In other words, evaluating the polynomial P ( x 1 , x 2 ) = 2 x 1 x 2 + x 1 3 + 4 {\displaystyle P(x_{1},x_{2})=2x_{1}x_{2}+x_{1}^{3}+4} at x 1 = 2 , x 2 = 3 {\displaystyle x_{1}=2,x_{2}=3} consists of computing P ( 2 , 3 ) = 2 ⋅ 2 ⋅ 3 + 2 3 + 4 = 24. {\displaystyle P(2,3)=2\cdot 2\cdot 3+2^{3}+4=24.} See also Polynomial ring § Polynomial evaluation
For evaluating the univariate polynomial a n x n + a n − 1 x n − 1 + ⋯ + a 0 , {\displaystyle a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{0},} the most naive method would use n {\displaystyle n} multiplications to compute a n x n {\displaystyle a_{n}x^{n}} , use n − 1 {\displaystyle n-1} multiplications to compute a n − 1 x n − 1 {\displaystyle a_{n-1}x^{n-1}} and so on for a total of n ( n + 1 ) 2 {\displaystyle {\tfrac {n(n+1)}{2}}} multiplications and n {\displaystyle n} additions.
Using better methods, such as Horner's rule , this can be reduced to n {\displaystyle n} multiplications and n {\displaystyle n} additions. If some preprocessing is allowed, even more savings are possible.
This problem arises frequently in practice. In computational geometry , polynomials are used to compute function approximations using Taylor polynomials . In cryptography and hash tables , polynomials are used to compute k -independent hashing .
In the former case, polynomials are evaluated using floating-point arithmetic , which is not exact. Thus different schemes for the evaluation will, in general, give slightly different answers. In the latter case, the polynomials are usually evaluated in a finite field , in which case the answers are always exact.
Horner's method evaluates a polynomial using repeated bracketing: a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ + a n x n = a 0 + x ( a 1 + x ( a 2 + x ( a 3 + ⋯ + x ( a n − 1 + x a n ) ⋯ ) ) ) . {\displaystyle {\begin{aligned}a_{0}+&a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n}\\&=a_{0}+x{\bigg (}a_{1}+x{\Big (}a_{2}+x{\big (}a_{3}+\cdots +x(a_{n-1}+x\,a_{n})\cdots {\big )}{\Big )}{\bigg )}.\end{aligned}}} This method reduces the number of multiplications and additions to just n {\displaystyle n} of each.
Horner's method is so common that a computer instruction " multiply–accumulate operation " has been added to many computer processors, which allows doing the addition and multiplication operations in one combined step.
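A minimal Python sketch of Horner's rule (the coefficient ordering and function name are illustrative choices):

def horner(coeffs, x):
    """Evaluate a_n*x^n + ... + a_1*x + a_0; coeffs are listed from a_n down to a_0."""
    acc = 0
    for a in coeffs:
        acc = acc * x + a      # one multiply and one add per coefficient
    return acc

# x^3 + 3x + 2 at x = 2: coefficients (a_3, a_2, a_1, a_0) = (1, 0, 3, 2)
print(horner([1, 0, 3, 2], 2))   # 16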
If the polynomial is multivariate, Horner's rule can be applied recursively over some ordering of the variables.
E.g.
can be written as
An efficient version of this approach was described by Carnicer and Gasca. [ 1 ]
While it's not possible to do less computation than Horner's rule (without preprocessing), on modern computers the order of evaluation can matter a lot for the computational efficiency.
A method known as Estrin's scheme computes a (single-variate) polynomial in a tree-like pattern:
P ( x ) = ( a 0 + a 1 x ) + ( a 2 + a 3 x ) x 2 + ( ( a 4 + a 5 x ) + ( a 6 + a 7 x ) x 2 ) x 4 . {\displaystyle {\begin{aligned}P(x)=(a_{0}+a_{1}x)+(a_{2}+a_{3}x)x^{2}+((a_{4}+a_{5}x)+(a_{6}+a_{7}x)x^{2})x^{4}.\end{aligned}}}
Combined with exponentiation by squaring , this allows parallelizing the computation.
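A short Python sketch of Estrin's scheme following the pairing shown above (padding an odd-length coefficient list with a zero is an implementation detail of the sketch):

def estrin(coeffs, x):
    """Estrin's scheme: pair up coefficients, then combine pairs with squared powers of x.
    coeffs are a_0, a_1, ..., a_n (lowest power first)."""
    terms = list(coeffs)
    power = x
    while len(terms) > 1:
        if len(terms) % 2:
            terms.append(0)                       # pad to an even length
        terms = [terms[i] + terms[i + 1] * power  # (t0 + t1*p), (t2 + t3*p), ...
                 for i in range(0, len(terms), 2)]
        power = power * power
    return terms[0]

print(estrin([4, 0, 0, 1], 2))   # 4 + x^3 at x = 2 -> 12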
Arbitrary polynomials can be evaluated with fewer operations than Horner's rule requires if we first "preprocess" the coefficients a n , … , a 0 {\displaystyle a_{n},\dots ,a_{0}} .
An example was first given by Motzkin [ 2 ] who noted that
can be written as
where the values β 0 , … , β 3 {\displaystyle \beta _{0},\dots ,\beta _{3}} are computed in advanced, based on a 0 , … , a 3 {\displaystyle a_{0},\dots ,a_{3}} .
Motzkin's method uses just 3 multiplications compared to Horner's 4.
The values for each β i {\displaystyle \beta _{i}} can be easily computed by expanding P ( x ) {\displaystyle P(x)} and equating the coefficients:
To compute the Taylor expansion exp ( x ) ≈ 1 + x + x 2 / 2 + x 3 / 6 + x 4 / 24 {\displaystyle \exp(x)\approx 1+x+x^{2}/2+x^{3}/6+x^{4}/24} , we can upscale by a factor 24, apply the above steps, and scale back down.
That gives us the three multiplication computation
Improving over the equivalent Horner form (that is P ( x ) = 1 + x ( 1 + x ( 1 / 2 + x ( 1 / 6 + x / 24 ) ) ) {\displaystyle P(x)=1+x(1+x(1/2+x(1/6+x/24)))} ) by 1 multiplication.
Some general methods include the Knuth–Eve algorithm and the Rabin–Winograd algorithm . [ 3 ]
Evaluation of a degree-n polynomial P ( x ) {\displaystyle P(x)} at multiple points x 1 , … , x m {\displaystyle x_{1},\dots ,x_{m}} can be done with m n {\displaystyle mn} multiplications by using Horner's method m {\displaystyle m} times. Using the above preprocessing approach, this can be reduced by a factor of two; that is, to m n / 2 {\displaystyle mn/2} multiplications.
However, it is possible to do better and reduce the time requirement to just O ( ( n + m ) log 2 ( n + m ) ) {\displaystyle O{\big (}(n+m)\log ^{2}(n+m){\big )}} . [ 4 ] The idea is to define two polynomials that are zero in respectively the first and second half of the points: m 0 ( x ) = ( x − x 1 ) ⋯ ( x − x n / 2 ) {\displaystyle m_{0}(x)=(x-x_{1})\cdots (x-x_{n/2})} and m 1 ( x ) = ( x − x n / 2 + 1 ) ⋯ ( x − x n ) {\displaystyle m_{1}(x)=(x-x_{n/2+1})\cdots (x-x_{n})} .
We then compute R 0 = P mod m 0 {\displaystyle R_{0}=P{\bmod {m}}_{0}} and R 1 = P mod m 1 {\displaystyle R_{1}=P{\bmod {m}}_{1}} using the Polynomial remainder theorem , which can be done in O ( n log n ) {\displaystyle O(n\log n)} time using a fast Fourier transform .
This means P ( x ) = Q ( x ) m 0 ( x ) + R 0 ( x ) {\displaystyle P(x)=Q(x)m_{0}(x)+R_{0}(x)} and P ( x ) = Q ( x ) m 1 ( x ) + R 1 ( x ) {\displaystyle P(x)=Q(x)m_{1}(x)+R_{1}(x)} by construction, where R 0 {\displaystyle R_{0}} and R 1 {\displaystyle R_{1}} are polynomials of degree at most n / 2 {\displaystyle n/2} .
Because of how m 0 {\displaystyle m_{0}} and m 1 {\displaystyle m_{1}} were defined, we have
Thus to compute P {\displaystyle P} on all n {\displaystyle n} of the x i {\displaystyle x_{i}} , it suffices to compute the smaller polynomials R 0 {\displaystyle R_{0}} and R 1 {\displaystyle R_{1}} on each half of the points.
This gives us a divide-and-conquer algorithm with T ( n ) = 2 T ( n / 2 ) + n log n {\displaystyle T(n)=2T(n/2)+n\log n} , which implies T ( n ) = O ( n ( log n ) 2 ) {\displaystyle T(n)=O(n(\log n)^{2})} by the master theorem .
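The following toy Python sketch illustrates the remainder idea behind this divide-and-conquer scheme. It uses naive (quadratic-time) polynomial division rather than the FFT-based fast remaindering that the stated bound requires, and the function names and the small base case are illustrative choices.

def poly_mod(p, m):
    """Remainder of p modulo m; coefficient lists with the highest power first."""
    p = list(p)
    while len(p) >= len(m) and any(p):
        factor = p[0] / m[0]
        for i in range(len(m)):
            p[i] -= factor * m[i]
        p.pop(0)                         # the leading coefficient is now zero
    return p or [0]

def poly_from_roots(roots):
    """Coefficients (highest power first) of (x - r_1)(x - r_2)...(x - r_k)."""
    c = [1.0]
    for r in roots:
        c = [c[0]] + [c[i] - r * c[i - 1] for i in range(1, len(c))] + [-r * c[-1]]
    return c

def horner(p, x):
    acc = 0.0
    for a in p:
        acc = acc * x + a
    return acc

def multipoint_eval(p, points):
    """Evaluate p at every point by recursively reducing p modulo sub-products."""
    if len(points) <= 4:                 # small base case: plain Horner
        return [horner(p, x) for x in points]
    half = len(points) // 2
    left, right = points[:half], points[half:]
    m0, m1 = poly_from_roots(left), poly_from_roots(right)
    return (multipoint_eval(poly_mod(p, m0), left) +
            multipoint_eval(poly_mod(p, m1), right))

# x^3 - 2x + 1 evaluated at 0, 1, ..., 7 (illustrative data)
print(multipoint_eval([1, 0, -2, 1], list(range(8))))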
In the case where the points in which we wish to evaluate the polynomials have some structure, simpler methods exist.
For example, Knuth [ 5 ] section 4.6.4 gives a method for tabulating polynomial values of the type
In the case where x 1 , … , x m {\displaystyle x_{1},\dots ,x_{m}} are not known in advance, Kedlaya and Umans [ 6 ] gave a data structure for evaluating polynomials over a finite field F q {\displaystyle F_{q}} in time ( log n ) O ( 1 ) ( log 2 q ) 1 + o ( 1 ) {\displaystyle (\log n)^{O(1)}(\log _{2}q)^{1+o(1)}} per evaluation after some initial preprocessing. This was shown by Larsen [ 7 ] to be essentially optimal.
The idea is to transform P ( x ) {\displaystyle P(x)} of degree n {\displaystyle n} into a multivariate polynomial f ( x 1 , x 2 , … , x m ) {\displaystyle f(x_{1},x_{2},\dots ,x_{m})} , such that P ( x ) = f ( x , x d , x d 2 , … , x d m ) {\displaystyle P(x)=f(x,x^{d},x^{d^{2}},\dots ,x^{d^{m}})} and the individual degrees of f {\displaystyle f} is at most d {\displaystyle d} .
Since this is over mod q {\displaystyle {\bmod {q}}} , the largest value f {\displaystyle f} can take (over Z {\displaystyle \mathbb {Z} } ) is M = d m ( q − 1 ) d m {\displaystyle M=d^{m}(q-1)^{dm}} .
Using the Chinese remainder theorem , it suffices to evaluate f {\displaystyle f} modulo different primes p 1 , … , p ℓ {\displaystyle p_{1},\dots ,p_{\ell }} with a product at least M {\displaystyle M} .
Each prime can be taken to be roughly log M = O ( d m log q ) {\displaystyle \log M=O(dm\log q)} , and the number of primes needed, ℓ {\displaystyle \ell } , is roughly the same.
Doing this process recursively, we can get the primes as small as log log q {\displaystyle \log \log q} .
That means we can compute and store f {\displaystyle f} on all the possible values in T = ( log log q ) m {\displaystyle T=(\log \log q)^{m}} time and space.
If we take d = log q {\displaystyle d=\log q} , we get m = log n log log q {\displaystyle m={\tfrac {\log n}{\log \log q}}} , so the time/space requirement is just n log log q log log log q . {\displaystyle n^{\frac {\log \log q}{\log \log \log q}}.}
Kedlaya and Umans further show how to combine this preprocessing with fast (FFT) multipoint evaluation.
This allows optimal algorithms for many important algebraic problems, such as polynomial modular composition .
While general polynomials require Ω ( n ) {\displaystyle \Omega (n)} operations to evaluate, some polynomials can be computed much faster.
For example, the polynomial P ( x ) = x 2 + 2 x + 1 {\displaystyle P(x)=x^{2}+2x+1} can be computed using just one multiplication and one addition since P ( x ) = ( x + 1 ) 2 {\displaystyle P(x)=(x+1)^{2}}
A particularly interesting type of polynomial is powers like x n {\displaystyle x^{n}} .
Such polynomials can always be computed in O ( log n ) {\displaystyle O(\log n)} operations.
Suppose, for example, that we need to compute x 16 {\displaystyle x^{16}} ; we could simply start with x {\displaystyle x} and multiply by x {\displaystyle x} to get x 2 {\displaystyle x^{2}} .
We can then multiply that by itself to get x 4 {\displaystyle x^{4}} and so on to get x 8 {\displaystyle x^{8}} and x 16 {\displaystyle x^{16}} in just four multiplications.
Other powers like x 5 {\displaystyle x^{5}} can similarly be computed efficiently by first computing x 4 {\displaystyle x^{4}} by 2 multiplications and then multiplying by x {\displaystyle x} .
The most efficient way to compute a given power x n {\displaystyle x^{n}} is provided by addition-chain exponentiation . However, this requires designing a specific algorithm for each exponent, and the computation needed for designing these algorithms are difficult ( NP-complete [ 8 ] ), so exponentiation by squaring is generally preferred for effective computations.
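A small Python sketch of exponentiation by squaring (the function name is an illustrative choice):

def power(x, n):
    """Compute x**n with O(log n) multiplications by repeated squaring."""
    result = 1
    while n > 0:
        if n & 1:            # current binary digit of the exponent is 1
            result *= x
        x *= x               # square
        n >>= 1
    return result

print(power(3, 16))          # 43046721, using far fewer than 15 multiplications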
Often polynomials show up in a different form than the well known a n x n + ⋯ + a 1 x + a 0 {\displaystyle a_{n}x^{n}+\dots +a_{1}x+a_{0}} .
For polynomials in Chebyshev form we can use Clenshaw algorithm .
For polynomials in Bézier form we can use De Casteljau's algorithm ,
and for B-splines there is De Boor's algorithm .
The fact that some polynomials can be computed significantly faster than "general polynomials" suggests the question: Can we give an example of a simple polynomial that cannot be computed in time much smaller than its degree? Volker Strassen has shown [ 9 ] that the polynomial
cannot be evaluated with less than 1 2 n − 2 {\displaystyle {\tfrac {1}{2}}n-2} multiplications and n − 4 {\displaystyle n-4} additions.
At least this bound holds if only operations of those types are allowed, giving rise to a so-called "polynomial chain of length < n 2 / log n {\displaystyle <n^{2}/\log n} ".
The polynomial given by Strassen has very large coefficients, but by probabilistic methods, one can show there must exist even polynomials with coefficients just 0's and 1's such that the evaluation requires at least Ω ( n / log n ) {\displaystyle \Omega (n/\log n)} multiplications. [ 10 ]
For other simple polynomials, the complexity is unknown.
The polynomial ( x + 1 ) ( x + 2 ) ⋯ ( x + n ) {\displaystyle (x+1)(x+2)\cdots (x+n)} is conjectured to not be computable in time ( log n ) c {\displaystyle (\log n)^{c}} for any c {\displaystyle c} .
This is supported by the fact that, if it can be computed fast, then integer factorization can be computed in polynomial time, breaking the RSA cryptosystem . [ 11 ]
Sometimes the computational cost of scalar multiplications (like a x {\displaystyle ax} ) is less than the computational cost of "non scalar" multiplications (like x 2 {\displaystyle x^{2}} ).
The typical example of this is matrices.
If M {\displaystyle M} is an m × m {\displaystyle m\times m} matrix, a scalar multiplication a M {\displaystyle aM} takes about m 2 {\displaystyle m^{2}} arithmetic operations, while computing M 2 {\displaystyle M^{2}} takes about m 3 {\displaystyle m^{3}} (or m 2.3 {\displaystyle m^{2.3}} using fast matrix multiplication ).
Matrix polynomials are important for example for computing the Matrix Exponential .
Paterson and Stockmeyer [ 12 ] showed how to compute a degree n {\displaystyle n} polynomial using only O ( n ) {\displaystyle O({\sqrt {n}})} non scalar multiplications and O ( n ) {\displaystyle O(n)} scalar multiplications.
Thus a matrix polynomial of degree n can be evaluated in O ( m 2.3 n + m 2 n ) {\displaystyle O(m^{2.3}{\sqrt {n}}+m^{2}n)} time. If m = n {\displaystyle m=n} this is O ( m 3 ) {\displaystyle O(m^{3})} , as fast as one matrix multiplication with the standard algorithm.
This method works as follows: For a polynomial
let k be the least integer not smaller than n . {\displaystyle {\sqrt {n}}.} The powers M , M 2 , … , M k {\displaystyle M,M^{2},\dots ,M^{k}} are computed with k {\displaystyle k} matrix multiplications, and M 2 k , M 3 k , … , M k 2 − k {\displaystyle M^{2k},M^{3k},\dots ,M^{k^{2}-k}} are then computed by repeated multiplication by M k . {\displaystyle M^{k}.} Now,
where a i = 0 {\displaystyle a_{i}=0} for i ≥ n .
This requires just k {\displaystyle k} more non-scalar multiplications.
We can write this succinctly using the Kronecker product :
The direct application of this method uses 2 n {\displaystyle 2{\sqrt {n}}} non-scalar multiplications, but combining it with Evaluation with preprocessing , Paterson and Stockmeyer show you can reduce this to 2 n {\displaystyle {\sqrt {2n}}} .
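A NumPy sketch of the Paterson–Stockmeyer scheme described above (the function name is illustrative, and the final check against naive evaluation uses arbitrary illustration data):

import numpy as np

def matrix_poly(coeffs, M):
    """Paterson-Stockmeyer evaluation of sum_i coeffs[i] * M**i.

    Uses roughly 2*sqrt(n) matrix-matrix products instead of n.
    """
    n = len(coeffs)
    k = int(np.ceil(np.sqrt(n)))
    I = np.eye(M.shape[0])

    # "Baby steps": I, M, M^2, ..., M^k  (k matrix multiplications)
    powers = [I]
    for _ in range(k):
        powers.append(powers[-1] @ M)
    Mk = powers[k]

    # Group coefficients: P(M) = sum_j ( sum_i c_{jk+i} M^i ) (M^k)^j,
    # evaluated by Horner's rule in M^k (one product per block).
    result = np.zeros_like(I)
    for j in reversed(range(int(np.ceil(n / k)))):
        block = sum(coeffs[j * k + i] * powers[i]
                    for i in range(min(k, n - j * k)))
        result = result @ Mk + block
    return result

# Sanity check against naive evaluation on arbitrary small data.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) / 4
coeffs = [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625]
naive = sum(c * np.linalg.matrix_power(M, i) for i, c in enumerate(coeffs))
assert np.allclose(matrix_poly(coeffs, M), naive)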
Methods based on matrix polynomial multiplications and additions have been proposed allowing to save nonscalar matrix multiplications with respect to the Paterson-Stockmeyer method. [ 13 ] | https://en.wikipedia.org/wiki/Polynomial_evaluation |
In mathematics , an expansion of a product of sums expresses it as a sum of products by using the fact that multiplication distributes over addition. Expansion of a polynomial expression can be obtained by repeatedly replacing subexpressions that multiply two other subexpressions, at least one of which is an addition, by the equivalent sum of products, continuing until the expression becomes a sum of (repeated) products. During the expansion, simplifications such as grouping of like terms or cancellations of terms may also be applied. Instead of multiplications, the expansion steps could also involve replacing powers of a sum of terms by the equivalent expression obtained from the binomial formula ; this is a shortened form of what would happen if the power were treated as a repeated multiplication, and expanded repeatedly. It is customary to reintroduce powers in the final result when terms involve products of identical symbols.
Simple examples of polynomial expansions are the well known rules
when used from left to right. A more general single-step expansion will introduce all products of a term of one of the sums being multiplied with a term of the other:
An expansion which involves multiple nested rewrite steps is that of working out a Horner scheme to the (expanded) polynomial it defines, for instance
The opposite process of trying to write an expanded polynomial as a product is called polynomial factorization .
To multiply two factors, each term of the first factor must be multiplied by each term of the other factor. If both factors are binomials , the FOIL rule can be used, which stands for " F irst O uter I nner L ast," referring to the terms that are multiplied together. For example, expanding
yields
When expanding ( x + y ) n {\displaystyle (x+y)^{n}} , a special relationship exists between the coefficients of the terms when written in order of descending powers of x and ascending powers of y . The coefficients will be the numbers in the ( n + 1)th row of Pascal's triangle, that is, the row indexed n when rows and columns are numbered starting from 0. [ citation needed ]
For example, when expanding ( x + y ) 6 {\displaystyle (x+y)^{6}} , the following is obtained: x 6 + 6 x 5 y + 15 x 4 y 2 + 20 x 3 y 3 + 15 x 2 y 4 + 6 x y 5 + y 6 . {\displaystyle x^{6}+6x^{5}y+15x^{4}y^{2}+20x^{3}y^{3}+15x^{2}y^{4}+6xy^{5}+y^{6}.}
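This expansion and its Pascal's-triangle coefficients can be checked with a short Python snippet (using SymPy for the symbolic expansion):

from math import comb

from sympy import expand, symbols

x, y = symbols('x y')
print(expand((x + y)**6))
# -> x**6 + 6*x**5*y + 15*x**4*y**2 + ... + y**6 (up to term ordering)

# The same coefficients straight from the binomial formula (row n = 6 of Pascal's triangle):
print([comb(6, k) for k in range(7)])    # [1, 6, 15, 20, 15, 6, 1]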
| https://en.wikipedia.org/wiki/Polynomial_expansion |
In algebra, the greatest common divisor (frequently abbreviated as GCD) of two polynomials is a polynomial, of the highest possible degree, that is a factor of both the two original polynomials. This concept is analogous to the greatest common divisor of two integers.
In the important case of univariate polynomials over a field the polynomial GCD may be computed, like for the integer GCD, by the Euclidean algorithm using long division . The polynomial GCD is defined only up to the multiplication by an invertible constant.
The similarity between the integer GCD and the polynomial GCD allows extending to univariate polynomials all the properties that may be deduced from the Euclidean algorithm and Euclidean division . Moreover, the polynomial GCD has specific properties that make it a fundamental notion in various areas of algebra. Typically, the roots of the GCD of two polynomials are the common roots of the two polynomials, and this provides information on the roots without computing them. For example, the multiple roots of a polynomial are the roots of the GCD of the polynomial and its derivative, and further GCD computations allow computing the square-free factorization of the polynomial, which provides polynomials whose roots are the roots of a given multiplicity of the original polynomial.
The greatest common divisor may be defined and exists, more generally, for multivariate polynomials over a field or the ring of integers, and also over a unique factorization domain . There exist algorithms to compute them as soon as one has a GCD algorithm in the ring of coefficients. These algorithms proceed by a recursion on the number of variables to reduce the problem to a variant of the Euclidean algorithm. They are a fundamental tool in computer algebra , because computer algebra systems use them systematically to simplify fractions. Conversely, most of the modern theory of polynomial GCD has been developed to satisfy the need for efficiency of computer algebra systems.
Let p and q be polynomials with coefficients in an integral domain F , typically a field or the integers.
A greatest common divisor of p and q is a polynomial d that divides p and q , and such that every common divisor of p and q also divides d . Every pair of polynomials (not both zero) has a GCD if and only if F is a unique factorization domain .
If F is a field and p and q are not both zero, a polynomial d is a greatest common divisor if and only if it divides both p and q , and it has the greatest degree among the polynomials having this property. If p = q = 0 , the GCD is 0. However, some authors consider that it is not defined in this case.
The greatest common divisor of p and q is usually denoted " gcd( p , q ) ".
The greatest common divisor is not unique: if d is a GCD of p and q , then the polynomial f is another GCD if and only if there is an invertible element u of F such that f = u d {\displaystyle f=ud} and d = u − 1 f . {\displaystyle d=u^{-1}f.} In other words, the GCD is unique up to the multiplication by an invertible constant.
In the case of the integers, this indetermination has been settled by choosing, as the GCD, the unique one which is positive (there is another one, which is its opposite). With this convention, the GCD of two integers is also the greatest (for the usual ordering) common divisor. However, since there is no natural total order for polynomials over an integral domain, one cannot proceed in the same way here. For univariate polynomials over a field, one can additionally require the GCD to be monic (that is to have 1 as its coefficient of the highest degree), but in more general cases there is no general convention. Therefore, equalities like d = gcd( p , q ) or gcd( p , q ) = gcd( r , s ) are common abuses of notation which should be read " d is a GCD of p and q " and " p and q have the same set of GCDs as r and s ". In particular, gcd( p , q ) = 1 means that the invertible constants are the only common divisors. In this case, by analogy with the integer case, one says that p and q are coprime polynomials .
There are several ways to find the greatest common divisor of two polynomials. Two of them, described below, are factoring and the Euclidean algorithm.
To find the GCD of two polynomials using factoring, simply factor the two polynomials completely. Then, take the product of all common factors. At this stage, we do not necessarily have a monic polynomial, so finally multiply this by a constant to make it a monic polynomial. This will be the GCD of the two polynomials as it includes all common divisors and is monic.
Example one: Find the GCD of x 2 + 7 x + 6 and x 2 − 5 x − 6 .
Factoring gives x 2 + 7 x + 6 = ( x + 1 ) ( x + 6 ) and x 2 − 5 x − 6 = ( x + 1 ) ( x − 6 ) , so the only common factor is x + 1 .
Thus, their GCD is x + 1 .
Factoring polynomials can be difficult, especially if the polynomials have a large degree. The Euclidean algorithm is a method that works for any pair of polynomials. It makes repeated use of Euclidean division . When using this algorithm on two numbers, the size of the numbers decreases at each stage. With polynomials, the degree of the polynomials decreases at each stage. The last nonzero remainder, made monic if necessary, is the GCD of the two polynomials.
More specifically, for finding the gcd of two polynomials a ( x ) and b ( x ) , one can suppose b ≠ 0 (otherwise, the GCD is a ( x ) ), and deg ( b ( x ) ) ≤ deg ( a ( x ) ) . {\displaystyle \deg(b(x))\leq \deg(a(x))\,.}
The Euclidean division provides two polynomials q 0 ( x ) , the quotient, and r 0 ( x ) , the remainder, such that a ( x ) = q 0 ( x ) b ( x ) + r 0 ( x ) and deg ( r 0 ( x ) ) < deg ( b ( x ) ) {\displaystyle a(x)=q_{0}(x)b(x)+r_{0}(x)\quad {\text{and}}\quad \deg(r_{0}(x))<\deg(b(x))}
A polynomial g ( x ) divides both a ( x ) and b ( x ) if and only if it divides both b ( x ) and r 0 ( x ) . Thus gcd ( a ( x ) , b ( x ) ) = gcd ( b ( x ) , r 0 ( x ) ) . {\displaystyle \gcd(a(x),b(x))=\gcd(b(x),r_{0}(x)).} Setting a 1 ( x ) = b ( x ) , b 1 ( x ) = r 0 ( x ) , {\displaystyle a_{1}(x)=b(x),b_{1}(x)=r_{0}(x),} one can repeat the Euclidean division to get new polynomials q 1 ( x ), r 1 ( x ), a 2 ( x ), b 2 ( x ) and so on. At each stage we have deg ( a k + 1 ) + deg ( b k + 1 ) < deg ( a k ) + deg ( b k ) , {\displaystyle \deg(a_{k+1})+\deg(b_{k+1})<\deg(a_{k})+\deg(b_{k}),} so the sequence will eventually reach a point at which b N ( x ) = 0 {\displaystyle b_{N}(x)=0} and one has got the GCD: gcd ( a , b ) = gcd ( a 1 , b 1 ) = ⋯ = gcd ( a N , 0 ) = a N . {\displaystyle \gcd(a,b)=\gcd(a_{1},b_{1})=\cdots =\gcd(a_{N},0)=a_{N}.}
Example: finding the GCD of x 2 + 7 x + 6 and x 2 − 5 x − 6 :
Since 12 x + 12 is the last nonzero remainder, it is a GCD of the original polynomials, and the monic GCD is x + 1 .
In this example, it is not difficult to avoid introducing denominators by factoring out 12 before the second step. This can always be done by using pseudo-remainder sequences , but, without care, this may introduce very large integers during the computation. Therefore, for computer computation, other algorithms are used, which are described below.
This method works only if one can test the equality to zero of the coefficients that occur during the computation. So, in practice, the coefficients must be integers , rational numbers , elements of a finite field , or must belong to some finitely generated field extension of one of the preceding fields. If the coefficients are floating-point numbers that represent real numbers that are known only approximately, then one must know the degree of the GCD to have a well-defined computation result (that is, a numerically stable result); in such cases other techniques may be used, usually based on singular value decomposition .
The case of univariate polynomials over a field is especially important for several reasons. Firstly, it is the most elementary case and therefore appears in most first courses in algebra. Secondly, it is very similar to the case of the integers, and this analogy is the source of the notion of Euclidean domain . A third reason is that the theory and the algorithms for the multivariate case and for coefficients in a unique factorization domain are strongly based on this particular case. Last but not least, polynomial GCD algorithms and derived algorithms allow one to get useful information on the roots of a polynomial, without computing them.
Euclidean division of polynomials, which is used in Euclid's algorithm for computing GCDs, is very similar to Euclidean division of integers. Its existence is based on the following theorem: Given two univariate polynomials a and b ≠ 0 defined over a field, there exist two polynomials q (the quotient ) and r (the remainder ) which satisfy a = b q + r {\displaystyle a=bq+r} and deg ( r ) < deg ( b ) , {\displaystyle \deg(r)<\deg(b),} where " deg(...) " denotes the degree and the degree of the zero polynomial is defined as being negative. Moreover, q and r are uniquely defined by these relations.
The difference from Euclidean division of the integers is that, for the integers, the degree is replaced by the absolute value, and that to have uniqueness one has to suppose that r is non-negative. The rings for which such a theorem exists are called Euclidean domains .
Like for the integers, the Euclidean division of the polynomials may be computed by the long division algorithm. This algorithm is usually presented for paper-and-pencil computation, but it works well on computers when formalized as follows (note that the names of the variables correspond exactly to the regions of the paper sheet in a pencil-and-paper computation of long division). In the following computation "deg" stands for the degree of its argument (with the convention deg(0) < 0 ), and "lc" stands for the leading coefficient, the coefficient of the highest degree of the variable.
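The formalized pseudocode referred to here does not appear in this copy of the article. The following is a minimal Python sketch of the same long-division procedure (the coefficient-list representation, lowest degree first, and the function name are choices made for this illustration), using exact rational arithmetic so that the equality-to-zero tests are reliable:

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Euclidean division of polynomials over the rationals.

    Polynomials are lists of coefficients, lowest degree first,
    e.g. x^2 + 7x + 6 is [6, 7, 1].  Returns (q, r) with a = b*q + r
    and deg(r) < deg(b), mirroring paper-and-pencil long division.
    """
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    if all(c == 0 for c in b):
        raise ZeroDivisionError("division by the zero polynomial")
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    r = a[:]
    while any(c != 0 for c in r) and len(r) >= len(b):
        t = r[-1] / b[-1]                 # next term of the quotient: lc(r) / lc(b)
        k = len(r) - len(b)               # power of x carried by that term
        q[k] += t
        for i, c in enumerate(b):         # subtract t * x^k * b from the remainder
            r[k + i] -= t * c
        while len(r) > 1 and r[-1] == 0:  # drop the cancelled leading term
            r.pop()
    return q, r

# Example from the text: divide x^2 + 7x + 6 by x^2 - 5x - 6.
print(poly_divmod([6, 7, 1], [-6, -5, 1]))   # quotient 1, remainder 12x + 12 (printed as Fractions)
```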
The proof of the validity of this algorithm relies on the fact that during the whole "while" loop, we have a = bq + r and deg( r ) is a non-negative integer that decreases at each iteration. Thus the proof of the validity of this algorithm also proves the validity of the Euclidean division.
As for the integers, the Euclidean division allows us to define Euclid's algorithm for computing GCDs.
Starting from two polynomials a and b , Euclid's algorithm consists of recursively replacing the pair ( a , b ) by ( b , rem( a , b )) (where " rem( a , b ) " denotes the remainder of the Euclidean division, computed by the algorithm of the preceding section), until b = 0. The GCD is the last non zero remainder.
Euclid's algorithm may be formalized in the recursive programming style as: gcd ( a , b ) := { a if b = 0 gcd ( b , rem ( a , b ) ) otherwise . {\displaystyle \gcd(a,b):={\begin{cases}a&{\text{if }}b=0\\\gcd(b,\operatorname {rem} (a,b))&{\text{otherwise}}.\end{cases}}}
In the imperative programming style, the same algorithm becomes, giving a name to each intermediate remainder:
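The imperative formulation is likewise not reproduced here; a possible Python rendering (illustrative names; it keeps only the last two remainders instead of naming every r i ) is:

```python
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of the Euclidean division of a by b (coefficient lists, lowest degree first)."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while any(c != 0 for c in a) and len(a) >= len(b):
        t = a[-1] / b[-1]
        k = len(a) - len(b)
        for i, c in enumerate(b):
            a[k + i] -= t * c
        while len(a) > 1 and a[-1] == 0:
            a.pop()
    return a

def poly_gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) by (b, rem(a, b)) until b = 0."""
    r_prev, r = [Fraction(c) for c in a], [Fraction(c) for c in b]
    while any(c != 0 for c in r):
        r_prev, r = r, poly_rem(r_prev, r)
    lead = r_prev[-1]                       # make the last nonzero remainder monic
    return [c / lead for c in r_prev]

# gcd(x^2 + 7x + 6, x^2 - 5x - 6) = x + 1
print(poly_gcd([6, 7, 1], [-6, -5, 1]))     # [Fraction(1, 1), Fraction(1, 1)]
```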
The sequence of the degrees of the r i is strictly decreasing. Thus after, at most, deg( b ) steps, one gets a null remainder, say r k . As ( a , b ) and ( b , rem( a , b )) have the same divisors, the set of the common divisors is not changed by Euclid's algorithm and thus all pairs ( r i , r i +1 ) have the same set of common divisors. The common divisors of a and b are thus the common divisors of r k −1 and 0. Thus r k −1 is a GCD of a and b .
This not only proves that Euclid's algorithm computes GCDs but also proves that GCDs exist.
Bézout's identity is a GCD related theorem, initially proved for the integers, which is valid for every principal ideal domain . In the case of the univariate polynomials over a field, it may be stated as follows.
Bézout's identity for polynomials: if a and b are two polynomials over a field, not both zero, then there exist two polynomials u and v such that a u + b v = gcd ( a , b ) , {\displaystyle au+bv=\gcd(a,b),} and either u = 1, v = 0 , or u = 0, v = 1 , or deg ( u ) < deg ( b ) − deg ( gcd ( a , b ) ) and deg ( v ) < deg ( a ) − deg ( gcd ( a , b ) ) . {\displaystyle \deg(u)<\deg(b)-\deg(\gcd(a,b))\quad {\text{and}}\quad \deg(v)<\deg(a)-\deg(\gcd(a,b)).}
The interest of this result in the case of the polynomials is that there is an efficient algorithm to compute the polynomials u and v . This algorithm differs from Euclid's algorithm by a few more computations done at each iteration of the loop. It is therefore called extended GCD algorithm . Another difference with Euclid's algorithm is that it also uses the quotient, denoted "quo", of the Euclidean division instead of only the remainder. This algorithm works as follows.
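The extended algorithm itself is not shown in this copy of the article. The sketch below is one possible Python rendering (names and the coefficient-list representation, lowest degree first, are choices of this illustration): it carries the Bézout coefficients s i and t i along with the remainders r i , using the quotient of each Euclidean division as described above.

```python
from fractions import Fraction

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def add(p, q):
    out = [Fraction(0)] * max(len(p), len(q))
    for i, c in enumerate(p):
        out[i] += c
    for i, c in enumerate(q):
        out[i] += c
    return trim(out)

def mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return trim(out)

def divmod_poly(a, b):
    """Quotient and remainder of the Euclidean division of a by b."""
    q, r = [Fraction(0)], [Fraction(c) for c in a]
    while any(c != 0 for c in r) and len(r) >= len(b):
        t, k = r[-1] / b[-1], len(r) - len(b)
        term = [Fraction(0)] * k + [t]
        q = add(q, term)
        r = add(r, mul(term, [-c for c in b]))
    return q, r

def extended_gcd(a, b):
    """Return (g, u, v) with a*u + b*v = g, where g is a GCD of a and b."""
    r0, r1 = [Fraction(c) for c in a], [Fraction(c) for c in b]
    s0, s1 = [Fraction(1)], [Fraction(0)]
    t0, t1 = [Fraction(0)], [Fraction(1)]
    while any(c != 0 for c in r1):
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        # s_{i+1} = s_{i-1} - q*s_i  and  t_{i+1} = t_{i-1} - q*t_i, kept in step with r
        s0, s1 = s1, add(s0, mul(q, [-c for c in s1]))
        t0, t1 = t1, add(t0, mul(q, [-c for c in t1]))
    return r0, s0, t0

# a = x^2 + 7x + 6, b = x^2 - 5x - 6; check that a*u + b*v equals the returned GCD.
a, b = [6, 7, 1], [-6, -5, 1]
g, u, v = extended_gcd(a, b)
print(g, u, v)                            # g = 12x + 12, u = 1, v = -1
print(add(mul(a, u), mul(b, v)) == g)     # True
```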
The proof that the algorithm satisfies its output specification relies on the fact that, for every i we have r i = a s i + b t i {\displaystyle r_{i}=as_{i}+bt_{i}} and s i t i + 1 − t i s i + 1 = s i t i − 1 − t i s i − 1 , {\displaystyle s_{i}t_{i+1}-t_{i}s_{i+1}=s_{i}t_{i-1}-t_{i}s_{i-1},} the latter equality implying s i t i + 1 − t i s i + 1 = ( − 1 ) i . {\displaystyle s_{i}t_{i+1}-t_{i}s_{i+1}=(-1)^{i}.} The assertion on the degrees follows from the fact that, at every iteration, the degrees of s i and t i increase at most as the degree of r i decreases.
An interesting feature of this algorithm is that, when the coefficients of Bézout's identity are needed, one gets for free the quotient of the input polynomials by their GCD.
An important application of the extended GCD algorithm is that it allows one to compute division in algebraic field extensions .
Let L be an algebraic extension of a field K , generated by an element whose minimal polynomial f has degree n . The elements of L are usually represented by univariate polynomials over K of degree less than n .
The addition in L is simply the addition of polynomials: a + L b = a + K [ X ] b . {\displaystyle a+_{L}b=a+_{K[X]}b.}
The multiplication in L is the multiplication of polynomials followed by the division by f : a ⋅ L b = rem ( a . K [ X ] b , f ) . {\displaystyle a\cdot _{L}b=\operatorname {rem} (a._{K[X]}b,f).}
The inverse of a nonzero element a of L is the coefficient u in Bézout's identity au + fv = 1 , which may be computed by the extended GCD algorithm (the GCD is 1 because the minimal polynomial f is irreducible). The degree inequality in the specification of the extended GCD algorithm shows that a further division by f is not needed to get deg( u ) < deg( f ).
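As a small worked illustration (not taken from the original text): for K = Q and f = X 2 − 2 , the element a = X + 1 of L = Q [ X ]/( f ) represents 1 + √2. The extended GCD computation gives the Bézout relation ( X + 1 ) ( X − 1 ) − ( X 2 − 2 ) = 1 , {\displaystyle (X+1)(X-1)-(X^{2}-2)=1,} so u = X − 1 , and the inverse of 1 + √2 in L is √2 − 1, with deg( u ) < deg( f ) as guaranteed.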
In the case of univariate polynomials, there is a strong relationship between the greatest common divisors and resultants . More precisely, the resultant of two polynomials P , Q is a polynomial function of the coefficients of P and Q which has the value zero if and only if the GCD of P and Q is not constant.
The subresultants theory is a generalization of this property that allows characterizing generically the GCD of two polynomials, and the resultant is the 0-th subresultant polynomial. [ 1 ]
The i -th subresultant polynomial S i ( P , Q ) of two polynomials P and Q is a polynomial of degree at most i whose coefficients are polynomial functions of the coefficients of P and Q , and the i -th principal subresultant coefficient s i ( P , Q ) is the coefficient of degree i of S i ( P , Q ) . They have the property that the GCD of P and Q has a degree d if and only if s 0 ( P , Q ) = ⋯ = s d − 1 ( P , Q ) = 0 , s d ( P , Q ) ≠ 0. {\displaystyle s_{0}(P,Q)=\cdots =s_{d-1}(P,Q)=0\ ,s_{d}(P,Q)\neq 0.}
In this case, S d ( P , Q ) is a GCD of P and Q and S 0 ( P , Q ) = ⋯ = S d − 1 ( P , Q ) = 0. {\displaystyle S_{0}(P,Q)=\cdots =S_{d-1}(P,Q)=0.}
Every coefficient of the subresultant polynomials is defined as the determinant of a submatrix of the Sylvester matrix of P and Q . This implies that subresultants "specialize" well. More precisely, subresultants are defined for polynomials over any commutative ring R , and have the following property.
Let φ be a ring homomorphism of R into another commutative ring S . It extends to another homomorphism, denoted also φ between the polynomials rings over R and S . Then, if P and Q are univariate polynomials with coefficients in R such that deg ( P ) = deg ( φ ( P ) ) {\displaystyle \deg(P)=\deg(\varphi (P))} and deg ( Q ) = deg ( φ ( Q ) ) , {\displaystyle \deg(Q)=\deg(\varphi (Q)),} then the subresultant polynomials and the principal subresultant coefficients of φ ( P ) and φ ( Q ) are the image by φ of those of P and Q .
The subresultants have two important properties which make them fundamental for the computation on computers of the GCD of two polynomials with integer coefficients.
Firstly, their definition through determinants allows bounding, through Hadamard inequality , the size of the coefficients of the GCD.
Secondly, this bound and the property of good specialization allow computing the GCD of two polynomials with integer coefficients through modular computation and Chinese remainder theorem (see below ).
Let P = p 0 + p 1 X + ⋯ + p m X m , Q = q 0 + q 1 X + ⋯ + q n X n . {\displaystyle P=p_{0}+p_{1}X+\cdots +p_{m}X^{m},\quad Q=q_{0}+q_{1}X+\cdots +q_{n}X^{n}.} be two univariate polynomials with coefficients in a field K . Let us denote by P i {\displaystyle {\mathcal {P}}_{i}} the K vector space of dimension i of polynomials of degree less than i . For non-negative integer i such that i ≤ m and i ≤ n , let φ i : P n − i × P m − i → P m + n − i {\displaystyle \varphi _{i}:{\mathcal {P}}_{n-i}\times {\mathcal {P}}_{m-i}\rightarrow {\mathcal {P}}_{m+n-i}} be the linear map such that φ i ( A , B ) = A P + B Q . {\displaystyle \varphi _{i}(A,B)=AP+BQ.}
The resultant of P and Q is the determinant of the Sylvester matrix , which is the (square) matrix of φ 0 {\displaystyle \varphi _{0}} on the bases of the powers of X . Similarly, the i -subresultant polynomial is defined in term of determinants of submatrices of the matrix of φ i . {\displaystyle \varphi _{i}.}
Let us describe these matrices more precisely:
Let p i = 0 for i < 0 or i > m , and q i = 0 for i < 0 or i > n . The Sylvester matrix is the ( m + n ) × ( m + n ) -matrix such that the coefficient of the i -th row and the j -th column is p m + j − i for j ≤ n and q j − i for j > n : [ note 1 ] S = ( p m 0 ⋯ 0 q n 0 ⋯ 0 p m − 1 p m ⋯ 0 q n − 1 q n ⋯ 0 p m − 2 p m − 1 ⋱ 0 q n − 2 q n − 1 ⋱ 0 ⋮ ⋮ ⋱ p m ⋮ ⋮ ⋱ q n ⋮ ⋮ ⋯ p m − 1 ⋮ ⋮ ⋯ q n − 1 p 0 p 1 ⋯ ⋮ q 0 q 1 ⋯ ⋮ 0 p 0 ⋱ ⋮ 0 q 0 ⋱ ⋮ ⋮ ⋮ ⋱ p 1 ⋮ ⋮ ⋱ q 1 0 0 ⋯ p 0 0 0 ⋯ q 0 ) . {\displaystyle S={\begin{pmatrix}p_{m}&0&\cdots &0&q_{n}&0&\cdots &0\\p_{m-1}&p_{m}&\cdots &0&q_{n-1}&q_{n}&\cdots &0\\p_{m-2}&p_{m-1}&\ddots &0&q_{n-2}&q_{n-1}&\ddots &0\\\vdots &\vdots &\ddots &p_{m}&\vdots &\vdots &\ddots &q_{n}\\\vdots &\vdots &\cdots &p_{m-1}&\vdots &\vdots &\cdots &q_{n-1}\\p_{0}&p_{1}&\cdots &\vdots &q_{0}&q_{1}&\cdots &\vdots \\0&p_{0}&\ddots &\vdots &0&q_{0}&\ddots &\vdots \\\vdots &\vdots &\ddots &p_{1}&\vdots &\vdots &\ddots &q_{1}\\0&0&\cdots &p_{0}&0&0&\cdots &q_{0}\end{pmatrix}}.}
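As an illustration added here (not part of the original article; it assumes NumPy), the following sketch builds the transpose of the Sylvester matrix displayed above, which has the same determinant, and uses that determinant as the resultant, illustrating that the resultant vanishes exactly when the two polynomials have a common root:

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of p and q.

    p and q are coefficient lists with exact degrees, highest degree first,
    e.g. x^2 - 1 is [1, 0, -1].  The matrix built here is the transpose of
    the one displayed above, so it has the same determinant.
    """
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                     # n shifted copies of p's coefficients
        S[i, i:i + m + 1] = p
    for i in range(m):                     # m shifted copies of q's coefficients
        S[n + i, i:i + n + 1] = q
    return S

def resultant(p, q):
    return np.linalg.det(sylvester(p, q))

# x^2 - 1 and x^2 - 3x + 2 share the root x = 1, so the resultant is 0
print(round(resultant([1, 0, -1], [1, -3, 2])))   # 0
# x^2 - 1 and x^2 - 4 have no common root, so the resultant is nonzero
print(round(resultant([1, 0, -1], [1, 0, -4])))   # 9
```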
The matrix T i of φ i {\displaystyle \varphi _{i}} is the ( m + n − i ) × ( m + n − 2 i ) -submatrix of S which is obtained by removing the last i rows of zeros in the submatrix of the columns 1 to n − i and n + 1 to m + n − i of S (that is removing i columns in each block and the i last rows of zeros). The principal subresultant coefficient s i is the determinant of the m + n − 2 i first rows of T i .
Let V i be the ( m + n − 2 i ) × ( m + n − i ) matrix defined as follows. First we add ( i + 1) columns of zeros to the right of the ( m + n − 2 i − 1) × ( m + n − 2 i − 1) identity matrix . Then we border the bottom of the resulting matrix by a row consisting in ( m + n − i − 1) zeros followed by X i , X i −1 , ..., X , 1 : V i = ( 1 0 ⋯ 0 0 0 ⋯ 0 0 1 ⋯ 0 0 0 ⋯ 0 ⋮ ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ 0 0 0 ⋯ 1 0 0 ⋯ 0 0 0 ⋯ 0 X i X i − 1 ⋯ 1 ) . {\displaystyle V_{i}={\begin{pmatrix}1&0&\cdots &0&0&0&\cdots &0\\0&1&\cdots &0&0&0&\cdots &0\\\vdots &\vdots &\ddots &\vdots &\vdots &\ddots &\vdots &0\\0&0&\cdots &1&0&0&\cdots &0\\0&0&\cdots &0&X^{i}&X^{i-1}&\cdots &1\end{pmatrix}}.}
With this notation, the i -th subresultant polynomial is the determinant of the matrix product V i T i . Its coefficient of degree j is the determinant of the square submatrix of T i consisting in its m + n − 2 i − 1 first rows and the ( m + n − i − j ) -th row.
It is not obvious that, as defined, the subresultants have the desired properties. Nevertheless, the proof is rather simple if the properties of linear algebra and those of polynomials are put together.
As defined, the columns of the matrix T i are the vectors of the coefficients of some polynomials belonging to the image of φ i {\displaystyle \varphi _{i}} . The definition of the i -th subresultant polynomial S i shows that the vector of its coefficients is a linear combination of these column vectors, and thus that S i belongs to the image of φ i . {\displaystyle \varphi _{i}.}
If the degree of the GCD is greater than i , then Bézout's identity shows that every non zero polynomial in the image of φ i {\displaystyle \varphi _{i}} has a degree larger than i . This implies that S i = 0 .
If, on the other hand, the degree of the GCD is i , then Bézout's identity again allows proving that the multiples of the GCD that have a degree lower than m + n − i are in the image of φ i {\displaystyle \varphi _{i}} . The vector space of these multiples has the dimension m + n − 2 i and has a base of polynomials of pairwise different degrees, not smaller than i . This implies that the submatrix of the m + n − 2 i first rows of the column echelon form of T i is the identity matrix and thus that s i is not 0. Thus S i is a polynomial in the image of φ i {\displaystyle \varphi _{i}} , which is a multiple of the GCD and has the same degree. It is thus a greatest common divisor.
Most root-finding algorithms behave badly with polynomials that have multiple roots . It is therefore useful to detect and remove them before calling a root-finding algorithm. A GCD computation allows detection of the existence of multiple roots, since the multiple roots of a polynomial are the roots of the GCD of the polynomial and its derivative .
After computing the GCD of the polynomial and its derivative, further GCD computations provide the complete square-free factorization of the polynomial, which is a factorization f = ∏ i = 1 deg ( f ) f i i {\displaystyle f=\prod _{i=1}^{\deg(f)}f_{i}^{i}} where, for each i , the polynomial f i either is 1 if f does not have any root of multiplicity i or is a square-free polynomial (that is a polynomial without multiple root) whose roots are exactly the roots of multiplicity i of f (see Yun's algorithm ).
Thus the square-free factorization reduces root-finding of a polynomial with multiple roots to root-finding of several square-free polynomials of lower degree. The square-free factorization is also the first step in most polynomial factorization algorithms.
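A brief illustration added here (it assumes the SymPy library, which is not mentioned in the original text): the multiple roots of f can be read off from gcd( f , f ′), and sqf_list returns the square-free decomposition:

```python
from sympy import symbols, expand, gcd, diff, sqf_list

x = symbols('x')
# roots: 1 (multiplicity 3), -2 (multiplicity 2), +-i (simple)
f = expand((x - 1)**3 * (x + 2)**2 * (x**2 + 1))

# the multiple roots of f are exactly the roots of gcd(f, f')
print(gcd(f, diff(f, x)))   # x**3 - 3*x + 2, i.e. (x - 1)**2 * (x + 2), up to normalization

# square-free decomposition  f = prod_i f_i^i
print(sqf_list(f))          # (1, [(x**2 + 1, 1), (x + 2, 2), (x - 1, 3)])
```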
The Sturm sequence of a polynomial with real coefficients is the sequence of the remainders provided by a variant of Euclid's algorithm applied to the polynomial and its derivative. For getting the Sturm sequence, one simply replaces the instruction r i + 1 := rem ( r i − 1 , r i ) {\displaystyle r_{i+1}:=\operatorname {rem} (r_{i-1},r_{i})} of Euclid's algorithm by r i + 1 := − rem ( r i − 1 , r i ) . {\displaystyle r_{i+1}:=-\operatorname {rem} (r_{i-1},r_{i}).}
Let V ( a ) be the number of changes of signs in the sequence, when evaluated at a point a . Sturm's theorem asserts that V ( a ) − V ( b ) is the number of real roots of the polynomial in the interval [ a , b ] . Thus the Sturm sequence allows computing the number of real roots in a given interval. By subdividing the interval until every subinterval contains at most one root, this provides an algorithm that locates the real roots in intervals of arbitrary small length.
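For illustration (again assuming SymPy; not part of the original text), the following sketch builds the Sturm sequence and counts sign changes at the endpoints of an interval:

```python
from sympy import symbols, sturm

x = symbols('x')
f = x**3 - 3*x + 1          # three real roots, near -1.88, 0.35 and 1.53
chain = sturm(f)            # the Sturm sequence f, f', -rem(f, f'), ...

def V(a):
    """Number of sign changes in the Sturm sequence evaluated at a."""
    vals = [p.subs(x, a) for p in chain]
    signs = [v for v in vals if v != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s * t < 0)

print(V(0) - V(2))          # 2: the number of real roots of f in the interval [0, 2]
```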
In this section, we consider polynomials over a unique factorization domain R , typically the ring of the integers, and over its field of fractions F , typically the field of the rational numbers, and we denote R [ X ] and F [ X ] the rings of polynomials in a set of variables over these rings.
The content of a polynomial p ∈ R [ X ] , denoted " cont( p ) ", is the GCD of its coefficients. A polynomial q ∈ F [ X ] may be written q = p c {\displaystyle q={\frac {p}{c}}} where p ∈ R [ X ] and c ∈ R : it suffices to take for c a multiple of all denominators of the coefficients of q (for example their product) and p = cq . The content of q is defined as: cont ( q ) = cont ( p ) c . {\displaystyle \operatorname {cont} (q)={\frac {\operatorname {cont} (p)}{c}}.} In both cases, the content is defined up to the multiplication by a unit of R .
The primitive part of a polynomial in R [ X ] or F [ X ] is defined by primpart ( p ) = p cont ( p ) . {\displaystyle \operatorname {primpart} (p)={\frac {p}{\operatorname {cont} (p)}}.}
In both cases, it is a polynomial in R [ X ] that is primitive , which means that 1 is a GCD of its coefficients.
Thus every polynomial in R [ X ] or F [ X ] may be factorized as p = cont ( p ) primpart ( p ) , {\displaystyle p=\operatorname {cont} (p)\,\operatorname {primpart} (p),} and this factorization is unique up to the multiplication of the content by a unit of R and of the primitive part by the inverse of this unit.
Gauss's lemma implies that the product of two primitive polynomials is primitive. It follows that primpart ( p q ) = primpart ( p ) primpart ( q ) {\displaystyle \operatorname {primpart} (pq)=\operatorname {primpart} (p)\operatorname {primpart} (q)} and cont ( p q ) = cont ( p ) cont ( q ) . {\displaystyle \operatorname {cont} (pq)=\operatorname {cont} (p)\operatorname {cont} (q).}
The relations of the preceding section imply a strong relation between the GCD's in R [ X ] and in F [ X ] . To avoid ambiguities, the notation " gcd " will be indexed, in the following, by the ring in which the GCD is computed.
If q 1 and q 2 belong to F [ X ] , then primpart ( gcd F [ X ] ( q 1 , q 2 ) ) = gcd R [ X ] ( primpart ( q 1 ) , primpart ( q 2 ) ) . {\displaystyle \operatorname {primpart} (\gcd _{F[X]}(q_{1},q_{2}))=\gcd _{R[X]}(\operatorname {primpart} (q_{1}),\operatorname {primpart} (q_{2})).}
If p 1 and p 2 belong to R [ X ] , then gcd R [ X ] ( p 1 , p 2 ) = gcd R ( cont ( p 1 ) , cont ( p 2 ) ) gcd R [ X ] ( primpart ( p 1 ) , primpart ( p 2 ) ) , {\displaystyle \gcd _{R[X]}(p_{1},p_{2})=\gcd _{R}(\operatorname {cont} (p_{1}),\operatorname {cont} (p_{2}))\gcd _{R[X]}(\operatorname {primpart} (p_{1}),\operatorname {primpart} (p_{2})),} and gcd R [ X ] ( primpart ( p 1 ) , primpart ( p 2 ) ) = primpart ( gcd F [ X ] ( p 1 , p 2 ) ) . {\displaystyle \gcd _{R[X]}(\operatorname {primpart} (p_{1}),\operatorname {primpart} (p_{2}))=\operatorname {primpart} (\gcd _{F[X]}(p_{1},p_{2})).}
Thus the computation of polynomial GCD's is essentially the same problem over F [ X ] and over R [ X ] .
For univariate polynomials over the rational numbers, one may think that Euclid's algorithm is a convenient method for computing the GCD. However, it involves simplifying a large number of fractions of integers, and the resulting algorithm is not efficient. For this reason, methods have been designed to modify Euclid's algorithm for working only with polynomials over the integers. They consist of replacing the Euclidean division, which introduces fractions, by a so-called pseudo-division , and replacing the remainder sequence of Euclid's algorithm by so-called pseudo-remainder sequences (see below ).
In the previous section we have seen that the GCD of polynomials in R [ X ] may be deduced from GCDs in R and in F [ X ] . A closer look on the proof shows that this allows us to prove the existence of GCDs in R [ X ] , if they exist in R and in F [ X ] . In particular, if GCDs exist in R , and if X is reduced to one variable, this proves that GCDs exist in R [ X ] (Euclid's algorithm proves the existence of GCDs in F [ X ] ).
A polynomial in n variables may be considered as a univariate polynomial over the ring of polynomials in ( n − 1 ) variables. Thus a recursion on the number of variables shows that if GCDs exist and may be computed in R , then they exist and may be computed in every multivariate polynomial ring over R . In particular, if R is either the ring of the integers or a field, then GCDs exist in R [ x 1 , ..., x n ] , and what precedes provides an algorithm to compute them.
The proof that a polynomial ring over a unique factorization domain is also a unique factorization domain is similar, but it does not provide an algorithm, because there is no general algorithm to factor univariate polynomials over a field (there are examples of fields for which there does not exist any factorization algorithm for the univariate polynomials).
In this section, we consider an integral domain Z (typically the ring Z of the integers) and its field of fractions Q (typically the field Q of the rational numbers). Given two polynomials A and B in the univariate polynomial ring Z [ X ] , the Euclidean division (over Q ) of A by B provides a quotient and a remainder which may not belong to Z [ X ] .
For, if one applies Euclid's algorithm to the following polynomials [ 2 ] X 8 + X 6 − 3 X 4 − 3 X 3 + 8 X 2 + 2 X − 5 {\displaystyle X^{8}+X^{6}-3X^{4}-3X^{3}+8X^{2}+2X-5} and 3 X 6 + 5 X 4 − 4 X 2 − 9 X + 21 , {\displaystyle 3X^{6}+5X^{4}-4X^{2}-9X+21,} the successive remainders of Euclid's algorithm are − 5 9 X 4 + 1 9 X 2 − 1 3 , − 117 25 X 2 − 9 X + 441 25 , 233150 19773 X − 102500 6591 , − 1288744821 543589225 . {\displaystyle {\begin{aligned}&-{\tfrac {5}{9}}X^{4}+{\tfrac {1}{9}}X^{2}-{\tfrac {1}{3}},\\&-{\tfrac {117}{25}}X^{2}-9X+{\tfrac {441}{25}},\\&{\tfrac {233150}{19773}}X-{\tfrac {102500}{6591}},\\&-{\tfrac {1288744821}{543589225}}.\end{aligned}}} One sees that, despite the small degree and the small size of the coefficients of the input polynomials, one has to manipulate and simplify integer fractions of rather large size.
The pseudo-division has been introduced to allow a variant of Euclid's algorithm for which all remainders belong to Z [ X ] .
If deg ( A ) = a {\displaystyle \deg(A)=a} and deg ( B ) = b {\displaystyle \deg(B)=b} and a ≥ b , the pseudo-remainder of the pseudo-division of A by B , denoted by prem( A , B ) is prem ( A , B ) = rem ( lc ( B ) a − b + 1 A , B ) , {\displaystyle \operatorname {prem} (A,B)=\operatorname {rem} (\operatorname {lc} (B)^{a-b+1}A,B),} where lc( B ) is the leading coefficient of B (the coefficient of X b ).
The pseudo-remainder of the pseudo-division of two polynomials in Z [ X ] belongs always to Z [ X ] .
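A minimal Python sketch of the pseudo-remainder (an illustration added here, not part of the original text): it premultiplies A by lc( B ) a − b + 1 and then performs an ordinary Euclidean division, so the result has integer coefficients whenever A and B do:

```python
from fractions import Fraction

def prem(A, B):
    """Pseudo-remainder of A by B; coefficient lists, lowest degree first."""
    a, b = len(A) - 1, len(B) - 1
    lcB = Fraction(B[-1])
    r = [Fraction(c) * lcB ** (a - b + 1) for c in A]    # lc(B)^(a-b+1) * A
    while any(c != 0 for c in r) and len(r) >= len(B):
        t, k = r[-1] / lcB, len(r) - len(B)
        for i, c in enumerate(B):
            r[k + i] -= t * c
        while len(r) > 1 and r[-1] == 0:
            r.pop()
    assert all(c.denominator == 1 for c in r)            # exact integers by construction
    return [int(c) for c in r]

# The two polynomials used as a running example in the text
A = [-5, 2, 8, -3, -3, 0, 1, 0, 1]   # x^8 + x^6 - 3x^4 - 3x^3 + 8x^2 + 2x - 5
B = [21, -9, -4, 0, 5, 0, 3]         # 3x^6 + 5x^4 - 4x^2 - 9x + 21
print(prem(A, B))                     # [-9, 0, 3, 0, -15], i.e. -15x^4 + 3x^2 - 9
```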
A pseudo-remainder sequence is the sequence of the (pseudo) remainders r i obtained by replacing the instruction r i + 1 := rem ( r i − 1 , r i ) {\displaystyle r_{i+1}:=\operatorname {rem} (r_{i-1},r_{i})} of Euclid's algorithm by r i + 1 := prem ( r i − 1 , r i ) α , {\displaystyle r_{i+1}:={\frac {\operatorname {prem} (r_{i-1},r_{i})}{\alpha }},} where α is an element of Z that divides exactly every coefficient of the numerator. Different choices of α give different pseudo-remainder sequences, which are described in the next subsections.
As the common divisors of two polynomials are not changed if the polynomials are multiplied by invertible constants (in Q ), the last nonzero term in a pseudo-remainder sequence is a GCD (in Q [ X ] ) of the input polynomials. Therefore, pseudo-remainder sequences allow computing GCD's in Q [ X ] without introducing fractions in Q .
In some contexts, it is essential to control the sign of the leading coefficient of the pseudo-remainder. This is typically the case when computing resultants and subresultants , or for using Sturm's theorem . This control can be done either by replacing lc( B ) by its absolute value in the definition of the pseudo-remainder, or by controlling the sign of α (if α divides all coefficients of a remainder, the same is true for − α ). [ 1 ]
The simplest (to define) remainder sequence consists in taking always α = 1 . In practice, it is not interesting, as the size of the coefficients grows exponentially with the degree of the input polynomials. This appears clearly on the example of the preceding section, for which the successive pseudo-remainders are − 15 X 4 + 3 X 2 − 9 , {\displaystyle -15\,X^{4}+3\,X^{2}-9,} 15795 X 2 + 30375 X − 59535 , {\displaystyle 15795\,X^{2}+30375\,X-59535,} 1254542875143750 X − 1654608338437500 , {\displaystyle 1254542875143750\,X-1654608338437500,} 12593338795500743100931141992187500. {\displaystyle 12593338795500743100931141992187500.} The number of digits of the coefficients of the successive remainders is more than doubled at each iteration of the algorithm. This is typical behavior of the trivial pseudo-remainder sequences.
The primitive pseudo-remainder sequence consists in taking for α the content of the numerator. Thus all the r i are primitive polynomials.
The primitive pseudo-remainder sequence is the pseudo-remainder sequence that generates the smallest coefficients. However, it requires computing a number of GCD's in Z , and therefore is not sufficiently efficient to be used in practice, especially when Z is itself a polynomial ring.
With the same input as in the preceding sections, the successive remainders, after division by their content, are − 5 X 4 + X 2 − 3 , {\displaystyle -5\,X^{4}+X^{2}-3,} 13 X 2 + 25 X − 49 , {\displaystyle 13\,X^{2}+25\,X-49,} 4663 X − 6150 , {\displaystyle 4663\,X-6150,} 1. {\displaystyle 1.} The small size of the coefficients hides the fact that a number of integer GCD's and divisions by the GCD have been computed.
A subresultant sequence can also be computed with pseudo-remainders. The process consists in choosing α in such a way that every r i is a subresultant polynomial. Surprisingly, the computation of α is very easy (see below). On the other hand, the proof of correctness of the algorithm is difficult, because it should take into account all the possibilities for the difference of degrees of two consecutive remainders.
The coefficients in the subresultant sequence are rarely much larger than those of the primitive pseudo-remainder sequence. As GCD computations in Z are not needed, the subresultant sequence with pseudo-remainders gives the most efficient computation.
With the same input as in the preceding sections, the successive remainders are 15 X 4 − 3 X 2 + 9 , {\displaystyle 15\,X^{4}-3\,X^{2}+9,} 65 X 2 + 125 X − 245 , {\displaystyle 65\,X^{2}+125\,X-245,} 9326 X − 12300 , {\displaystyle 9326\,X-12300,} 260708. {\displaystyle 260708.} The coefficients have a reasonable size. They are obtained without any GCD computation, only exact divisions. This makes this algorithm more efficient than that of primitive pseudo-remainder sequences.
The algorithm computing the subresultant sequence with pseudo-remainders is given below. In this algorithm, the input ( a , b ) is a pair of polynomials in Z [ X ] . The r i are the successive pseudo remainders in Z [ X ] , the variables i and d i are non negative integers, and the Greek letters denote elements in Z . The functions deg() and rem() denote the degree of a polynomial and the remainder of the Euclidean division. In the algorithm, this remainder is always in Z [ X ] . Finally the divisions denoted / are always exact and have their result either in Z [ X ] or in Z .
Note: "lc" stands for the leading coefficient, the coefficient of the highest degree of the variable.
This algorithm computes not only the greatest common divisor (the last non zero r i ), but also all the subresultant polynomials: The remainder r i is the (deg( r i −1 ) − 1) -th subresultant polynomial. If deg( r i ) < deg( r i −1 ) − 1 , the deg( r i ) -th subresultant polynomial is lc( r i ) deg( r i −1 )−deg( r i )−1 r i . All the other subresultant polynomials are zero.
One may use pseudo-remainders for constructing sequences having the same properties as Sturm sequences . This requires controlling the signs of the successive pseudo-remainders, in order to have the same signs as in the Sturm sequence. This may be done by defining a modified pseudo-remainder as follows.
If deg ( A ) = a {\displaystyle \deg(A)=a} and deg ( B ) = b {\displaystyle \deg(B)=b} and a ≥ b , the modified pseudo-remainder prem2( A , B ) of the pseudo-division of A by B is prem2 ( A , B ) = − rem ( | lc ( B ) | a − b + 1 A , B ) , {\displaystyle \operatorname {prem2} (A,B)=-\operatorname {rem} (\left|\operatorname {lc} (B)\right|^{a-b+1}A,B),} where | lc( B ) | is the absolute value of the leading coefficient of B (the coefficient of X b ).
For input polynomials with integer coefficients, this allows retrieval of Sturm sequences consisting of polynomials with integer coefficients. The subresultant pseudo-remainder sequence may be modified similarly, in which case the signs of the remainders coincide with those computed over the rationals.
Note that the algorithm for computing the subresultant pseudo-remainder sequence given above will compute wrong subresultant polynomials if one uses − p r e m 2 ( A , B ) {\displaystyle -\mathrm {prem2} (A,B)} instead of prem ( A , B ) {\displaystyle \operatorname {prem} (A,B)} .
If f and g are polynomials in F [ x ] for some finitely generated field F , the Euclidean Algorithm is the most natural way to compute their GCD. However, modern computer algebra systems only use it if F is finite because of a phenomenon called intermediate expression swell . Although degrees keep decreasing during the Euclidean algorithm, if F is not finite then the bit size of the polynomials can increase (sometimes dramatically) during the computations because repeated arithmetic operations in F tend to lead to larger expressions. For example, the addition of two rational numbers whose denominators are bounded by b leads to a rational number whose denominator is bounded by b 2 , so in the worst case, the bit size could nearly double with just one operation.
To expedite the computation, take a ring D for which f and g are in D [ x ] , and take an ideal I such that D / I is a finite ring. Then compute the GCD over this finite ring with the Euclidean Algorithm. Using reconstruction techniques ( Chinese remainder theorem , rational reconstruction , etc.) one can recover the GCD of f and g from its image modulo a number of ideals I . One can prove [ 3 ] that this works provided that one discards modular images with non-minimal degrees, and avoids ideals I modulo which a leading coefficient vanishes.
Suppose F = Q ( 3 ) {\displaystyle F=\mathbb {Q} ({\sqrt {3}})} , D = Z [ 3 ] {\displaystyle D=\mathbb {Z} [{\sqrt {3}}]} , f = 3 x 3 − 5 x 2 + 4 x + 9 {\displaystyle f={\sqrt {3}}x^{3}-5x^{2}+4x+9} and g = x 4 + 4 x 2 + 3 3 x − 6 {\displaystyle g=x^{4}+4x^{2}+3{\sqrt {3}}x-6} . If we take I = ( 2 ) {\displaystyle I=(2)} then D / I {\displaystyle D/I} is a finite ring (not a field since I {\displaystyle I} is not maximal in D {\displaystyle D} ). The Euclidean algorithm applied to the images of f , g {\displaystyle f,g} in ( D / I ) [ x ] {\displaystyle (D/I)[x]} succeeds and returns 1. This implies that the GCD of f , g {\displaystyle f,g} in F [ x ] {\displaystyle F[x]} must be 1 as well. Note that this example could easily be handled by any method because the degrees were too small for expression swell to occur, but it illustrates that if two polynomials have GCD 1, then the modular algorithm is likely to terminate after a single ideal I {\displaystyle I} . | https://en.wikipedia.org/wiki/Polynomial_greatest_common_divisor |
In mathematics, polynomial identity testing (PIT) is the problem of efficiently determining whether two multivariate polynomials are identical. More formally, a PIT algorithm is given an arithmetic circuit that computes a polynomial p in a field , and decides whether p is the zero polynomial. Determining the computational complexity required for polynomial identity testing, in particular finding deterministic algorithms for PIT, is one of the most important open problems in algebraic complexity theory.
The question "Does ( x + y ) ( x − y ) {\displaystyle (x+y)(x-y)} equal x 2 − y 2 ? {\displaystyle x^{2}-y^{2}\,?} " is a question about whether two polynomials are identical. As with any polynomial identity testing question, it can be trivially transformed into the question "Is a certain polynomial equal to 0?"; in this case we can ask "Does ( x + y ) ( x − y ) − ( x 2 − y 2 ) = 0 {\displaystyle (x+y)(x-y)-(x^{2}-y^{2})=0} "? If we are given the polynomial as an algebraic expression (rather than as a black-box), we can confirm that the equality holds through brute-force multiplication and addition, but the time complexity of the brute-force approach grows as ( n + d d ) {\displaystyle {\tbinom {n+d}{d}}} , where n {\displaystyle n} is the number of variables (here, n = 2 {\displaystyle n=2} : x {\displaystyle x} is the first and y {\displaystyle y} is the second), and d {\displaystyle d} is the degree of the polynomial (here, d = 2 {\displaystyle d=2} ). If n {\displaystyle n} and d {\displaystyle d} are both large, ( n + d d ) {\displaystyle {\tbinom {n+d}{d}}} grows exponentially. [ 1 ]
PIT concerns whether a polynomial is identical to the zero polynomial, rather than whether the function implemented by the polynomial always evaluates to zero in the given domain. For example, the field with two elements, GF(2) , contains only the elements 0 and 1. In GF(2), x 2 − x {\displaystyle x^{2}-x} always evaluates to zero; despite this, PIT does not consider x 2 − x {\displaystyle x^{2}-x} to be equal to the zero polynomial. [ 2 ]
Determining the computational complexity required for polynomial identity testing is one of the most important open problems in the mathematical subfield known as algebraic complexity theory. [ 1 ] [ 3 ] The study of PIT is a building-block to many other areas of computational complexity, such as the proof that IP = PSPACE . [ 1 ] [ 4 ] In addition, PIT has applications to Tutte matrices and also to primality testing , where PIT techniques led to the AKS primality test , the first deterministic (though impractical) polynomial time algorithm for primality testing. [ 1 ]
Given an arithmetic circuit that computes a polynomial in a field , determine whether the polynomial is equal to the zero polynomial (that is, the polynomial with no nonzero terms). [ 1 ]
In some cases, the specification of the arithmetic circuit is not given to the PIT solver, and the PIT solver can only input values into a "black box" that implements the circuit, and then analyze the output. Note that the solutions below assume that any operation (such as multiplication) in the given field takes constant time; further, all black-box algorithms below assume the size of the field is larger than the degree of the polynomial.
The Schwartz–Zippel algorithm provides a practical probabilistic solution, by simply randomly testing inputs and checking whether the output is zero. It was the first randomized polynomial time PIT algorithm to be proven correct. [ 1 ] The larger the domain the inputs are drawn from, the less likely Schwartz–Zippel is to fail. If random bits are in short supply, the Chen-Kao algorithm (over the rationals) or the Lewin-Vadhan algorithm (over any field) require fewer random bits at the cost of more required runtime. [ 2 ]
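A toy Python sketch of the random-evaluation idea (illustrative only; the black-box functions and the choice of field size are assumptions of this example, not data from the article):

```python
import random

def is_probably_zero(poly_blackbox, num_vars, trials=20, field_size=10**9 + 7):
    """Schwartz-Zippel style test: evaluate the black box at random points mod a large prime.

    For a polynomial that is nonzero over the field of size field_size and has
    total degree d, a single random evaluation is zero with probability at most
    d / field_size, so any nonzero evaluation proves the polynomial is nonzero,
    and repeated zero evaluations make "identically zero" very likely.
    """
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(num_vars)]
        if poly_blackbox(point) % field_size != 0:
            return False          # definitely not the zero polynomial
    return True                   # zero with high probability

# (x + y)(x - y) - (x^2 - y^2) is identically zero
f = lambda p: (p[0] + p[1]) * (p[0] - p[1]) - (p[0]**2 - p[1]**2)
print(is_probably_zero(f, num_vars=2))   # True
# (x + y)^2 - (x^2 + y^2) = 2xy is not identically zero
g = lambda p: (p[0] + p[1])**2 - (p[0]**2 + p[1]**2)
print(is_probably_zero(g, num_vars=2))   # False (with overwhelming probability)
```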
A sparse instance of PIT is one in which the polynomial has at most m {\displaystyle m} nonzero monomial terms. Sparse PIT can be solved deterministically in time polynomial in the size of the circuit and the number m {\displaystyle m} of monomials. [ 1 ] [ 5 ]
A low degree instance of PIT is one with an upper bound on the degree of the polynomial. Any low degree PIT problem can be reduced, in time subexponential in the size of the circuit, to a PIT problem for depth-four circuits; therefore, PIT for circuits of depth four (and below) is intensely studied. [ 1 ]
In algebra , polynomial long division is an algorithm for dividing a polynomial by another polynomial of the same or lower degree , a generalized version of the familiar arithmetic technique called long division . It can be done easily by hand, because it separates an otherwise complex division problem into smaller ones. Sometimes using a shorthand version called synthetic division is faster, with less writing and fewer calculations. Another abbreviated method is polynomial short division (Blomqvist's method).
Polynomial long division is an algorithm that implements the Euclidean division of polynomials , which starting from two polynomials A (the dividend ) and B (the divisor ) produces, if B is not zero, a quotient Q and a remainder R such that
and either R = 0 or the degree of R is lower than the degree of B . These conditions uniquely define Q and R , which means that Q and R do not depend on the method used to compute them.
The result R = 0 occurs if and only if the polynomial A has B as a factor . Thus long division is a means for testing whether one polynomial has another as a factor, and, if it does, for factoring it out. For example, if a root r of A is known, it can be factored out by dividing A by ( x – r ).
Find the quotient and the remainder of the division of ( x 3 − 2 x 2 − 4 ) {\displaystyle (x^{3}-2x^{2}-4)} , the dividend , by ( x − 3 ) {\displaystyle (x-3)} , the divisor .
The dividend is first rewritten like this, with every power of x written explicitly: x 3 − 2 x 2 + 0 x − 4 .
The quotient and remainder can then be determined as follows:
The polynomial above the bar is the quotient q ( x ) = x 2 + x + 3 , and the number left over (5) is the remainder r ( x ) = 5 .
The long division algorithm for arithmetic is very similar to the above algorithm, in which the variable x is replaced (in base 10) by the specific number 10.
Blomqvist's method [ 1 ] is an abbreviated version of the long division above. This pen-and-paper method uses the same algorithm as polynomial long division, but mental calculation is used to determine remainders. This requires less writing, and can therefore be a faster method once mastered.
The division is at first written in a similar way as long multiplication with the dividend at the top, and the divisor below it. The quotient is to be written below the bar from left to right.
Divide the first term of the dividend by the highest term of the divisor ( x 3 ÷ x = x 2 ). Place the result below the bar. x 3 has been divided leaving no remainder, and can therefore be marked as used by crossing it out. The result x 2 is then multiplied by the second term in the divisor −3 = −3 x 2 . Determine the partial remainder by subtracting −2 x 2 − (−3 x 2 ) = x 2 . Mark −2 x 2 as used and place the new remainder x 2 above it.
Divide the highest term of the remainder by the highest term of the divisor ( x 2 ÷ x = x ). Place the result (+x) below the bar. x 2 has been divided leaving no remainder, and can therefore be marked as used. The result x is then multiplied by the second term in the divisor −3 = −3 x . Determine the partial remainder by subtracting 0 x − (−3 x ) = 3 x . Mark 0 x as used and place the new remainder 3 x above it.
Divide the highest term of the remainder by the highest term of the divisor (3x ÷ x = 3). Place the result (+3) below the bar. 3x has been divided leaving no remainder, and can therefore be marked as used. The result 3 is then multiplied by the second term in the divisor −3 = −9. Determine the partial remainder by subtracting −4 − (−9) = 5. Mark −4 as used and place the new remainder 5 above it.
The polynomial below the bar is the quotient q ( x ) = x 2 + x + 3 , and the number left over (5) is the remainder r ( x ) = 5 .
The algorithm can be represented in pseudocode as follows, where +, −, and × represent polynomial arithmetic, and lead(r) / lead(d) represents the polynomial obtained by dividing the two leading terms:
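The pseudocode itself is missing from this copy of the article; the following Python transcription is one possible reading of it (the names n , d , q , r and the term t = lead( r )/lead( d ) follow the surrounding description, while the coefficient-list representation is a choice made here):

```python
from fractions import Fraction

def divide(n, d):
    """Polynomial long division: return (q, r) with n = d*q + r and deg(r) < deg(d).

    Coefficient lists are written lowest degree first, so
    x^3 - 2x^2 - 4 is [-4, 0, -2, 1].
    """
    n = [Fraction(c) for c in n]
    d = [Fraction(c) for c in d]
    q = [Fraction(0)] * max(len(n) - len(d) + 1, 1)   # (q, r) starts as (0, n)
    r = n[:]
    while any(c != 0 for c in r) and len(r) >= len(d):
        t = r[-1] / d[-1]                 # t = lead(r) / lead(d)
        k = len(r) - len(d)               # power of x carried by t
        q[k] += t
        for i, c in enumerate(d):         # r = r - t * x^k * d
            r[k + i] -= t * c
        while len(r) > 1 and r[-1] == 0:  # the leading term of r has been cancelled
            r.pop()
    return q, r

# Divide x^3 - 2x^2 - 4 by x - 3: quotient x^2 + x + 3, remainder 5.
q, r = divide([-4, 0, -2, 1], [-3, 1])
print(q)   # [Fraction(3, 1), Fraction(1, 1), Fraction(1, 1)]
print(r)   # [Fraction(5, 1)]
```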
This works equally well when degree( n ) < degree( d ); in that case the result is just the trivial (0, n ).
This algorithm describes exactly the above paper and pencil method: d is written on the left of the ")"; q is written, term after term, above the horizontal line, the last term being the value of t ; the region under the horizontal line is used to compute and write down the successive values of r .
For every pair of polynomials ( A , B ) such that B ≠ 0, polynomial division provides a quotient Q and a remainder R such that
and either R =0 or degree( R ) < degree( B ). Moreover ( Q , R ) is the unique pair of polynomials having this property.
The process of getting the uniquely defined polynomials Q and R from A and B is called Euclidean division (sometimes division transformation ). Polynomial long division is thus an algorithm for Euclidean division. [ 2 ]
Sometimes one or more roots of a polynomial are known, perhaps having been found using the rational root theorem . If one root r of a polynomial P ( x ) of degree n is known then polynomial long division can be used to factor P ( x ) into the form ( x − r ) Q ( x ) where Q ( x ) is a polynomial of degree n − 1. Q ( x ) is simply the quotient obtained from the division process; since r is known to be a root of P ( x ), it is known that the remainder must be zero.
Likewise, if several roots r , s , . . . of P ( x ) are known, a linear factor ( x − r ) can be divided out to obtain Q ( x ), and then ( x − s ) can be divided out of Q ( x ), etc. Alternatively, the quadratic factor ( x − r ) ( x − s ) = x 2 − ( r + s ) x + r s {\displaystyle (x-r)(x-s)=x^{2}-(r{+}s)x+rs} can be divided out of P ( x ) to obtain a quotient of degree n − 2.
This method is especially useful for cubic polynomials, and sometimes all the roots of a higher-degree polynomial can be obtained. For example, if the rational root theorem produces a single (rational) root of a quintic polynomial , it can be factored out to obtain a quartic (fourth degree) quotient; the explicit formula for the roots of a quartic polynomial can then be used to find the other four roots of the quintic. There is, however, no general way to solve a quintic by purely algebraic methods, see Abel–Ruffini theorem .
Polynomial long division can be used to find the equation of the line that is tangent to the graph of the function defined by the polynomial P ( x ) at a particular point x = r . [ 3 ] If R ( x ) is the remainder of the division of P ( x ) by ( x − r ) 2 , then the equation of the tangent line at x = r to the graph of the function y = P ( x ) is y = R ( x ), regardless of whether or not r is a root of the polynomial.
Find the equation of the line that is tangent to the following curve at the point x = 1 : y = ( x 3 − 12 x 2 − 42 ) {\displaystyle y=(x^{3}-12x^{2}-42)}
Begin by dividing the polynomial by: ( x − 1 ) 2 = ( x 2 − 2 x + 1 ) {\displaystyle (x-1)^{2}=(x^{2}-2x+1)}
The tangent line is y = ( − 21 x − 32 ) {\displaystyle y=(-21x-32)}
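A quick numerical check of this example (added here; it assumes NumPy, which is not part of the original article):

```python
import numpy as np

P = [1, -12, 0, -42]          # x^3 - 12x^2 - 42, coefficients highest degree first
D = [1, -2, 1]                # (x - 1)^2
quotient, remainder = np.polydiv(P, D)
print(remainder)              # [-21. -32.]  ->  R(x) = -21x - 32, the tangent line at x = 1
```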
A cyclic redundancy check uses the remainder of polynomial division to detect errors in transmitted messages. | https://en.wikipedia.org/wiki/Polynomial_long_division |
In algebra , a polynomial map or polynomial mapping P : V → W {\displaystyle P:V\to W} between vector spaces over an infinite field k is a polynomial in linear functionals with coefficients in k ; i.e., it can be written as
where the λ i j : V → k {\displaystyle \lambda _{i_{j}}:V\to k} are linear functionals and the w i 1 , … , i n {\displaystyle w_{i_{1},\dots ,i_{n}}} are vectors in W . For example, if W = k m {\displaystyle W=k^{m}} , then a polynomial mapping can be expressed as P ( v ) = ( P 1 ( v ) , … , P m ( v ) ) {\displaystyle P(v)=(P_{1}(v),\dots ,P_{m}(v))} where the P i {\displaystyle P_{i}} are (scalar-valued) polynomial functions on V . (The abstract definition has an advantage that the map is manifestly free of a choice of basis.)
When V , W are finite-dimensional vector spaces and are viewed as algebraic varieties , then a polynomial mapping is precisely a morphism of algebraic varieties .
One fundamental outstanding question regarding polynomial mappings is the Jacobian conjecture , which concerns the sufficiency of a polynomial mapping to be invertible.
| https://en.wikipedia.org/wiki/Polynomial_mapping
In mathematics , a polynomial matrix or matrix of polynomials is a matrix whose elements are univariate or multivariate polynomials . Equivalently, a polynomial matrix is a polynomial whose coefficients are matrices.
A univariate polynomial matrix P of degree p is defined as: P = ∑ i = 0 p A ( i ) x i = A ( 0 ) + A ( 1 ) x + A ( 2 ) x 2 + ⋯ + A ( p ) x p {\displaystyle P=\sum _{i=0}^{p}A(i)x^{i}=A(0)+A(1)x+A(2)x^{2}+\cdots +A(p)x^{p}}
where A ( i ) {\displaystyle A(i)} denotes a matrix of constant coefficients, and A ( p ) {\displaystyle A(p)} is non-zero.
An example 3×3 polynomial matrix, degree 2: P = ( 1 x 2 x 0 2 x 2 3 x + 2 x 2 − 1 0 ) = ( 1 0 0 0 0 2 2 − 1 0 ) + ( 0 0 1 0 2 0 3 0 0 ) x + ( 0 1 0 0 0 0 0 1 0 ) x 2 . {\displaystyle P={\begin{pmatrix}1&x^{2}&x\\0&2x&2\\3x+2&x^{2}-1&0\end{pmatrix}}={\begin{pmatrix}1&0&0\\0&0&2\\2&-1&0\end{pmatrix}}+{\begin{pmatrix}0&0&1\\0&2&0\\3&0&0\end{pmatrix}}x+{\begin{pmatrix}0&1&0\\0&0&0\\0&1&0\end{pmatrix}}x^{2}.}
We can express this by saying that for a ring R , the rings M n ( R [ X ] ) {\displaystyle M_{n}(R[X])} and ( M n ( R ) ) [ X ] {\displaystyle (M_{n}(R))[X]} are isomorphic .
Note that polynomial matrices are not to be confused with monomial matrices , which are simply matrices with exactly one non-zero entry in each row and column.
If by λ we denote any element of the field over which the matrix is constructed, by I the identity matrix, and by A a square matrix over that field, then the polynomial matrix λ I − A is the characteristic matrix of the matrix A . Its determinant, |λ I − A |, is the characteristic polynomial of the matrix A .
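As a small numerical illustration added here (assuming NumPy, which is not part of the original article), the determinant of the characteristic matrix can be expanded into the characteristic polynomial:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
# det(lambda*I - A) = lambda^2 - 5*lambda + 6; np.poly returns these coefficients
print(np.poly(A))      # [ 1. -5.  6.]
```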
| https://en.wikipedia.org/wiki/Polynomial_matrix
Polynomial Matrix Spectral Factorization or Matrix Fejer–Riesz Theorem is a tool used to study the matrix decomposition of polynomial matrices . Polynomial matrices are widely studied in the fields of systems theory and control theory and have seen other uses relating to stable polynomials . In stability theory , Spectral Factorization has been used to find determinantal matrix representations for bivariate stable polynomials and real zero polynomials. [ 1 ]
Given a univariate positive polynomial , i.e., p ( t ) > 0 {\displaystyle p(t)>0} for all t ∈ R {\displaystyle t\in \mathbb {R} } , the Fejer–Riesz Theorem yields the polynomial spectral factorization p ( t ) = q ( t ) q ¯ ( t ) {\displaystyle p(t)=q(t){\bar {q}}(t)} . Results of this form are generically referred to as Positivstellensatz .
Likewise, the Polynomial Matrix Spectral Factorization provides a factorization for positive definite polynomial matrices. This decomposition also relates to the Cholesky decomposition for scalar matrices A = L L ∗ {\displaystyle A=LL^{*}} . This result was originally proven by Norbert Wiener in a more general context which was concerned with integrable matrix-valued functions that also had integrable log determinant. [ 2 ] Because applications are often concerned with the polynomial restriction, simpler proofs and individual analysis exist focusing on this case. [ 3 ] Weaker positivstellensatz conditions have been studied, specifically considering when the polynomial matrix has positive definite image on semi-algebraic subsets of the reals. [ 4 ] Many publications recently have focused on streamlining proofs for these related results. [ 5 ] [ 6 ] This article roughly follows the recent proof method of Lasha Ephremidze [ 7 ] which relies only on elementary linear algebra and complex analysis .
Spectral factorization is used extensively in linear–quadratic–Gaussian control and many algorithms exist to calculate spectral factors. [ 8 ] Some modern algorithms focus on the more general setting originally studied by Wiener while others have used Toeplitz matrix advances to speed up factor calculations. [ 9 ] [ 10 ]
Consider polynomial matrix P ( t ) = [ p 11 ( t ) … p 1 n ( t ) ⋮ ⋱ ⋮ p n 1 ( t ) ⋯ p n n ( t ) ] , {\displaystyle P(t)={\begin{bmatrix}p_{11}(t)&\ldots &p_{1n}(t)\\\vdots &\ddots &\vdots \\p_{n1}(t)&\cdots &p_{nn}(t)\\\end{bmatrix}},} where each entry p i j ( t ) {\displaystyle p_{ij}(t)} is a complex coefficient polynomial of at most N {\displaystyle N} -degree. If P ( t ) {\displaystyle P(t)} is a positive definite hermitian matrix for all t ∈ R {\displaystyle t\in \mathbb {R} } , then there exists a polynomial matrix Q ( t ) = [ q 11 ( t ) … q 1 n ( t ) ⋮ ⋱ ⋮ q n 1 ( t ) ⋯ q n n ( t ) ] , {\displaystyle Q(t)={\begin{bmatrix}q_{11}(t)&\ldots &q_{1n}(t)\\\vdots &\ddots &\vdots \\q_{n1}(t)&\cdots &q_{nn}(t)\\\end{bmatrix}},} such that P ( t ) = Q ( t ) Q ( t ) ∗ {\displaystyle P(t)=Q(t)Q(t)^{*}} where Q ( t ) ∗ {\displaystyle Q(t)^{*}} is the conjugate transpose . When q i j ( t ) {\displaystyle q_{ij}(t)} is a complex coefficient polynomial or complex coefficient rational function then so are the elements of its conjugate transpose.
We can furthermore find Q ( t ) {\displaystyle Q(t)} which is nonsingular on the lower half plane.
Let p ( t ) {\displaystyle p(t)} be a rational function where p ( t ) > 0 {\displaystyle p(t)>0} for all t ∈ R {\displaystyle t\in \mathbb {R} } . Then there exists a rational function q ( t ) {\displaystyle q(t)} such that p ( t ) = q ( t ) q ¯ ( t ) {\displaystyle p(t)=q(t){\bar {q}}(t)} and q ( t ) {\displaystyle q(t)} has no poles or zeroes in the lower half plane. This decomposition is unique up to multiplication by complex scalars of norm 1 {\displaystyle 1} .
To prove existence write p ( x ) = c ∏ i ( x − α i ) ∏ j ( x − β j ) , {\displaystyle p(x)=c{\frac {\prod _{i}(x-\alpha _{i})}{\prod _{j}(x-\beta _{j})}},} where α i ≠ β j {\displaystyle \alpha _{i}\neq \beta _{j}} . Letting x → ∞ {\displaystyle x\to \infty } , we can conclude that c {\displaystyle c} is real and positive. Dividing out by c {\displaystyle {\sqrt {c}}} we reduce to the monic case. The numerator and denominator have distinct sets of roots, so all real roots which show up in either must have even multiplicity (to prevent a sign change locally). We can divide out these real roots to reduce to the case where p ( t ) {\displaystyle p(t)} has only complex roots and poles. By hypothesis we have p ( x ) = ∏ i ( x − α i ) ∏ j ( x − β j ) = ∏ i ( x − α ¯ i ) ∏ j ( x − β ¯ j ) = p ( x ) ¯ . {\displaystyle p(x)={\frac {\prod _{i}(x-\alpha _{i})}{\prod _{j}(x-\beta _{j})}}={\frac {\prod _{i}(x-{\bar {\alpha }}_{i})}{\prod _{j}(x-{\bar {\beta }}_{j})}}={\overline {p(x)}}.} Since all of the α i , β j {\displaystyle \alpha _{i},\beta _{j}} are complex (and hence not fixed points of conjugation) they both come in conjugate pairs. For each conjugate pair, pick the zero or pole in the upper half plane and accumulate these to obtain q ( t ) {\displaystyle q(t)} . The uniqueness result follows in a standard fashion.
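A numerical sketch of this scalar factorization (an illustration added here; it assumes NumPy): for a polynomial that is positive on the real line, pick one root from each conjugate pair, namely the root in the upper half plane, and rebuild q from those roots.

```python
import numpy as np

p = [1.0, 0.0, 5.0, 0.0, 4.0]            # t^4 + 5t^2 + 4 = (t^2 + 1)(t^2 + 4) > 0 on R
roots = np.roots(p)
upper = [z for z in roots if z.imag > 0]  # one root from each conjugate pair
q = np.sqrt(p[0]) * np.poly(upper)        # q(t) = sqrt(c) * prod (t - root)

# check p(t) = q(t) * conj(q(t)) at a few real points
for t in (-2.0, 0.3, 1.7):
    assert np.isclose(np.polyval(p, t), abs(np.polyval(q, t)) ** 2)
print(np.round(q, 6))                     # coefficients of a spectral factor, here t^2 - 3i*t - 2
```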
The inspiration for this result is a factorization which characterizes positive definite matrices.
Given any positive definite scalar matrix A {\displaystyle A} , the Cholesky decomposition allows us to write A = L L ∗ {\displaystyle A=LL^{*}} where L {\displaystyle L} is a lower triangular matrix . If we don't restrict to lower triangular matrices we can consider all factorizations of the form A = V V ∗ {\displaystyle A=VV^{*}} . It is not hard to check that all factorizations are achieved by looking at the orbit of L {\displaystyle L} under right multiplication by a unitary matrix, V = L U {\displaystyle V=LU} .
To obtain the lower triangular decomposition we induct by splitting off the first row and first column: [ a 11 a 12 ∗ a 12 A 22 ] = [ l 11 0 l 21 L 22 ] [ l 11 ∗ l 21 ∗ 0 L 22 ∗ ] = [ l 11 l 11 ∗ l 11 l 21 ∗ l 11 ∗ l 21 l 21 l 21 ∗ + L 22 L 22 ∗ ] {\displaystyle {\begin{bmatrix}a_{11}&\mathbf {a} _{12}^{*}\\\mathbf {a} _{12}&A_{22}\\\end{bmatrix}}={\begin{bmatrix}l_{11}&0\\\mathbf {l} _{21}&L_{22}\end{bmatrix}}{\begin{bmatrix}l_{11}^{*}&\mathbf {l} _{21}^{*}\\0&L_{22}^{*}\end{bmatrix}}={\begin{bmatrix}l_{11}l_{11}^{*}&l_{11}\mathbf {l} _{21}^{*}\\l_{11}^{*}\mathbf {l} _{21}&\mathbf {l} _{21}\mathbf {l} _{21}^{*}+L_{22}L_{22}^{*}\end{bmatrix}}} Solving these in terms of a i j {\displaystyle a_{ij}} we get l 11 = a 11 {\displaystyle l_{11}={\sqrt {a_{11}}}} l 21 = 1 a 11 a 12 {\displaystyle \mathbf {l} _{21}={\frac {1}{\sqrt {a_{11}}}}\mathbf {a} _{12}} L 22 L 22 ∗ = A 22 − 1 a 11 a 12 a 12 ∗ {\displaystyle L_{22}L_{22}^{*}=A_{22}-{\frac {1}{a_{11}}}\mathbf {a} _{12}\mathbf {a} _{12}^{*}}
Since A {\displaystyle A} is positive definite, a 11 {\displaystyle a_{11}} is a positive real number, so it has a square root. The last condition follows by induction, since the right-hand side is the Schur complement of A {\displaystyle A} , which is itself positive definite.
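A concrete numeric instance of this factorization (added here; it assumes NumPy):

```python
import numpy as np

A = np.array([[4.0, 2.0 + 2.0j],
              [2.0 - 2.0j, 10.0]])        # Hermitian positive definite
L = np.linalg.cholesky(A)                  # lower triangular with A = L L*
print(np.allclose(A, L @ L.conj().T))      # True
print(L)                                   # L[0, 0] = 2, matching l11 = sqrt(a11)
```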
A rational polynomial matrix is defined as a matrix P ( t ) = [ p 11 ( t ) … p 1 n ( t ) ⋮ ⋱ ⋮ p n 1 ( t ) ⋯ p n n ( t ) ] , {\displaystyle P(t)={\begin{bmatrix}p_{11}(t)&\ldots &p_{1n}(t)\\\vdots &\ddots &\vdots \\p_{n1}(t)&\cdots &p_{nn}(t)\\\end{bmatrix}},} where each entry p i j ( t ) {\displaystyle p_{ij}(t)} is a complex rational function . If P ( t ) {\displaystyle P(t)} is a positive definite Hermitian matrix for all t ∈ R {\displaystyle t\in \mathbb {R} } , then by the symmetric Gaussian elimination we performed above, all we need to show is there exists a rational q 11 ( t ) {\displaystyle q_{11}(t)} such that p 11 ( t ) = q 11 ( t ) q 11 ( t ) ∗ {\displaystyle p_{11}(t)=q_{11}(t)q_{11}(t)^{*}} , which follows from our rational spectral factorization. Once we have that then we can solve for l 11 ( t ) , l 21 ( t ) {\displaystyle l_{11}(t),\mathbf {l} _{21}(t)} . Since the Schur complement is positive definite for the real t {\displaystyle t} away from the poles and the Schur complement is a rational polynomial matrix we can induct to find L 22 {\displaystyle L_{22}} .
It is not hard to check that we get P ( t ) = L ( t ) L ( t ) ∗ {\displaystyle P(t)=L(t)L(t)^{*}} where L ( t ) {\displaystyle L(t)} is a rational polynomial matrix with no poles in the lower half plane.
One way to prove the existence of polynomial matrix spectral factorization is to apply the Cholesky decomposition to a rational polynomial matrix and modify it to remove lower half plane singularities. That is, given P ( t ) = [ p 11 ( t ) … p 1 n ( t ) ⋮ ⋱ ⋮ p n 1 ( t ) ⋯ p n n ( t ) ] , {\displaystyle P(t)={\begin{bmatrix}p_{11}(t)&\ldots &p_{1n}(t)\\\vdots &\ddots &\vdots \\p_{n1}(t)&\cdots &p_{nn}(t)\\\end{bmatrix}},} where each entry p i j ( t ) {\displaystyle p_{ij}(t)} is a complex coefficient polynomial for all t ∈ R {\displaystyle t\in \mathbb {R} } , a rational polynomial matrix L ( t ) {\displaystyle L(t)} with no lower half plane poles exists such that P ( t ) = L ( t ) L ( t ) ∗ {\displaystyle P(t)=L(t)L(t)^{*}} . Given a rational polynomial matrix U ( t ) {\displaystyle U(t)} which is unitary valued for real t {\displaystyle t} , there exists another decomposition [ clarification needed ] P ( t ) = L ( t ) L ( t ) ∗ = L ( t ) U ( t ) U ( t ) ∗ L ( t ) ∗ . {\displaystyle P(t)=L(t)L(t)^{*}=L(t)U(t)U(t)^{*}L(t)^{*}.} If det ( L ( a ) ) = 0 {\displaystyle \det(L(a))=0} for some a {\displaystyle a} , then there exists a nonzero vector v {\displaystyle v} with L ( a ) v = 0 {\displaystyle L(a)v=0} , and a scalar unitary matrix U {\displaystyle U} such that U e 1 = v | v | . {\displaystyle Ue_{1}={\frac {v}{|v|}}.} This implies that the first column of L ( t ) U {\displaystyle L(t)U} vanishes at a {\displaystyle a} . To remove the singularity at a {\displaystyle a} we multiply on the right by U ( t ) = diag ( t − a ¯ t − a , 1 , … , 1 ) . {\displaystyle U(t)=\operatorname {diag} \left({\frac {t-{\bar {a}}}{t-a}},1,\ldots ,1\right).} Then L ( t ) U U ( t ) {\displaystyle L(t)UU(t)} has a determinant with one less zero (counted with multiplicity) at a {\displaystyle a} , without introducing any poles in the lower half plane in any of the entries.
Consider the following rational matrix decomposition A ( t ) = [ t 2 + 1 2 t 2 t t 2 + 1 ] = [ t − i 0 2 t t + i t 2 − 1 t + i ] [ t + i 2 t t − i 0 t 2 − 1 t − i ] . {\displaystyle A(t)={\begin{bmatrix}t^{2}+1&2t\\2t&t^{2}+1\\\end{bmatrix}}={\begin{bmatrix}t-i&0\\{\frac {2t}{t+i}}&{\frac {t^{2}-1}{t+i}}\\\end{bmatrix}}{\begin{bmatrix}t+i&{\frac {2t}{t-i}}\\0&{\frac {t^{2}-1}{t-i}}\\\end{bmatrix}}.} This decomposition has no poles in the upper half plane. However det ( [ t − i 0 2 t t + i t 2 − 1 t + i ] ) = ( t − 1 ) ( t − i ) ( t + 1 ) t + i , {\displaystyle \det \left({\begin{bmatrix}t-i&0\\{\frac {2t}{t+i}}&{\frac {t^{2}-1}{t+i}}\\\end{bmatrix}}\right)={\frac {(t-1)(t-i)(t+1)}{t+i}},} so we need to modify our decomposition to get rid of the singularity at − i {\displaystyle -i} . First we multiply by a scalar unitary matrix U = 1 2 [ 1 1 i − i ] , {\displaystyle U={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1&1\\i&-i\\\end{bmatrix}},} such that [ t − i 0 2 t t + i t 2 − 1 t + i ] U = 1 2 [ t − i t − i i ( t − i ) 2 t + i − i ( t + i ) ] , {\displaystyle {\begin{bmatrix}t-i&0\\{\frac {2t}{t+i}}&{\frac {t^{2}-1}{t+i}}\\\end{bmatrix}}U={\frac {1}{\sqrt {2}}}{\begin{bmatrix}t-i&t-i\\i{\frac {(t-i)^{2}}{t+i}}&-i(t+i)\\\end{bmatrix}},} becomes a new candidate for our decomposition. Now the first column vanishes at t = i {\displaystyle t=i} , so we multiply through (on the right) by U ( t ) = [ t + i t − i 0 0 1 ] , {\displaystyle U(t)={\begin{bmatrix}{\frac {t+i}{t-i}}&0\\0&1\end{bmatrix}},} which is unitary valued for real t {\displaystyle t} , to obtain Q ( t ) = 1 2 [ t − i t − i i ( t − i ) 2 t + i − i ( t + i ) ] U ( t ) = 1 2 [ t + i t − i i ( t − i ) − i ( t + i ) ] , {\displaystyle Q(t)={\frac {1}{\sqrt {2}}}{\begin{bmatrix}t-i&t-i\\i{\frac {(t-i)^{2}}{t+i}}&-i(t+i)\\\end{bmatrix}}U(t)={\frac {1}{\sqrt {2}}}{\begin{bmatrix}t+i&t-i\\i(t-i)&-i(t+i)\\\end{bmatrix}},} where det ( Q ( t ) ) = i ( 1 − t 2 ) . {\displaystyle \det(Q(t))=i(1-t^{2}).} This is our desired decomposition A ( t ) = Q ( t ) Q ( t ) ∗ {\displaystyle A(t)=Q(t)Q(t)^{*}} with no singularities in the lower half plane.
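The algebra in this example can be checked symbolically. The following is a small sketch assuming SymPy, verifying that the final factor Q ( t ) {\displaystyle Q(t)} (with the prefactor 1/√2) reproduces A ( t ) {\displaystyle A(t)} on the real line and has the stated determinant.

import sympy as sp

t = sp.symbols('t', real=True)

A = sp.Matrix([[t**2 + 1, 2*t],
               [2*t,      t**2 + 1]])

# Final factor from the example above.
Q = sp.Matrix([[t + sp.I,         t - sp.I],
               [sp.I*(t - sp.I), -sp.I*(t + sp.I)]]) / sp.sqrt(2)

print(sp.simplify(Q * Q.H - A))   # zero matrix, so A(t) = Q(t) Q(t)* for real t
print(sp.simplify(Q.det()))       # equals i(1 - t^2)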
After these modifications, the decomposition P ( t ) = Q ( t ) Q ( t ) ∗ {\displaystyle P(t)=Q(t)Q(t)^{*}} satisfies that Q ( t ) {\displaystyle Q(t)} is holomorphic and invertible on the lower half plane. To extend analyticity to the upper half plane we need this key observation: if a rational matrix Q ( t ) {\displaystyle Q(t)} is holomorphic and invertible in the lower half plane, then Q ( t ) − 1 {\displaystyle Q(t)^{-1}} is holomorphic in the lower half plane as well. The determinant of a rational polynomial matrix can only have poles where its entries have poles, so det ( Q ( t ) ) {\displaystyle \det(Q(t))} has no poles in the lower half plane, and by invertibility it has no zeros there either. The analyticity of Q ( t ) − 1 {\displaystyle Q(t)^{-1}} then follows from the adjugate matrix formula, since both the entries of Q ( t ) {\displaystyle Q(t)} and det ( Q ( t ) ) − 1 {\displaystyle \det(Q(t))^{-1}} are analytic on the lower half plane.
Consequently, from P ( t ) = Q ( t ) Q ( t ) ∗ {\displaystyle P(t)=Q(t)Q(t)^{*}} we get Q ( t ) = ( Q ( t ) − 1 P ( t ) ) ∗ . {\displaystyle Q(t)=(Q(t)^{-1}P(t))^{*}.} Since Q ( t ) − 1 {\displaystyle Q(t)^{-1}} is analytic on the lower half plane and P ( t ) {\displaystyle P(t)} is a polynomial matrix, Q ( t ) {\displaystyle Q(t)} is analytic on the upper half plane. Finally if Q ( t ) {\displaystyle Q(t)} has a pole on the real line then Q ( t ) ∗ {\displaystyle Q(t)^{*}} has the same pole on the real line, which contradicts the hypothesis that P ( t ) {\displaystyle P(t)} has no poles on the real line (i.e. it is analytic everywhere).
The above shows that if Q ( t ) {\displaystyle Q(t)} is analytic and invertible on the lower half plane indeed Q ( t ) {\displaystyle Q(t)} is analytic everywhere and hence a polynomial matrix.
Given two polynomial matrix decompositions which are invertible on the lower half plane P ( t ) = Q ( t ) Q ( t ) ∗ = R ( t ) R ( t ) ∗ , {\displaystyle P(t)=Q(t)Q(t)^{*}=R(t)R(t)^{*},} then R ( t ) − 1 Q ( t ) Q ( t ) ∗ ( R ( t ) ∗ ) − 1 = I . {\displaystyle R(t)^{-1}Q(t)Q(t)^{*}(R(t)^{*})^{-1}=I.} Since R ( t ) {\displaystyle R(t)} is analytic on the lower half plane and nonsingular, R ( t ) − 1 Q ( t ) {\displaystyle R(t)^{-1}Q(t)} is a rational polynomial matrix which is analytic and invertible on the lower half plane. As such, R ( t ) − 1 Q ( t ) {\displaystyle R(t)^{-1}Q(t)} is a polynomial matrix which is unitary for all t ∈ R {\displaystyle t\in \mathbb {R} } . This means that if q i ( t ) {\displaystyle \mathbf {q} _{i}(t)} is the i t h {\displaystyle i^{th}} row of R ( t ) − 1 Q ( t ) {\displaystyle R(t)^{-1}Q(t)} then q i ( t ) q i ( t ) ∗ = 1 {\displaystyle \mathbf {q} _{i}(t)\mathbf {q} _{i}(t)^{*}=1} . For real t {\displaystyle t} this is a sum of non-negative polynomials which sums to a constant, implying that each of the summands is in fact a constant polynomial. Then Q ( t ) = R ( t ) U {\displaystyle Q(t)=R(t)U} where U {\displaystyle U} is a scalar unitary matrix. | https://en.wikipedia.org/wiki/Polynomial_matrix_spectral_factorization
In mathematics, the polynomial method is an algebraic approach to combinatorics problems that involves capturing some combinatorial structure using polynomials and proceeding to argue about their algebraic properties. Recently (around 2016), the polynomial method has led to the development of remarkably simple solutions to several long-standing open problems. [ 1 ] The polynomial method encompasses a wide range of specific techniques for using polynomials and ideas from areas such as algebraic geometry to solve combinatorics problems. While a few techniques that follow the framework of the polynomial method, such as Alon's Combinatorial Nullstellensatz , [ 2 ] have been known since the 1990s, it was not until around 2010 that a broader framework for the polynomial method was developed.
Many uses of the polynomial method follow the same high-level approach. The approach is as follows:
As an example, we outline Dvir's proof of the Finite Field Kakeya Conjecture using the polynomial method. [ 3 ]
Finite Field Kakeya Conjecture : Let F q {\displaystyle \mathbb {F} _{q}} be a finite field with q {\displaystyle q} elements. Let K ⊆ F q n {\displaystyle K\subseteq \mathbb {F} _{q}^{n}} be a Kakeya set, i.e. for each vector y ∈ F q n {\displaystyle y\in \mathbb {F} _{q}^{n}} there exists x ∈ F q n {\displaystyle x\in \mathbb {F} _{q}^{n}} such that K {\displaystyle K} contains a line { x + t y , t ∈ F q } {\displaystyle \{x+ty,t\in \mathbb {F} _{q}\}} . Then the set K {\displaystyle K} has size at least c n q n {\displaystyle c_{n}q^{n}} where c n > 0 {\displaystyle c_{n}>0} is a constant that only depends on n {\displaystyle n} .
Proof: The proof we give will show that K {\displaystyle K} has size at least c n q n − 1 {\displaystyle c_{n}q^{n-1}} . The bound of c n q n {\displaystyle c_{n}q^{n}} can be obtained using the same method with a little additional work.
Assume we have a Kakeya set K {\displaystyle K} with
| K | < ( q + n − 3 n − 1 ) {\displaystyle |K|<{q+n-3 \choose n-1}}
Consider the set of monomials of the form x 1 d 1 x 2 d 2 … x n d n {\displaystyle x_{1}^{d_{1}}x_{2}^{d_{2}}\dots x_{n}^{d_{n}}} of degree exactly q − 2 {\displaystyle q-2} . There are exactly ( q + n − 3 n − 1 ) {\displaystyle {q+n-3 \choose n-1}} such monomials. Thus, there exists a nonzero homogeneous polynomial P ( x 1 , x 2 , … , x n ) {\displaystyle P(x_{1},x_{2},\dots ,x_{n})} of degree q − 2 {\displaystyle q-2} that vanishes on all points in K {\displaystyle K} . Note this is because finding such a polynomial reduces to solving a homogeneous system of | K | {\displaystyle |K|} linear equations for the coefficients, and this system has a nontrivial solution since the number of unknown coefficients exceeds the number of equations.
Now we will use the property that K {\displaystyle K} is a Kakeya set to show that P {\displaystyle P} must vanish on all of F q n {\displaystyle \mathbb {F} _{q}^{n}} . Clearly P ( 0 , 0 … , 0 ) = 0 {\displaystyle P(0,0\dots ,0)=0} . Next, for y ≠ 0 {\displaystyle y\neq 0} , there is an x {\displaystyle x} such that the line { x + t y , t ∈ F q } {\displaystyle \{x+ty,t\in \mathbb {F} _{q}\}} is contained in K {\displaystyle K} . Since P {\displaystyle P} is homogeneous, if P ( z ) = 0 {\displaystyle P(z)=0} for some z ∈ F q n {\displaystyle z\in \mathbb {F} _{q}^{n}} then P ( c z ) = 0 {\displaystyle P(cz)=0} for any c ∈ F q {\displaystyle c\in \mathbb {F} _{q}} . In particular
P ( t x + y ) = P ( t ( x + t − 1 y ) ) = 0 {\displaystyle P(tx+y)=P(t(x+t^{-1}y))=0}
for all nonzero t ∈ F q {\displaystyle t\in \mathbb {F} _{q}} . However, P ( t x + y ) {\displaystyle P(tx+y)} is a polynomial of degree q − 2 {\displaystyle q-2} in t {\displaystyle t} but it has at least q − 1 {\displaystyle q-1} roots corresponding to the nonzero elements of F q {\displaystyle \mathbb {F} _{q}} so it must be identically zero. In particular, plugging in t = 0 {\displaystyle t=0} we deduce P ( y ) = 0 {\displaystyle P(y)=0} .
We have shown that P ( y ) = 0 {\displaystyle P(y)=0} for all y ∈ F q n {\displaystyle y\in \mathbb {F} _{q}^{n}} but P {\displaystyle P} has degree less than q − 1 {\displaystyle q-1} in each of the variables so this is impossible by the Schwartz–Zippel lemma . We deduce that we must actually have
| K | ≥ ( q + n − 3 n − 1 ) ∼ q n − 1 ( n − 1 ) ! {\displaystyle |K|\geq {q+n-3 \choose n-1}\sim {\frac {q^{n-1}}{(n-1)!}}}
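The linear-algebra step of the argument can be made concrete for small parameters. The following is a toy sketch in Python; the field size q (taken prime here so that modular inverses exist), the dimension n, and the point set standing in for K are all illustrative choices, not part of the proof.

from itertools import product

q, n = 5, 2                      # illustrative parameters: the field F_5, dimension 2
d = q - 2                        # degree of the sought homogeneous polynomial

# Exponent vectors (d_1, ..., d_n) with d_1 + ... + d_n = d, one per monomial.
monomials = [e for e in product(range(d + 1), repeat=n) if sum(e) == d]

# A small point set playing the role of K; existence only needs |K| < number of monomials.
K = [(1, 2), (3, 1), (4, 4)]
assert len(K) < len(monomials)

# One vanishing condition (a linear equation mod q) per point of K.
M = [[pow(a, e[0], q) * pow(b, e[1], q) % q for e in monomials] for (a, b) in K]

def nullspace_vector_mod_p(rows, ncols, p):
    # Gaussian elimination over F_p; returns a nonzero v with rows * v = 0.
    rows = [r[:] for r in rows]
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] % p), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][c], p - 2, p)
        rows[r] = [(v * inv) % p for v in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] % p:
                f = rows[i][c]
                rows[i] = [(u - f * w) % p for u, w in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = next(c for c in range(ncols) if c not in pivots)   # exists: more columns than pivots
    v = [0] * ncols
    v[free] = 1
    for row, c in zip(rows, pivots):
        v[c] = (-row[free]) % p
    return v

coeffs = nullspace_vector_mod_p(M, len(monomials), q)
poly = list(zip(coeffs, monomials))
print(poly)   # coefficients of a nonzero degree q-2 homogeneous polynomial vanishing on K
print(all(sum(c * a**e[0] * b**e[1] for c, e in poly) % q == 0 for (a, b) in K))   # True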
A variation of the polynomial method, often called polynomial partitioning, was introduced by Guth and Katz in their solution to the Erdős distinct distances problem . [ 4 ] Polynomial partitioning involves using polynomials to divide the underlying space into regions and arguing about the geometric structure of the partition. These arguments rely on results from algebraic geometry bounding the number of incidences between various algebraic curves. The technique of polynomial partitioning has been used to give a new proof of the Szemerédi–Trotter theorem via the polynomial ham sandwich theorem and has been applied to a variety of problems in incidence geometry. [ 5 ] [ 6 ]
A few examples of longstanding open problems that have been solved using the polynomial method are: | https://en.wikipedia.org/wiki/Polynomial_method_in_combinatorics |
Finding the roots of polynomials is a long-standing problem that has been extensively studied throughout history and has substantially influenced the development of mathematics. It involves determining either a numerical approximation or a closed-form expression of the roots of a univariate polynomial, i.e., determining approximate or closed form solutions of x {\displaystyle x} in the equation
a 0 + a 1 x + a 2 x 2 + ⋯ + a n x n = 0 {\displaystyle a_{0}+a_{1}x+a_{2}x^{2}+\cdots +a_{n}x^{n}=0}
where a i {\displaystyle a_{i}} are either real or complex numbers .
Efforts to understand and solve polynomial equations led to the development of important mathematical concepts, including irrational and complex numbers, as well as foundational structures in modern algebra such as fields, rings, and groups. Despite being historically important, finding the roots of higher-degree polynomials no longer plays a central role in mathematics and computational mathematics, with one major exception in computer algebra . [ 1 ]
Closed-form formulas for polynomial roots exist only when the degree of the polynomial is less than 5. The quadratic formula has been known since antiquity, and the cubic and quartic formulas were discovered in full generality during the 16th century.
When the degree of the polynomial is at least 5, a closed-form expression for the roots in terms of the polynomial coefficients does not exist in general, if the formula only uses additions, subtractions, multiplications, divisions, and radicals (taking n-th roots). This is due to the celebrated Abel–Ruffini theorem . On the other hand, the fundamental theorem of algebra shows that every nonconstant polynomial has at least one root. Therefore, root-finding algorithms consist of finding numerical solutions in most cases.
Root-finding algorithms can be broadly categorized according to the goal of the computation. Some methods aim to find a single root, while others are designed to find all complex roots at once. In certain cases, the objective may be to find roots within a specific region of the complex plane. It is often desirable and even necessary to select algorithms specific to the computational task for reasons of efficiency and accuracy. See Root Finding Methods for a summary of the existing methods available in each case.
The root-finding problem of polynomials was first recognized by the Sumerians and then the Babylonians. Since then, the search for closed-form formulas for polynomial equations lasted for thousands of years.
The Babylonians and Egyptians were able to solve specific quadratic equations in the second millennium BCE, and their solutions essentially correspond to the quadratic formula. [ 2 ]
However, it took two millennia of effort to state the quadratic formula in an explicit form similar to the modern formulation, provided by the Indian mathematician Brahmagupta in his book Brāhmasphuṭasiddhānta (625 CE). The full recognition of the quadratic formula requires the introduction of complex numbers, which took another millennium.
The first breakthrough in a closed-form formula of polynomials with degree higher than 2 took place in Italy. In the early 16th century, the Italian mathematician Scipione del Ferro found a closed-form formula for cubic equations of the form x 3 + m x = n {\displaystyle x^{3}+mx=n} , where m , n {\displaystyle m,n} are nonnegative numbers. Later, Niccolò Tartaglia also discovered methods to solve such cubic equations, and Gerolamo Cardano summarized and published their work in his book Ars Magna in 1545.
Meanwhile, Cardano's student Lodovico Ferrari discovered the closed-form formula of the quartic equations in 1540. His solution is based on the closed-form formula of the cubic equations, and thus its publication had to wait for that of the cubic formula.
In Ars Magna, Cardano noticed that Tartaglia's method sometimes involves extracting the square root of a negative number. In fact, this could happen even when the roots themselves are real . Later, the Italian mathematician Rafael Bombelli investigated these mathematical objects further by giving explicit arithmetic rules for them in his book Algebra , published in 1569. These mathematical objects are now known as the complex numbers , which are foundational in mathematics, physics, and engineering.
Since the discovery of the cubic and quartic formulas, solving quintic equations in closed form had been a major problem in algebra. The French lawyer Viète , who first formulated the root formula for cubics in modern language and applied trigonometric methods to root-solving, believed that his methods generalized to a closed-form formula in radicals for polynomials of arbitrary degree. Descartes also held the same opinion. [ 3 ]
However, Lagrange noticed the flaws in these arguments in his 1771 paper Reflections on the Algebraic Theory of Equations , where he analyzed why the methods used to solve the cubics and quartics would not work to solve the quintics. His argument involves studying the permutations of the roots of polynomial equations. Nevertheless, Lagrange still believed that a closed-form formula in radicals for the quintic exists. Gauss seems to have been the first prominent mathematician who suspected the insolvability of the quintic, as stated in his 1799 doctoral dissertation.
The first serious attempt at proving the insolvability of the quintic was given by the Italian mathematician Paolo Ruffini. He published six versions of his proof between 1799 and 1813, yet his proof was not widely accepted as the writing was long and difficult to understand, and turned out to have a gap.
The first rigorous and accepted proof of the insolvability of the quintic was famously given by Niels Henrik Abel in 1824, which made essential use of the Galois theory of field extensions. In the paper, Abel proved that polynomials with degree more than 4 do not have a closed-form root formula in radicals in general. This put an end to the search for closed-form formulas for the roots of polynomials in terms of radicals of the polynomial coefficients.
Since finding a closed-form formula for higher-degree polynomials is significantly harder than for quadratic equations, the earliest attempts to solve cubic equations were either geometrical or numerical. Also, for practical purposes, numerical solutions are necessary.
The earliest iterative approximation methods of root-finding were developed to compute square roots. In Heron of Alexandria 's book Metrica (1st-2nd century CE), approximate values of square roots were computed by iteratively improving an initial estimate. [ 4 ] Jamshīd al-Kāshī presented a generalized version of the method to compute n {\displaystyle n} th roots . A similar method was also found in Henry Briggs 's publication Trigonometria Britannica in 1633. Franciscus Vieta also developed an approximation method that is almost identical to Newton's method.
Newton further generalized the method to compute the roots of arbitrary polynomials in De analysi per aequationes numero terminorum infinitas (written in 1669, published in 1711), now known as Newton's method . In 1690, Joseph Raphson published a refinement of Newton's method, presenting it in a form that more closely aligned with the modern version used today. [ 5 ]
In 1879, the English mathematician Arthur Cayley noticed the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values in his paper The Newton–Fourier imaginary problem. This opened the way to the study of the theory of iterations of rational functions.
A class of methods for finding numerical values of real roots is based on real-root isolation . The first example of such a method was given by René Descartes in 1637. It counts the roots of a polynomial by examining sign changes in its coefficients. In 1807, the French mathematician François Budan de Boislaurent generalized Descartes' result into Budan's theorem , which counts the real roots in a half-open interval ( a , b ]. However, neither method is suitable as an effective algorithm.
The first complete real-root isolation algorithm was given by Jacques Charles François Sturm in 1829, known as Sturm's theorem .
In 1836, Alexandre Joseph Hidulphe Vincent proposed a method for isolating real roots of polynomials using continued fractions, a result now known as Vincent's theorem . The work was largely forgotten until it was rediscovered over a century later by J. V. Uspensky , who included it in his 1948 textbook Theory of Equations . The theorem was subsequently brought to wider academic attention by the American mathematician Alkiviadis G. Akritas , who recognized its significance while studying Uspensky's account. [ 6 ] [ 7 ] The first implementation of a real-root isolation method on a modern computer was given by G.E. Collins and Alkiviadis G. Akritas in 1976, where they proved an effective version of Vincent's theorem. Variants of the algorithm were subsequently studied. [ 8 ]
Before electronic computers were invented, people used mechanical computers to automate polynomial root-solving. In 1758, the Hungarian scientist J.A. De Segner proposed a design for a root-solving machine in his paper, which operates by drawing the graph of the polynomial on a plane and finding the roots as the intersections of the graph with the x-axis. In 1770, the English mathematician Jack Rowning investigated the possibility of drawing the graph of polynomials via local motions. [ 9 ]
In 1845, the English mathematician Francis Bashforth proposed to use trigonometric methods to simplify the root-finding problem. Given a polynomial a 0 + a 1 x + . . . + a n x n = 0 {\displaystyle a_{0}+a_{1}x+...+a_{n}x^{n}=0} , substitute x = cos t {\displaystyle x=\cos t} . Since cos n t {\displaystyle \cos ^{n}t} can be written as a linear combination of cos k t , k ∈ Z {\displaystyle \cos kt,k\in \mathbb {Z} } (see Chebyshev polynomials ), the polynomial can be reformulated into the following form
b 0 + b 1 cos ⁡ t + b 2 cos ⁡ 2 t + . . . + b n cos ⁡ n t {\displaystyle b_{0}+b_{1}\cos t+b_{2}\cos 2t+...+b_{n}\cos nt}
Such curves can be drawn by a harmonic analyzer (also known as a tide-predicting machine ). [ 10 ] The first harmonic analyzer was built by Lord Kelvin in 1872, while Bashforth had envisioned such a machine in his paper 27 years earlier. [ 11 ]
The Spanish engineer and mathematician Leonardo Torres Quevedo built several machines for solving real and complex roots of polynomials between 1893 and 1900. His machine employs a logarithmic algorithm, and has a mechanical component, based on the Endless principle, to compute the value of log ( a + b ) {\displaystyle \log(a+b)} from log a , log b {\displaystyle \log a,\log b} with high accuracy. This allowed him to achieve high accuracy in polynomial root-finding: the machine computes the roots of degree-8 polynomials with an accuracy of 10 − 3 {\displaystyle 10^{-3}} . [ 12 ]
The most widely used method for computing a root of any differentiable function f {\displaystyle f} is Newton's method , in which an initial guess x 0 {\displaystyle x_{0}} is iteratively refined. At each iteration the tangent line to f {\displaystyle f} at x n {\displaystyle x_{n}} is used as a linear approximation to f {\displaystyle f} , and its root is used as the succeeding guess x n + 1 {\displaystyle x_{n+1}} :
x n + 1 = x n − f ( x n ) f ′ ( x n ) . {\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}.}
If the initial guess x 0 {\displaystyle x_{0}} is sufficiently close to a root of f {\displaystyle f} , the value of x n {\displaystyle x_{n}} will converge to that root.
In particular, the method can be applied to compute a root of polynomial functions. In this case, the computations in Newton's method can be accelerated using Horner's method or evaluation with preprocessing for computing the polynomial and its derivative in each iteration.
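As a sketch of how this can look in practice, the following Python snippet evaluates the polynomial and its derivative together with Horner's scheme inside a Newton iteration; the coefficients, tolerance and starting guess are illustrative, and convergence is only local.

def horner_with_derivative(coeffs, x):
    # Evaluate p(x) and p'(x) together; coeffs are listed from highest to lowest degree.
    p, dp = coeffs[0], 0.0
    for c in coeffs[1:]:
        dp = dp * x + p          # derivative accumulates via the product rule
        p = p * x + c
    return p, dp

def newton_root(coeffs, x0, tol=1e-12, max_iter=100):
    # Refine the starting guess x0 with Newton's method.
    x = x0
    for _ in range(max_iter):
        p, dp = horner_with_derivative(coeffs, x)
        if dp == 0:
            break                # stationary point of p: the Newton step is undefined
        step = p / dp
        x -= step
        if abs(step) < tol:
            break
    return x

# p(x) = x^3 - 2x - 5 has a real root near 2.0945514815
print(newton_root([1, 0, -2, -5], 2.0))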
Though the rate of convergence of Newton's method is generally quadratic , it might converge much more slowly or even not converge at all. In particular, if the polynomial has no real root, and x 0 {\displaystyle x_{0}} is chosen to be a real number, then Newton's method cannot converge. However, if the polynomial has a real root that is larger than the largest real root of its derivative, then Newton's method converges quadratically to this largest root if x 0 {\displaystyle x_{0}} is chosen larger than it (there are easy ways for computing an upper bound of the roots, see Properties of polynomial roots ). This is the starting point of Horner's method for computing the roots.
Closely related to Newton's method are Halley's method and Laguerre's method . Both use the polynomial and its first two derivatives for an iterative process that has a cubic convergence . Combining two consecutive steps of these methods into a single step, one gets a rate of convergence of 9, at the cost of 6 polynomial evaluations (with Horner's rule). On the other hand, combining three steps of Newton's method gives a rate of convergence of 8 at the cost of the same number of polynomial evaluations. This gives a slight advantage to these methods (less clear for Laguerre's method, as a square root has to be computed at each step).
When applying these methods to polynomials with real coefficients and real starting points, Newton's and Halley's method stay inside the real number line. One has to choose complex starting points to find complex roots. In contrast, the Laguerre method with a square root in its evaluation will leave the real axis of its own accord.
Both the Aberth method and the similar yet simpler Durand–Kerner method simultaneously find all of the roots using only simple complex number arithmetic. The Aberth method is presently the most efficient method. Accelerated algorithms for multi-point evaluation and interpolation similar to the fast Fourier transform can help speed them up for large degrees of the polynomial.
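The Durand–Kerner iteration mentioned above is short enough to sketch directly. The following Python snippet is illustrative only, with arbitrary starting values and a fixed iteration count rather than a proper convergence test.

import numpy as np

def durand_kerner(coeffs, iterations=100):
    # Approximate all roots simultaneously; coeffs are listed from highest to lowest degree.
    coeffs = np.asarray(coeffs, dtype=complex)
    coeffs = coeffs / coeffs[0]                  # make the polynomial monic
    n = len(coeffs) - 1
    # Non-real starting values, spread out so that no two initial guesses coincide.
    roots = (0.4 + 0.9j) ** np.arange(n)
    for _ in range(iterations):
        for i in range(n):
            num = np.polyval(coeffs, roots[i])
            den = np.prod(roots[i] - np.delete(roots, i))
            roots[i] -= num / den
    return roots

# x^3 - 3x^2 + 3x - 5 = 0 has one real root and a complex conjugate pair.
print(durand_kerner([1, -3, 3, -5]))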
A free implementation of Aberth's method is available under the name of MPSolve . This is a reference implementation, which can routinely find the roots of polynomials of degree larger than 1,000, with more than 1,000 significant decimal digits.
Another method with this style is the Dandelin–Gräffe method (sometimes also ascribed to Lobachevsky ), which uses polynomial transformations to repeatedly and implicitly square the roots. This greatly magnifies variances in the roots. Applying Viète's formulas , one obtains easy approximations for the modulus of the roots, and with some more effort, for the roots themselves.
Arguably, the most reliable method to find all roots of a polynomial is to find the eigenvalues of the companion matrix of its monic form, which coincide with the roots of the polynomial. There are plenty of algorithms for computing the eigenvalues of matrices. The standard method for finding all roots of a polynomial in MATLAB uses the Francis QR algorithm to compute the eigenvalues of the corresponding companion matrix of the polynomial. [ 13 ]
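A minimal NumPy sketch of this approach follows; the companion-matrix convention and the example polynomial are illustrative, and numpy.roots uses essentially the same idea internally.

import numpy as np

def roots_via_companion(coeffs):
    # Roots of a polynomial (coefficients from highest to lowest degree)
    # computed as the eigenvalues of its companion matrix.
    coeffs = np.asarray(coeffs, dtype=float)
    monic = coeffs / coeffs[0]
    n = len(monic) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # ones on the subdiagonal
    C[:, -1] = -monic[:0:-1]         # last column: -a_n, ..., -a_1 of the monic polynomial
    return np.linalg.eigvals(C)

print(roots_via_companion([1, -6, 11, -6]))   # roots of (x-1)(x-2)(x-3), in some order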
In principle, one can use any eigenvalue algorithm to find the roots of the polynomial. However, for efficiency reasons one prefers methods that employ the structure of the matrix, that is, methods that can be implemented in matrix-free form. Among these methods are the power method , whose application to the transpose of the companion matrix is the classical Bernoulli's method to find the root of greatest modulus. The inverse power method with shifts, which finds some smallest root first, is what drives the complex ( cpoly ) variant of the Jenkins–Traub algorithm and gives it its numerical stability. Additionally, it has fast convergence with order 1 + φ ≈ 2.6 {\displaystyle 1+\varphi \approx 2.6} (where φ {\displaystyle \varphi } is the golden ratio ) even in the presence of clustered roots. This fast convergence comes with a cost of three polynomial evaluations per step, resulting in a residual of O (| f ( x )| 2+3 φ ) , which is a slower convergence than with three steps of Newton's method.
The oldest method of finding all roots is to start by finding a single root. When a root r has been found, it can be removed from the polynomial by dividing out the binomial x – r . The resulting polynomial contains the remaining roots, which can be found by iterating on this process. This idea, despite being common in theoretical derivations, does not work well in numerical computations because of the phenomenon of numerical instability : Wilkinson's polynomial shows that a very small modification of one coefficient may change dramatically not only the value of the roots, but also their nature (real or complex). Also, even with a good approximation, when one evaluates a polynomial at an approximate root, one may get a result that is far from zero. For example, if a polynomial of degree 20 (the degree of Wilkinson's polynomial) has a root close to 10, the derivative of the polynomial at the root may be of the order of 10 20 ; {\displaystyle 10^{20};} this implies that an error of 10 − 10 {\displaystyle 10^{-10}} on the value of the root may produce a value of the polynomial at the approximate root that is of the order of 10 10 . {\displaystyle 10^{10}.}
Finding the real roots of a polynomial with real coefficients is a problem that has received much attention since the beginning of 19th century, and is still an active domain of research.
Methods for finding all complex roots can provide the real roots. However, because of the numerical instability of polynomials, one may need arbitrary-precision arithmetic to decide whether a root with a small imaginary part is real or not. Moreover, as the number of real roots is, on average, proportional to the logarithm of the degree, [ 14 ] it is a waste of computer resources to compute the non-real roots when one is interested only in the real roots.
The standard way of computing real roots is to compute first disjoint intervals, called isolating intervals , such that each one contains exactly one real root, and together they contain all the roots. This computation is called real-root isolation . Having an isolating interval, one may use fast numerical methods, such as Newton's method for improving the precision of the result.
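A small sketch of this workflow, assuming SymPy (the polynomial is an arbitrary example): Poly.intervals returns disjoint isolating intervals with rational endpoints, and each isolated root can then be refined to any desired precision.

import sympy as sp

x = sp.symbols('x')
p = sp.Poly(x**5 - 3*x - 1, x)

# Disjoint isolating intervals, each containing exactly one real root.
print(p.intervals())

# Each isolated root can then be refined numerically to arbitrary precision.
for r in sp.real_roots(p):
    print(r.evalf(30))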
The oldest complete algorithm for real-root isolation results from Sturm's theorem . However, it appears to be much less efficient than the methods based on Descartes' rule of signs and its extensions: Budan's and Vincent's theorems . These methods divide into two main classes, one using continued fractions and the other using bisection. Both methods have been dramatically improved since the beginning of the 21st century. With these improvements they reach a computational complexity that is similar to that of the best algorithms for computing all the roots (even when all roots are real).
These algorithms have been implemented and are available in Mathematica (continued fraction method) and Maple (bisection method), as well as in other main computer algebra systems ( SageMath , PARI/GP ) . Both implementations can routinely find the real roots of polynomials of degree higher than 1,000.
Several fast tests exist that tell if a segment of the real line or a region of the complex plane contains no roots. By bounding the modulus of the roots and recursively subdividing the initial region indicated by these bounds, one can isolate small regions that may contain roots and then apply other methods to locate them exactly.
All these methods involve finding the coefficients of shifted and scaled versions of the polynomial. For large degrees, FFT -based accelerated methods become viable.
The Lehmer–Schur algorithm uses the Schur–Cohn test for circles; a variant, Wilf's global bisection algorithm uses a winding number computation for rectangular regions in the complex plane.
The splitting circle method uses FFT-based polynomial transformations to find large-degree factors corresponding to clusters of roots. The precision of the factorization is maximized using a Newton-type iteration. This method is useful for finding the roots of polynomials of high degree to arbitrary precision; it has almost optimal complexity in this setting. [ citation needed ]
If the given polynomial only has real coefficients, one may wish to avoid computations with complex numbers. To that effect, one has to find quadratic factors for pairs of conjugate complex roots. The application of the multidimensional Newton's method to this task results in Bairstow's method .
The real variant of Jenkins–Traub algorithm is an improvement of this method.
For polynomials whose coefficients are exactly given as integers or rational numbers , there is an efficient method to factorize them into factors that have only simple roots and whose coefficients are also given in precise terms. This method, called square-free factorization , is based on the multiple roots of a polynomial being the roots of the greatest common divisor of the polynomial and its derivative.
The square-free factorization of a polynomial p is a factorization p = p 1 p 2 2 ⋯ p k k {\displaystyle p=p_{1}p_{2}^{2}\cdots p_{k}^{k}} where each p i {\displaystyle p_{i}} is either 1 or a polynomial without multiple roots, and two different p i {\displaystyle p_{i}} do not have any common root.
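A short SymPy sketch of the idea (the example polynomial is arbitrary): the gcd of p and its derivative collects the repeated factors, and sqf_list returns the square-free decomposition directly.

import sympy as sp

x = sp.symbols('x')
p = (x - 1) * (x + 2)**2 * (x**2 + 1)**3

# The repeated part of p is the greatest common divisor of p and its derivative.
print(sp.factor(sp.gcd(p, sp.diff(p, x))))      # (x + 2)*(x**2 + 1)**2

# Square-free decomposition p = p_1 * p_2**2 * ... as a list of (factor, exponent) pairs.
print(sp.sqf_list(sp.expand(p)))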
An efficient method to compute this factorization is Yun's algorithm . | https://en.wikipedia.org/wiki/Polynomial_root-finding |
In mathematics a P-recursive equation can be solved for polynomial solutions . Sergei A. Abramov in 1989 and Marko Petkovšek in 1992 described an algorithm which finds all polynomial solutions of those recurrence equations with polynomial coefficients. [ 1 ] [ 2 ] The algorithm computes a degree bound for the solution in a first step. In a second step an ansatz for a polynomial of this degree is used and the unknown coefficients are computed by a system of linear equations . This article describes this algorithm.
In 1995 Abramov, Bronstein and Petkovšek showed that the polynomial case can be solved more efficiently by considering power series solution of the recurrence equation in a specific power basis (i.e. not the ordinary basis ( x n ) n ∈ N {\textstyle (x^{n})_{n\in \mathbb {N} }} ). [ 3 ]
Other algorithms which compute rational or hypergeometric solutions of a linear recurrence equation with polynomial coefficients also use algorithms which compute polynomial solutions.
Let K {\textstyle \mathbb {K} } be a field of characteristic zero and ∑ k = 0 r p k ( n ) y ( n + k ) = f ( n ) {\textstyle \sum _{k=0}^{r}p_{k}(n)\,y(n+k)=f(n)} a recurrence equation of order r {\textstyle r} with polynomial coefficients p k ∈ K [ n ] {\textstyle p_{k}\in \mathbb {K} [n]} , polynomial right-hand side f ∈ K [ n ] {\textstyle f\in \mathbb {K} [n]} and unknown polynomial sequence y ( n ) ∈ K [ n ] {\displaystyle y(n)\in \mathbb {K} [n]} . Furthermore deg ( p ) {\textstyle \deg(p)} denotes the degree of a polynomial p ∈ K [ n ] {\textstyle p\in \mathbb {K} [n]} (with deg ( 0 ) = − ∞ {\textstyle \deg(0)=-\infty } for the zero polynomial) and lc ( p ) {\textstyle {\text{lc}}(p)} denotes the leading coefficient of the polynomial. Moreover let q i = ∑ k = i r ( k i ) p k , b = max i = 0 , … , r ( deg ( q i ) − i ) , α ( n ) = ∑ i = 0 , … , r deg ( q i ) − i = b lc ( q i ) n i _ , d α = max { n ∈ N : α ( n ) = 0 } ∪ { − ∞ } {\displaystyle {\begin{aligned}q_{i}&=\sum _{k=i}^{r}{\binom {k}{i}}p_{k},&b&=\max _{i=0,\dots ,r}(\deg(q_{i})-i),\\\alpha (n)&=\sum _{i=0,\dots ,r \atop \deg(q_{i})-i=b}{\text{lc}}(q_{i})n^{\underline {i}},&d_{\alpha }&=\max\{n\in \mathbb {N} \,:\,\alpha (n)=0\}\cup \{-\infty \}\end{aligned}}} for i = 0 , … , r {\textstyle i=0,\dots ,r} where n i _ = n ( n − 1 ) ⋯ ( n − i + 1 ) {\textstyle n^{\underline {i}}=n(n-1)\cdots (n-i+1)} denotes the falling factorial and N {\textstyle \mathbb {N} } the set of nonnegative integers. Then deg ( y ) ≤ max { deg ( f ) − b , − b − 1 , d α } {\textstyle \deg(y)\leq \max\{\deg(f)-b,-b-1,d_{\alpha }\}} . This is called a degree bound for the polynomial solution y {\textstyle y} . This bound was shown by Abramov and Petkovšek. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
The algorithm consists of two steps. In a first step the degree bound is computed. In a second step an ansatz with a polynomial y {\textstyle y} of that degree with arbitrary coefficients in K {\textstyle \mathbb {K} } is made and plugged into the recurrence equation. Then the different powers are compared and a system of linear equations for the coefficients of y {\textstyle y} is set up and solved. This is called the method of undetermined coefficients . [ 5 ] The algorithm returns the general polynomial solution of a recurrence equation.
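The second step can be sketched in a few lines of SymPy, here applied to the recurrence treated in the example below and taking the degree bound 2 as given; the symbol names are illustrative.

import sympy as sp

n, y0, y1, y2 = sp.symbols('n y0 y1 y2')

# Quadratic ansatz y(n) = y2*n**2 + y1*n + y0 (degree bound 2).
y = lambda k: y2*k**2 + y1*k + y0

# Recurrence (n**2 - 2)*y(n) + (-n**2 + 2*n)*y(n + 1) = 2*n, moved to one side.
lhs = (n**2 - 2)*y(n) + (-n**2 + 2*n)*y(n + 1) - 2*n

# Compare the coefficients of each power of n and solve the linear system.
eqs = sp.Poly(sp.expand(lhs), n).all_coeffs()
sol = sp.solve(eqs, [y0, y1, y2], dict=True)[0]
print(sp.expand(y(n).subs(sol)))   # n**2 - n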
Applying the formula for the degree bound on the recurrence equation ( n 2 − 2 ) y ( n ) + ( − n 2 + 2 n ) y ( n + 1 ) = 2 n , {\displaystyle (n^{2}-2)\,y(n)+(-n^{2}+2n)\,y(n+1)=2n,} over Q {\textstyle \mathbb {Q} } yields deg ( y ) ≤ 2 {\textstyle \deg(y)\leq 2} . Hence one can use an ansatz with a quadratic polynomial y ( n ) = y 2 n 2 + y 1 n + y 0 {\textstyle y(n)=y_{2}n^{2}+y_{1}n+y_{0}} with y 0 , y 1 , y 2 ∈ Q {\textstyle y_{0},y_{1},y_{2}\in \mathbb {Q} } . Plugging this ansatz into the original recurrence equation leads to 2 n = ( n 2 − 2 ) y ( n ) + ( − n 2 + 2 n ) y ( n + 1 ) = ( y 1 + y 2 ) n 2 + ( 2 y 0 + 2 y 2 ) n − 2 y 0 . {\displaystyle 2n=(n^{2}-2)\,y(n)+(-n^{2}+2n)\,y(n+1)=(y_{1}+y_{2})\,n^{2}+(2y_{0}+2y_{2})\,n-2y_{0}.} This is equivalent to the following system of linear equations ( 0 1 1 2 0 2 − 2 0 0 ) ( y 0 y 1 y 2 ) = ( 0 2 0 ) {\displaystyle {\begin{aligned}{\begin{pmatrix}0&1&1\\2&0&2\\-2&0&0\end{pmatrix}}{\begin{pmatrix}y_{0}\\y_{1}\\y_{2}\end{pmatrix}}={\begin{pmatrix}0\\2\\0\end{pmatrix}}\end{aligned}}} with the solution y 0 = 0 , y 1 = − 1 , y 2 = 1 {\textstyle y_{0}=0,y_{1}=-1,y_{2}=1} . Therefore the only polynomial solution is y ( n ) = n 2 − n {\textstyle y(n)=n^{2}-n} . | https://en.wikipedia.org/wiki/Polynomial_solutions_of_P-recursive_equations |
In mathematics , a polynomial transformation consists of computing the polynomial whose roots are a given function of the roots of a polynomial. Polynomial transformations such as Tschirnhaus transformations are often used to simplify the solution of algebraic equations .
Let
P ( x ) = a 0 x n + a 1 x n − 1 + ⋯ + a n {\displaystyle P(x)=a_{0}x^{n}+a_{1}x^{n-1}+\cdots +a_{n}}
be a polynomial, and
α 1 , … , α n {\displaystyle \alpha _{1},\ldots ,\alpha _{n}}
be its complex roots (not necessarily distinct).
For any constant c , the polynomial whose roots are
α 1 + c , … , α n + c {\displaystyle \alpha _{1}+c,\ldots ,\alpha _{n}+c}
is
Q ( y ) = P ( y − c ) . {\displaystyle Q(y)=P(y-c).}
If the coefficients of P are integers and the constant c = p q {\displaystyle c={\frac {p}{q}}} is a rational number , the coefficients of Q may not be integers, but the polynomial q n Q {\displaystyle q^{n}Q} has integer coefficients and has the same roots as Q .
A special case is when c = a 1 n a 0 . {\displaystyle c={\frac {a_{1}}{na_{0}}}.} The resulting polynomial Q does not have any term in y n − 1 .
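A short SymPy sketch of this special case (the cubic below is an arbitrary example): substituting x = y − c with c = a 1 /( na 0 ) removes the term of degree n − 1.

import sympy as sp

x, y = sp.symbols('x y')
p = 2*x**3 + 6*x**2 - 5*x + 1            # a0 = 2, a1 = 6, n = 3

c = sp.Rational(6, 3*2)                  # c = a1 / (n*a0) = 1
q = sp.expand(p.subs(x, y - c))          # roots of q are the roots of p shifted by c
print(q)                                 # 2*y**3 - 11*y + 10, with no y**2 term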
Let
P ( x ) = a 0 x n + a 1 x n − 1 + ⋯ + a n {\displaystyle P(x)=a_{0}x^{n}+a_{1}x^{n-1}+\cdots +a_{n}}
be a polynomial. The polynomial whose roots are the reciprocals of the roots of P is its reciprocal polynomial
Q ( y ) = y n P ( 1 y ) = a n y n + a n − 1 y n − 1 + ⋯ + a 0 . {\displaystyle Q(y)=y^{n}P\left({\frac {1}{y}}\right)=a_{n}y^{n}+a_{n-1}y^{n-1}+\cdots +a_{0}.}
Let
P ( x ) = a 0 x n + a 1 x n − 1 + ⋯ + a n {\displaystyle P(x)=a_{0}x^{n}+a_{1}x^{n-1}+\cdots +a_{n}}
be a polynomial, and c be a non-zero constant. A polynomial whose roots are the product by c of the roots of P is
Q ( y ) = c n P ( y c ) = a 0 y n + a 1 c y n − 1 + ⋯ + a n c n . {\displaystyle Q(y)=c^{n}P\left({\frac {y}{c}}\right)=a_{0}y^{n}+a_{1}cy^{n-1}+\cdots +a_{n}c^{n}.}
The factor c n appears here because, if c and the coefficients of P are integers or belong to some integral domain , the same is true for the coefficients of Q .
In the special case where c = a 0 {\displaystyle c=a_{0}} , all coefficients of Q are multiples of c , and Q c {\displaystyle {\frac {Q}{c}}} is a monic polynomial , whose coefficients belong to any integral domain containing c and the coefficients of P . This polynomial transformation is often used to reduce questions on algebraic numbers to questions on algebraic integers .
Combining this with a translation of the roots by a 1 n a 0 {\displaystyle {\frac {a_{1}}{na_{0}}}} allows one to reduce any question on the roots of a polynomial, such as root-finding , to a similar question on a simpler polynomial, which is monic and does not have a term of degree n − 1 . For examples of this, see Cubic function § Reduction to a depressed cubic or Quartic function § Converting to a depressed quartic .
All preceding examples are polynomial transformations by a rational function , also called Tschirnhaus transformations . Let
be a rational function, where g and h are coprime polynomials. The polynomial transformation of a polynomial P by f is the polynomial Q (defined up to the product by a non-zero constant) whose roots are the images by f of the roots of P .
Such a polynomial transformation may be computed as a resultant . In fact, the roots of the desired polynomial Q are exactly the complex numbers y such that there is a complex number x such that one has simultaneously (if the coefficients of P , g and h are not real or complex numbers, "complex number" has to be replaced by "element of an algebraically closed field containing the coefficients of the input polynomials" )
P ( x ) = 0 and y h ( x ) − g ( x ) = 0. {\displaystyle P(x)=0\quad {\text{and}}\quad y\,h(x)-g(x)=0.}
This is exactly the defining property of the resultant
Res x ⁡ ( P ( x ) , y h ( x ) − g ( x ) ) . {\displaystyle \operatorname {Res} _{x}{\bigl (}P(x),\,y\,h(x)-g(x){\bigr )}.}
This is generally difficult to compute by hand. However, as most computer algebra systems have a built-in function to compute resultants, it is straightforward to compute it with a computer .
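A short SymPy sketch of this computation (the polynomial P and the transformation f are arbitrary examples, with h = 1 for simplicity):

import sympy as sp

x, y = sp.symbols('x y')

P = x**3 - 2                       # roots: the three cube roots of 2
g, h = x**2 + x, sp.Integer(1)     # transformation f(x) = x**2 + x

# Eliminating x gives a polynomial in y whose roots are f(alpha) for each root alpha of P.
Q = sp.resultant(P, y*h - g, x)
print(sp.expand(Q))                # y**3 - 6*y - 6 (up to a nonzero constant factor)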
If the polynomial P is irreducible , then either the resulting polynomial Q is irreducible, or it is a power of an irreducible polynomial. Let α {\displaystyle \alpha } be a root of P and consider L , the field extension generated by α {\displaystyle \alpha } . The former case means that f ( α ) {\displaystyle f(\alpha )} is a primitive element of L , which has Q as minimal polynomial . In the latter case, f ( α ) {\displaystyle f(\alpha )} belongs to a subfield of L and its minimal polynomial is the irreducible polynomial of which Q is a power.
Polynomial transformations have been applied to the simplification of polynomial equations for solution, where possible, by radicals. Descartes introduced the transformation of a polynomial of degree d which eliminates the term of degree d − 1 by a translation of the roots. Such a polynomial is termed depressed . This already suffices to solve the quadratic by square roots. In the case of the cubic, Tschirnhaus transformations replace the variable by a quadratic function, thereby making it possible to eliminate two terms, and so can be used to eliminate the linear term in a depressed cubic to achieve the solution of the cubic by a combination of square and cube roots. The Bring–Jerrard transformation, which is quartic in the variable, brings a quintic into Bring-Jerrard normal form with terms of degree 5,1, and 0. | https://en.wikipedia.org/wiki/Polynomial_transformation |
In organic chemistry , a polyol is an organic compound containing multiple hydroxyl groups ( −OH ). The term "polyol" can have slightly different meanings depending on whether it is used in food science or polymer chemistry . Polyols containing two, three and four hydroxyl groups are diols , [ 1 ] triols , [ 2 ] and tetrols, [ 3 ] [ 4 ] respectively.
Polyols may be classified according to their chemistry. [ 5 ] Some of these chemistries are polyether, polyester, [ 6 ] polycarbonate [ 7 ] [ 8 ] and also acrylic polyols. [ 9 ] [ 10 ] Polyether polyols may be further subdivided and classified as polyethylene oxide or polyethylene glycol (PEG), polypropylene glycol (PPG) and Polytetrahydrofuran or PTMEG. These have 2, 3 and 4 carbons respectively per oxygen atom in the repeat unit. Polycaprolactone polyols are also commercially available. [ 11 ] There is also an increasing trend to use biobased (and hence renewable) polyols. [ 12 ] [ 13 ] [ 14 ] [ 15 ]
Polyether polyols have numerous uses. [ 16 ] [ 17 ] As an example, polyurethane foam is a big user of polyether polyols. [ 18 ]
Polyester polyols can be used to produce rigid foam. [ 19 ] [ 20 ] They are available in both aromatic and aliphatic versions. [ 21 ] [ 22 ] They are also available in mixed aliphatic-aromatic versions often made from recycled raw materials, typically polyethylene terephthalate (PET). [ 23 ]
Acrylic polyols are generally used in higher performance applications where stability to ultraviolet light is required [ 24 ] and also lower VOC coatings. [ 25 ] [ 26 ] Other uses include direct to metal coatings. [ 27 ] As they are used where good UV resistance is required, such as automotive coatings, the isocyanate component also tends to be UV resistant and hence isocyanate oligomers or prepolymers based on Isophorone diisocyanate are generally used. [ 28 ]
Caprolactone-based polyols produce polyurethanes with enhanced hydrolysis resistance. [ 29 ] [ 30 ]
Polycarbonate polyols are more expensive than other polyols and are thus used in more demanding applications. [ 31 ] [ 32 ] They have been used to make an isophorone diisocyanate based prepolymer which is then used in glass coatings. [ 33 ] They may be used in reactive hotmelt adhesives . [ 34 ]
All polyols may be used to produce polyurethane prepolymers . [ 35 ] [ 36 ] [ 37 ] These then find use in coatings , [ 38 ] adhesives , sealants and elastomers . [ 39 ]
Low molecular weight polyols are widely used in polymer chemistry where they function as crosslinking agents and chain extenders. Alkyd resins for example, use polyols in their synthesis and are used in paints and in molds for casting . They are the dominant resin or "binder" in most commercial "oil-based" coatings. Approximately 200,000 tons of alkyd resins are produced each year. They are based on linking reactive monomers through ester formation. Polyols used in the production of commercial alkyd resins are glycerol , trimethylolpropane , and pentaerythritol . [ 40 ] In polyurethane prepolymer production, a low molecular weight polyol- diol such as 1,4-butanediol may be used as a chain extender to further increase molecular weight though it does increase viscosity because more hydrogen bonding is introduced. [ 38 ]
Xylitol
Sugar alcohols , a class of low molecular weight polyols, are commonly obtained by hydrogenation of sugars. [ 41 ] : 363 They have the formula (CHOH) n H 2 , where n = 4–6. [ 42 ]
Sugar alcohols are added to foods because of their lower caloric content than sugars ; however, they are also, in general, less sweet, and are often combined with high-intensity sweeteners . They are also added to chewing gum because they are not broken down by bacteria in the mouth or metabolized to acids, and thus do not contribute to tooth decay . Maltitol , sorbitol , xylitol , erythritol , and isomalt are common sugar alcohols.
Polyether polyol (the oxygen atoms of the ether linkages are shown in blue).
Polyester polyol (the oxygen and carbon atoms of the ester groups are shown in blue).
The term polyol is used for various chemistries of the molecular backbone. Polyols may be reacted with diisocyanates or polyisocyanates to produce polyurethanes . MDI finds considerable use in PU foam production. [ 43 ] Polyurethanes are used to make flexible foam for mattresses and seating, rigid foam insulation for refrigerators and freezers , elastomeric shoe soles, fibers (e.g. Spandex ), coatings, sealants and adhesives . [ 44 ]
The term polyol is also attributed to other molecules containing hydroxyl groups. For instance, polyvinyl alcohol is (CH 2 CHOH) n with n hydroxyl groups where n can be in the thousands. Cellulose is a polymer with many hydroxyl groups, but it is not referred to as a polyol.
There are polyols based on renewable sources such as plant-based materials including castor oil and cottonseed oil . [ 45 ] [ 46 ] [ 47 ] Vegetable oils and biomass are also potential renewable polyol raw materials. [ 48 ] Seed oil can even be used to produce polyester polyols. [ 49 ]
Since the generic term polyol is only derived from chemical nomenclature and just indicates the presence of several hydroxyl groups, no common properties can be assigned to all polyols. However, polyols are usually viscous at room temperature due to hydrogen bonding. | https://en.wikipedia.org/wiki/Polyol |
The polyol pathway is a two-step process that converts glucose to fructose. [ 1 ] In this pathway glucose is reduced to sorbitol, which is subsequently oxidized to fructose. It is also called the sorbitol-aldose reductase pathway .
The pathway is implicated in diabetic complications, especially in microvascular damage to the retina , [ 2 ] kidney , [ 3 ] and nerves . [ 4 ]
Sorbitol cannot cross cell membranes , and, when it accumulates, it produces osmotic stresses on cells by drawing water into the insulin-independent tissues. [ 5 ]
Cells use glucose for energy . This normally occurs by phosphorylation from the enzyme hexokinase . However, if large amounts of glucose are present (as in diabetes mellitus ), hexokinase becomes saturated and the excess glucose enters the polyol pathway when aldose reductase reduces it to sorbitol. This reaction oxidizes NADPH to NADP+ . Sorbitol dehydrogenase can then oxidize sorbitol to fructose , which produces NADH from NAD+ . Hexokinase can return the molecule to the glycolysis pathway by phosphorylating fructose to form fructose-6-phosphate. However, in uncontrolled diabetics who have high blood glucose (more than the glycolysis pathway can handle), the reaction's mass balance ultimately favors the production of sorbitol. [ 6 ]
Activation of the polyol pathway results in a decrease of reduced NADPH and oxidized NAD+; these are necessary co-factors in redox reactions throughout the body, and under normal conditions they are not interchangeable. The decreased concentration of NADPH leads to decreased synthesis of reduced glutathione , nitric oxide , myo-inositol , and taurine . Myo-inositol is particularly required for the normal function of nerves. Sorbitol may also glycate nitrogens on proteins , such as collagen , and the products of these glycations are referred to as AGEs - advanced glycation end-products . AGEs are thought to cause disease in the human body, one effect of which is mediated by RAGE (receptor for advanced glycation end-products) and the ensuing inflammatory responses induced. They are seen in the hemoglobin A1C tests performed on known diabetics to assess their levels of glucose control. [ 6 ]
While most cells require the action of insulin for glucose to gain entry into the cell, the cells of the retina , kidney , and nervous tissues are insulin-independent, so glucose moves freely across the cell membrane , regardless of the action of insulin. The cells will use glucose for energy as normal, and any glucose not used for energy will enter the polyol pathway. When blood glucose is normal (about 100 mg/dL or 5.5 mmol/L), this interchange causes no problems, as aldose reductase has a low affinity for glucose at normal concentrations . [ citation needed ]
In a hyperglycemic state, the affinity of aldose reductase for glucose rises, causing much sorbitol to accumulate, and using much more NADPH , leaving less NADPH for other processes of cellular metabolism . [ 7 ] This change of affinity is what is meant by activation of the pathway. The amount of sorbitol that accumulates, however, may not be sufficient to cause osmotic influx of water.
NADPH acts to promote nitric oxide production and glutathione reduction, and its deficiency will cause glutathione deficiency. A glutathione deficiency , congenital or acquired, can lead to hemolysis caused by oxidative stress . Nitric oxide is one of the important vasodilators in blood vessels. Therefore, NADPH prevents reactive oxygen species from accumulating and damaging cells. [ 6 ]
Excessive activation of the polyol pathway increases intracellular and extracellular sorbitol concentrations, increased concentrations of reactive oxygen species, and decreased concentrations of nitric oxide and glutathione. Each of these imbalances can damage cells; in diabetes there are several acting together. It has not been conclusively determined that activating the polyol pathway damages the microvascular systems. [ 6 ] | https://en.wikipedia.org/wiki/Polyol_pathway |
Polyoxetane ( POX ), or poly(oxetane) , is synthetic organic heteroatomic thermoplastic polymer with molecular formula (–OCH 2 CH 2 CH 2 –) n . It is polymerized from oxetane monomer , which is a four-membered cyclic ether .
The necessary chemistry was observed and developed through the 1930s and 1940s. The very first polymerized oxetane was 3,3-bis(chloromethyl)oxetane, followed by other 3,3-disubstituted derivatives during the 1950s. [ 1 ] Unsubstituted oxetane itself was polymerized in 1956.
Tens of oxetane derivatives have been synthesized and many of them are polymerizable. [ 2 ] Reasons for the inability of some to polymerize are differences in basicity and ring strain caused by the electronic effects and bulkiness of the substituents, as well as their position. Major 3-substituted and 3,3-disubstituted monomers are summarized in the oxetane article.
The ring strain of unsubstituted oxetane is 107 kJ/mol. [ 2 ] That is twenty [ 2 ] times more than that of the non-polymerizable six-membered tetrahydropyran . Oxetane polymerizes via a cationic , ring-opening mechanism. Special oxetanes are polymerizable by other mechanisms. [ 3 ] [ 4 ]
The propagation centre is a tertiary oxonium ion , mainly initialized by Lewis acids , trialkyl oxonium salts, carbocationic salts and others. Strong acids tend to generate secondary oxonium ions, which are unreactive, thus they are not initiators of first choice. On the other hand, super acids (e.g. HSO 3 F) are effective initiators of cationic polymerization of cyclic ethers, such as oxetane. For sufficient stability of the propagation centre, a counterion X – of low nucleophilicity is required, such as SbCl 6 – , PF 6 – , AsF 6 – or SbF 6 – . [ 5 ] [ 6 ] The first polymerizations were conducted with compounds providing BF 4 – or BF 3 OH – counterions. [ 2 ] Propagation is very fast, and thus the preparation of lower-molecular-weight products (also with desired functional end groups) has not been achieved to date. [ when? ] [ 2 ]
Unsymmetrically substituted oxetanes polymerize according to the ability of the monomer to attack one or both alpha-carbons of the propagation centre. Unsubstituted and 3-substituted derivatives polymerize in a symmetrical manner, but 2-substituted derivatives can form any of the basic types of polymer chain connections ( head-to-tail , head-to-head and tail-to-tail) . However, with the right conditions and initiation system, stereospecific propagation can be achieved. [ 7 ]
Oxygen atoms of the main chain possess enough reactivity to attack the oxonium propagation centre, either forming cyclic oligomers (usually tetramers [ 8 ] ) or causing depolymerization.
These reactions within one molecule are referred to as backbiting . During polymerization of unsubstituted oxetane, mutual attack of two growing chains may occur, to a small extent, to form acyclic oxonium ions. This process is called temporary termination . [ 9 ]
The side reactions mentioned above compete in speed with propagation. The faster the propagation, the fewer side reactions take place. The speed of propagation depends on the monomer polymerized, the initiation system used and the polymerization conditions.
Polymerization is conducted in a mixture of methylene chloride and petrol at −25 °C for 4 to 8 hours to obtain a suspension of the polymer. The catalytic system consists of 1–2% BF 3 and 0.1–0.4% epichlorohydrin , which acts as a cocatalyst. The final suspension is neutralised, stripped with steam, filtered, washed and dried. [ 10 ]
A series of substituted oxetanes have been synthesized and polymerized. The very first polymerized oxetane was 3,3-bis(chloromethyl)oxetane. [ 2 ]
Polyoxetanes can be liquids or solids with a wide range of crystallinity and melting temperature. Final material characteristics depend on the symmetry, bulkiness and polarity of the substituents. [ 2 ] For example, the melting temperature of POX is 35 °C. One methyl substituent in position 2 or 3 ensures the amorphous character of polymethyloxetanes. [ 11 ] Oxetanes symmetrically disubstituted on the same carbon give crystalline polymers, such as 3,3-dimethyloxetane. The melting point of poly(3,3-dimethyloxetane) is 47 °C. Halogens increase the melting point of oxetane polymers; the bigger the halogen atom, the higher the melting temperature. Melting temperatures of halogenated polyoxetanes vary from 135 to 290 °C. [ 11 ] Amorphous, low-melting polyoxetanes are soluble in common organic solvents, whereas crystalline ones are not. [ 12 ]
Butyllithium has been used to break up polyoxetane to lower molecular weight POX glycols with hydroxyl (–OH) functional end groups. With the same result, degradation with ozone followed by reduction by LiAlH 4 can be used. [ 13 ] Polyoxetane glycols can be used for manufacturing of polyurethane networks [ 14 ] and preparation of copolymers. [ 15 ]
Two main reasons to copolymerize oxetanes are adjustment of crystallinity and modification of material properties.
Oxetanes are copolymerized mainly with tetrahydrofuran (THF) to produce precursors of soft segments of polyurethanes (PUR), polyethers and polyamide elastomers. [ 16 ] In particular, the statistical copolymer of BCMO and THF is an amorphous, tough rubber. [ 17 ] Oxetane derivatives that cannot be homopolymerized are able to copolymerize with homopolymerizable oxetanes. The most studied monomer in copolymerization has been BCMO. [ 18 ] Copolymers with thermoplastic elastomer behavior have also been prepared. [ 15 ]
Polyoxetanes are engineering polymers. Only one oxetane polymer, derived from 3,3–bis(chloromethyl)oxetane (BCMO), has had industrial application. It was available under the trade mark Penton by Hercules, Inc. (USA) and Pentaplast (Russia). Its main use was sterilizable goods, because of its relatively high heat-distortion temperature and low water absorption. The polymer is self-extinguishing (because of the chlorine atoms present in the polymer chain) and is highly chemically resistant. It stands up to most organic solvents and strong alkali. It dissolves in strong acids, such as concentrated HNO 3 or H 2 SO 4 . [ 19 ] A typical number-average molecular weight ranges between 250 000 and 350 000 g/mol. It can be conventionally processed via injection moulding. Moulded goods exhibit low shrinkage and excellent dimensional stability in general. [ 19 ]
Examples of parts that can be constructed from costly PBCMO are bearings, valves, parts for fitting cables and electrical parts, etc. It is a very good anti-corrosive coating with guarantee of corrosive stability with main use for chemical tanks. [ 20 ] It's great material for desalination membranes. [ 21 ] Perfluorinated oxetanes (–CF 2 CF 2 CF 2 O–) n exhibit great friction-reducing properties [ 22 ] and are potentially useful for gas separation membranes [ 23 ]
Significant part of oxetanes are turned into polyoxetanes glycols and other polymeric materials. [ 24 ]
By replacing hydrogen(s) in position 3 by electron deficient groups, energetic polymers can be prepared. Desired functional groups are ethyl (CH3–CH 2 –), nitro (NO 2 –) or 2-oxa-4,4-dinitropentyl (CH 3 –C(NO 2 ) 2 –CH 2 –O–CH 2 –). Energetic polymers can be used as explosives and propellants or they are precursors for manufacturing of mentioned above. They burn with a great deal of smoke. | https://en.wikipedia.org/wiki/Polyoxetane |
Polyoxins are a group of peptidyl nucleoside antibiotics . [ 1 ] [ 2 ] They are a complex produced by Streptomyces cacaoi var. asoensis and S. piomogenus . [ 3 ] Polyoxin compounds share the same base structure but differ in the composition of certain functional groups. At least fifteen polyoxin compounds are known, designated polyoxin A, B, C and so forth; polyoxins A through O have been described, and all except 'C' and 'I' have fungicidal activity against phytopathogenic fungi. [ 3 ] Some polyoxins have been used as agricultural fungicides because of this.
Polyoxins work by inhibiting the biosynthesis of chitin . [ 4 ] [ 2 ]
| https://en.wikipedia.org/wiki/Polyoxins |
In chemistry , a polyoxometalate (abbreviated POM ) is a polyatomic ion , usually an anion , that consists of three or more transition metal oxyanions linked together by shared oxygen atoms to form closed 3-dimensional frameworks. The metal atoms are usually group 6 (Mo, W) or less commonly group 5 ( V , Nb , Ta ) and group 7 ( Tc , Re ) transition metals in their high oxidation states . Polyoxometalates are often colorless, orange or red diamagnetic anions . Two broad families are recognized: isopolymetalates, composed of only one kind of metal and oxide , and heteropolymetalates , composed of one or more metals, oxide, and sometimes a main group oxyanion ( phosphate , silicate , etc.). Many exceptions to these general statements exist. [ 1 ] [ 2 ]
The oxides of d 0 metals such as V 2 O 5 , MoO 3 , WO 3 dissolve at high pH to give orthometalates, VO 3− 4 , MoO 2− 4 , WO 2− 4 . For Nb 2 O 5 and Ta 2 O 5 , the nature of the dissolved species at high pH is less clear, but these oxides also form polyoxometalates.
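As an illustration (a reconstructed equation, not quoted from the cited references), the dissolution of tungsten trioxide in base to give the orthotungstate ion can be written:
WO 3 + 2 OH − → [WO 4 ] 2− + H 2 O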
As the pH is lowered, orthometalates protonate to give oxide–hydroxide compounds such as WO 3 (OH) − and VO 3 (OH) 2− . These species condense via the process called olation . The replacement of terminal M=O bonds, which in fact have triple bond character, is compensated by the increase in coordination number. The nonobservation of polyoxochromate cages is rationalized by the small radius of Cr(VI), which may not accommodate octahedral coordination geometry. [ 1 ]
Condensation of the MO 3 (OH) n − species entails loss of water and the formation of M−O−M linkages. The stoichiometry for hexamolybdate is shown: [ 3 ]
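A representative balanced equation for the formation of hexamolybdate on acidification (a reconstruction, not quoted from the cited source) is:
6 [MoO 4 ] 2− + 10 H + → [Mo 6 O 19 ] 2− + 5 H 2 O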
An abbreviated condensation sequence illustrated with vanadates is: [ 1 ] [ 4 ]
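As an illustration (a sketch, not quoted from the cited sources), an abbreviated vanadate condensation sequence with decreasing pH runs approximately:
[VO 4 ] 3− → [V 2 O 7 ] 4− → [V 4 O 12 ] 4− → [V 10 O 28 ] 6−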
When such acidifications are conducted in the presence of phosphate or silicate , heteropolymetalates result. For example, the phosphotungstate anion [PW 12 O 40 ] 3− consists of a framework of twelve octahedral tungsten oxyanions surrounding a central phosphate group.
Ammonium phosphomolybdate , containing the [PMo 12 O 40 ] 3− anion, was reported in 1826. [ 5 ] The isostructural phosphotungstate anion was characterized by X-ray crystallography in 1934. This structure is called the Keggin structure after its discoverer. [ 6 ]
The 1970s witnessed the introduction of quaternary ammonium salts of POMs. [ 3 ] This innovation enabled systematic study without the complications of hydrolysis and acid/base reactions. The introduction of 17 O NMR spectroscopy allowed the structural characterization of POMs in solution. [ 7 ]
Ramazzoite , the first example of a mineral with a polyoxometalate cation, was described in 2016 in Mt. Ramazzo Mine, Liguria, Italy . [ 8 ]
The typical framework building blocks are polyhedral units, with 6-coordinate metal centres. Usually, these units share edges and/or vertices. The coordination number of the oxide ligands varies according to their location in the cage. Surface oxides tend to be terminal or doubly bridging oxo ligands . Interior oxides are typically triply bridging or even octahedral. [ 1 ] POMs are sometimes viewed as soluble fragments of metal oxides . [ 7 ]
Recurring structural motifs allow POMs to be classified. Iso -polyoxometalates (isopolyanions) feature octahedral metal centers. The heteropolymetalates form distinct structures because the main group center is usually tetrahedral. The Lindqvist and Keggin structures are common motifs for iso- and heteropolyanions, respectively.
Polyoxometalates typically exhibit coordinate metal–oxo bonds of different multiplicity and strength. In a typical POM such as the Keggin structure [PW 12 O 40 ] 3− , each addenda center connects to a single terminal oxo ligand, four bridging μ 2 -O ligands and one bridging μ 3 -O deriving from the central heterogroup. [ 9 ] Metal–metal bonds are normally absent in polyoxometalates, and owing to this property F. Albert Cotton opposed considering polyoxometalates a form of cluster compound . [ 10 ] However, metal–metal bonds are not completely absent in polyoxometalates, and they are often present among the highly reduced species. [ 11 ]
The polymolybdates and polytungstates are derived, formally at least, from the dianionic [MO 4 ] 2- precursors. The most common units for polymolybdates and polyoxotungstates are the octahedral {MO 6 } centers, sometimes slightly distorted. Some polymolybdates contain pentagonal bipyramidal units. These building blocks are found in the molybdenum blues , which are mixed valence compounds . [ 1 ]
Polyoxotechnetates form only under strongly acidic conditions, such as in HTcO 4 or trifluoromethanesulfonic acid solutions. The first empirically isolated polyoxotechnetate was the red [Tc 20 O 68 ] 4− . It contains Tc(V) and Tc(VII) in a 4:16 ratio and is obtained as the hydronium salt [H 7 O 3 ] 4 [Tc 20 O 68 ]·4H 2 O by concentrating an HTcO 4 solution. [ 12 ] The corresponding ammonium polyoxotechnetate salt was recently isolated from trifluoromethanesulfonic acid and has a very similar structure. [ 13 ] The only known polyoxorhenate forms under acidic conditions in the presence of a pyrazolium cation. The first empirically isolated polyoxorhenate was the white [Re 4 O 15 ] 2− . It contains Re(VII) in both octahedral and tetrahedral coordination. [ 14 ]
Crystals of the mixed polyoxo(technetate–rhenate) polyanion [Tc 4 O 4 (H 2 O) 2 (ReO 4 ) 14 ] 2− , which contains Tc(V) and Re(VII), were also isolated [ 15 ] and structurally characterized.
The polyniobates, polytantalates, and vanadates are derived, formally at least, from highly charged [MO 4 ] 3- precursors. For Nb and Ta, most common members are M 6 O 8− 19 (M = Nb, Ta), which adopt the Lindqvist structure. These octaanions form in strongly basic conditions from alkali melts of the extended metal oxides (M 2 O 5 ), or in the case of Nb even from mixtures of niobic acid and alkali metal hydroxides in aqueous solution. The hexatantalate can also be prepared by condensation of peroxotantalate Ta(O 2 ) 3− 4 in alkaline media. [ 16 ] These polyoxometalates display an anomalous aqueous solubility trend of their alkali metal salts inasmuch as their Cs + and Rb + salts are more soluble than their Na + and Li + salts. The opposite trend is observed in group 6 POMs. [ 17 ]
The decametalates with the formula M 10 O 6− 28 (M = Nb, [ 18 ] Ta [ 19 ] ) are isostructural with decavanadate. They are formed exclusively by edge-sharing {MO 6 } octahedra (the structure of decatungstate W 10 O 4− 32 comprises edge-sharing and corner-sharing tungstate octahedra).
Heteroatoms aside from the transition metal are a defining feature of heteropolymetalates . Many different elements can serve as heteroatoms, but the most common are PO 3− 4 , SiO 4− 4 , and AsO 3− 4 .
Polyoxomolybdates include the wheel-shaped molybdenum blue anions and spherical keplerates. The cluster [Mo 154 O 420 (NO) 14 (OH) 28 (H 2 O) 70 ] 20− consists of more than 700 atoms and is the size of a small protein. The anion has the form of a tire (the cavity has a diameter of more than 20 Å) with an extremely large inner and outer surface. The incorporation of lanthanide ions in molybdenum blues is particularly intriguing. [ 20 ] Lanthanides can behave like Lewis acids and display catalytic properties. [ 21 ] Lanthanide -containing polyoxometalates show chemoselectivity [ 22 ] and are also able to form inorganic–organic adducts, which can be exploited in chiral recognition. [ 23 ]
Oxoalkoxometalates are clusters that contain both oxide and alkoxide ligands. [ 24 ] Typically they lack terminal oxo ligands. Examples include the dodecatitanate Ti 12 O 16 (OPri) 16 (where OPri stands for the isopropoxy group), [ 25 ] the iron oxoalkoxometalates [ 26 ] and iron [ 27 ] and copper [ 28 ] Keggin ions.
The terminal oxide centers of polyoxometalate framework can in certain cases be replaced with other ligands, such as S 2− , Br − , and NR 2− . [ 5 ] [ 29 ] Sulfur-substituted POMs are called polyoxothiometalates . Other ligands replacing the oxide ions have also been demonstrated, such as nitrosyl and alkoxy groups. [ 24 ] [ 30 ]
Polyfluoroxometalates are yet another class of O-replaced oxometalates. [ 31 ]
Numerous hybrid organic–inorganic materials that contain POM cores have also been reported. [ 32 ] [ 33 ] [ 34 ]
Illustrative of the diverse structures of POM is the ion CeMo 12 O 8− 42 , which has face-shared octahedra with Mo atoms at the vertices of an icosahedron. [ 35 ]
POMs are employed as commercial catalysts for oxidation of organic compounds. [ 36 ] [ 37 ]
Efforts continue to extend this theme. POM-based aerobic oxidations have been promoted as alternatives to chlorine -based wood pulp bleaching processes, [ 38 ] as a method of decontaminating water, [ 39 ] and as a method to catalytically produce formic acid from biomass (the OxFA process ). [ 40 ] Polyoxometalates have been shown to catalyse water splitting . [ 41 ]
Some POMs exhibit unusual magnetic properties, [ 42 ] which has prompted visions of many applications. Examples include quantum information storage elements ( qubits ) [ 43 ] and non-volatile (permanent) storage components, also known as flash memory devices. [ 44 ] [ 45 ]
POMs have also been investigated as potential antitumor [ 46 ] and antiviral drugs. [ 47 ] The Anderson-type polyoxomolybdates and heptamolybdates exhibit activity for suppressing the growth of some tumors. In the case of (NH 3 Pr) 6 [Mo 7 O 24 ], the activity appears related to its redox properties. [ 48 ] [ 49 ] The Wells-Dawson structure can efficiently inhibit amyloid β (Aβ) aggregation, a therapeutic strategy for Alzheimer's disease. [ 50 ] [ 51 ] Antibacterial [ 52 ] and antiviral uses have also been explored. | https://en.wikipedia.org/wiki/Polyoxometalate |
Polypeptoids are a class of peptidomimetic polymers. They are based on an N-substituted glycine backbone; the side chains are attached directly to the polymer backbone via the nitrogen of the amide group , rather than at the α-carbon as in polypeptides. As opposed to polypeptides , polypeptoids have an achiral backbone devoid of hydrogen bond donors, which makes them easier to process but unable to form hydrogen-bond-stabilized secondary structures such as helices.
The chemical and structural diversity of polypeptoids have enabled access to and adjustment of a variety of physicochemical and biological properties (e.g., solubility, charge characteristics, chain conformation, Hydrophilic-Lipophilic-Balance , thermal processability, degradability, cytotoxicity and immunogenicity).
The structure of polypeptoids combines many of the advantageous properties of bulk polymers with those of synthetically produced proteins . These attributes have made this synthetic polymer platform a potential candidate for various biomedical applications such as encapsulation, and biotechnological applications such as biomaterials.
The main characteristic of polypeptoids is their high resistance to proteolytic enzymes. The absence of hydrogen on the amide nitrogen prevents proteases and peptidases, which recognise and degrade natural polypeptides, from hydrolysing polypeptoids. The inductive electron-donating effects (+I) provide stability, improving their resistance to proteases and temperature, and enabling them to withstand different biological conditions without undergoing rapid degradation. This means that polypeptoids can be used for medical purposes where a long duration of action is required.
In addition, they have good resistance to extreme chemical conditions, such as pH variations, high temperatures and exposure to certain solvents . They are therefore a good alternative to polypeptides , which cannot be used in certain situations, where they would be rapidly degraded.
The difference in structure between a peptoid and an α-peptide lies in the location of the side chain. In a peptoid, this chain is located on the amide nitrogen, whereas in an α-peptide it is on the α-carbon.
This difference in structure makes the polypeptoid backbone achiral . Because the amide bonds are tertiary, they can undergo isomerization between trans and cis conformations much more easily than the secondary amides of α-peptides.
Moreover, without the amide protons, the secondary structure cannot be stabilized by backbone hydrogen bonding in the same way as in peptides. Polypeptoids therefore possess greater conformational flexibility than polypeptides . Indeed, the absence of hydrogen bonds between NH and CO groups in the main chain prevents secondary structures (notably β-sheets or α-helices ) from forming. Thanks to their flexibility, polypeptoids are capable of self-assembling into various nanostructures , such as micelles or nanometre-scale polymer assemblies used to form films and hydrogels. These properties may be useful for applications in materials engineering and medicine.
Polypeptoids possess great flexibility in terms of solubility . By varying the nature of the side chains, it is possible to form water-soluble, amphiphilic or hydrophobic polypeptoids. They are also biocompatible and only weakly immunogenic . This means that when they are introduced into a living organism, they have the advantage of not generating an undesirable immune response . This characteristic is necessary for their use in the biomedical field, particularly for medical implants.
The Zuckermann method, carried out on a solid support, consists of a first acylation step using a haloacetic acid, followed by a second step, an S N 2 substitution with a primary amine as the nucleophile . This method allows the synthesis of polymers containing up to 50 units. It produces highly pure compounds (≥ 95%) at low cost from inexpensive building blocks. Finally, cleavage of the polypeptoids from the resin requires TFA.
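As a schematic sketch of these two steps (not taken from the cited source; bromoacetic acid is used here as an example of a haloacetic acid, and R denotes the side chain introduced by the primary amine), one cycle on a resin-bound amine can be written:
resin–NH 2 + BrCH 2 COOH → resin–NH–CO–CH 2 Br (acylation)
resin–NH–CO–CH 2 Br + R–NH 2 → resin–NH–CO–CH 2 –NH–R (S N 2 substitution)
Repetition of these two steps extends the chain by one N-substituted glycine unit per cycle.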
The ROP NNCA method (Ring-Opening Polymerization of N-Substituted N-carboxyanhydride ) involves the opening of an N-Substituted N-carboxyanhydride by an amine. It enables the creation of polypeptoids in various forms, whether linear or cyclic.
The mechanism begins with an initiation step involving an addition/elimination reaction of a nitrogen -containing group (R can vary: a ring, an alkyl chain, etc.) on a carbonyl, followed by a second step, decarboxylation. However, this highly reactive system is sensitive to nucleophilic impurities, which can lead to undesired initiation of polymerization.
Developed by Zhang and other researchers, this technique uses N-heterocyclic carbenes (NHCs) as nucleophilic initiators. These initiators also act as polymerization mediators by serving as counterions . This mechanism consists of an initiation step, where the NNCA/NCA ring is opened by the NHC, followed by a propagation step, and finally a termination step (termination agents: water, alcohol, etc.). The major advantage of this method is its ability to produce high-molecular-weight cyclic compounds while preventing side reactions through Coulombic attraction between the two chain ends.
Polypeptoids can be used in the pharmaceutical field, especially in the form of hydrophobically modified polypeptoids (HMPs), which contain up to 100 monomeric units. The nitrogen atoms of these polymers are functionalized with alkyl groups, which constitutes the hydrophobic element, while the rest of the molecule forms a highly soluble backbone.
HMPs can thus insert their hydrophobic segments into vesicular lipid bilayers , destabilizing these structures and rupturing the vesicles, unlike detergents, which transform the vesicles into mixed micelles of lipids and detergent. At low HMP concentrations, this rupture leads to the creation of large fragments that anchor to intact vesicles through hydrophobic interactions . At high HMP concentrations, all vesicles rupture into smaller HMP–lipid fragments of around 10 nm. These polymer–lipid nanofragments can be used to keep highly hydrophobic drug species in solution.
In the pharmaceutical field, HMPs are used in the design of new drug delivery systems, through interaction with liposomes , allowing modification of their behavior.
These systems can indeed deliver hydrophilic molecules by encapsulation in the aqueous core of the liposome or hydrophobic drug species by integration into the lipid bilayer. This second alternative is particularly used in cancer treatment.
These HMP molecules are used, for example, to encapsulate highly hydrophobic drugs in HMP–lipid fragments, such as SF, a protein tyrosine kinase inhibitor approved by the FDA for the treatment of renal cell carcinoma and of thyroid and liver cancer .
Research has therefore led to the design of an HMP in which 74% of the nitrogen atoms are functionalized with the neutral N-methoxyethyl group (MeOEt) and 26% with the N-n-decyl group (C10). This polymer has a molar mass of 13.9 kDa and remains soluble in water, but can engage in hydrophobic interactions with liposomes.
These systems thus allow the uptake of SF into cells, facilitated by the free hydrophobic groups on the HMP, which enable insertion into cell membranes and entry via endocytic pathways. | https://en.wikipedia.org/wiki/Polypeptoids |
Feeding is the process by which organisms, typically animals , obtain food . Terminology often uses either the suffixes -vore, -vory, or -vorous from Latin vorare , meaning "to devour", or -phage, -phagy, or -phagous from Greek φαγεῖν ( phagein ), meaning "to eat".
The evolution of feeding is varied with some feeding strategies evolving several times in independent lineages. In terrestrial vertebrates, the earliest forms were large amphibious piscivores 400 million years ago. While amphibians continued to feed on fish and later insects, reptiles began exploring two new food types, other tetrapods (carnivory), and later, plants (herbivory). Carnivory was a natural transition from insectivory for medium and large tetrapods, requiring minimal adaptation (in contrast, a complex set of adaptations was necessary for feeding on highly fibrous plant materials). [ 1 ]
The specialization of organisms towards specific food sources is one of the major causes of evolution of form and function, such as:
There are many modes of feeding that animals exhibit, including:
Polyphagy is the habit, in an animal species, of eating and tolerating a relatively wide variety of foods, whereas monophagy is the intolerance of every food except for one specific type (see generalist and specialist species ). Oligophagy is a term for intermediate degrees of selectivity, referring to animals that eat a relatively small range of foods, either because of preference or necessity. [ 2 ]
Another classification refers to the specific food animals specialize in eating, such as:
The eating of non-living or decaying matter:
There are also several unusual feeding behaviours, either normal, opportunistic , or pathological, such as:
An opportunistic feeder sustains itself from a number of different food sources, because the species is behaviourally sufficiently flexible.
Some animals exhibit hoarding and caching behaviours in which they store or hide food for later use. | https://en.wikipedia.org/wiki/Polyphagy |
Polypharmacology is the design or use of pharmaceutical agents that act on multiple targets or disease pathways. [ 1 ]
Despite scientific advancements and an increase in global R&D spending, drugs are frequently withdrawn from markets. This is primarily due to their side effects or toxicities. Drug molecules often interact with multiple targets, and the unintended drug–target interactions can cause side effects. Polypharmacology remains one of the major challenges in drug development, and it opens novel avenues to rationally design the next generation of more effective but less toxic therapeutic agents. [ 2 ] Polypharmacology suggests that more effective drugs can be developed by specifically modulating multiple targets. [ 3 ] [ 4 ]
It is generally thought that complex diseases such as cancer and central nervous system diseases may require complex therapeutic approaches. In this respect, a drug that "hits" multiple sensitive nodes belonging to a network of interacting targets offers the potential for higher efficacy and may limit drawbacks generally arising from the use of a single-target drug or a combination of multiple drugs. [ 5 ] In contrast, chemical biology continues to be a reductionist discipline, still regarding chemical probes as highly selective small molecules that enable the modulation and study of one specific target. Chemical biology cannot continue to overlook the existence of polypharmacology [ according to whom? ] and needs to become a more holistic discipline that looks at the use of tool compounds from a systems perspective. [ 6 ] The use of chemoproteomics offers strategies to develop a more holistic understanding of the proteome-wide range of targets a drug interacts with. [ 7 ]
The primordial idea of polypharmacology was first proposed in 2004 by Bryan Roth . [ 8 ] He reasoned that most common central nervous system disorders are polygenic in origin, and attempts to develop more effective treatments for diseases such as schizophrenia and depression by discovering drugs selective for single molecular targets ('magic bullets') have been largely unsuccessful. He therefore proposed a proof of concept that designing selectively non-selective drugs (that is, 'magic shotguns') that interact with several molecular targets will lead to new and more effective medications for a variety of central nervous system disorders.
A similar concept was independently proposed in 2006 by Professor Zhiguo Wang , [ 9 ] who used the term 'single agent–multiple targets' (SAMT) to describe the same principle as 'magic shotguns'. His research team provided the first experimental evidence for the feasibility, effectiveness and advantages of SAMT, specifically a complex decoy oligodeoxynucleotide (cdODN) technology attacking multiple target transcription factors, in the treatment of xenograft breast cancer in mice. Subsequently, Wang's team extended SAMT to the design of single agents that act on multiple miRNAs targeting cancer cells, cardiac pacemaker channel genes and calcium channel genes as a new therapeutic approach. [ 10 ] [ 11 ] [ 12 ] Wang has published two monographs on polypharmacology: Polypharmacology: Principles and Methodologies [ 13 ] and Anti-Aging Polypharmacology . Wang's work is now categorized as 'Epigenetic Polypharmacology' or 'Targeted Polypharmacology', a branch of polypharmacology. [ 14 ] In 2008, Professor Kevan Shokat and his colleagues described a single compound that blocks the proliferation of tumor cells by direct inhibition of oncogenic tyrosine kinases and phosphatidylinositol-3-OH kinases, and termed it a 'multitargeted drug' along with the concept of 'polypharmacology'. [ 15 ] Since then, polypharmacology has become a new branch of the pharmacology discipline and a research field in its own right, as well as one of the new directions and strategies for drug development. [ 16 ] | https://en.wikipedia.org/wiki/Polypharmacology |
Polypharmacy (polypragmasia) is an umbrella term to describe the simultaneous use of multiple medicines by a patient for their conditions. [ 1 ] [ 2 ] [ 3 ] The term polypharmacy is often defined as regularly taking five or more medicines but there is no standard definition and the term has also been used in the context of when a person is prescribed 2 or more medications at the same time. [ 1 ] [ 4 ] [ 5 ] Polypharmacy may be the consequence of having multiple long-term conditions, also known as multimorbidity and is more common in the elderly. [ 6 ] [ 7 ] In some cases, an excessive number of medications at the same time is worrisome, especially for people who are older with many chronic health conditions, because this increases the risk of an adverse event in that population. [ 8 ] [ 9 ] In many cases, polypharmacy cannot be avoided, but 'appropriate polypharmacy' practices are encouraged to decrease the risk of adverse effects. [ 10 ] Appropriate polypharmacy is defined as the practice of prescribing for a person who has multiple conditions or complex health needs by ensuring that medications prescribed are optimized and follow 'best evidence' practices. [ 10 ]
The prevalence of polypharmacy is estimated to be between 10% and 90% depending on the definition used, the age group studied, and the geographic location. [ 11 ] Polypharmacy continues to grow in importance because of aging populations . Many countries are experiencing a fast growth of the older population, 65 years and older. [ 12 ] [ 13 ] [ 14 ] This growth is a result of the baby-boomer generation getting older and an increased life expectancy as a result of ongoing improvement in health care services worldwide. [ 15 ] [ 16 ] About 21% of adults with intellectual disability are also exposed to polypharmacy. [ 17 ] The level of polypharmacy has been increasing in the past decades. Research in the USA shows that the percentage of patients greater than 65 years-old using more than 5 medications increased from 24% to 39% between 1999 and 2012. [ 18 ] Similarly, research in the UK found that the number of older people taking 5 plus medication had quadrupled from 12% to nearly 50% between 1994 and 2011. [ 19 ]
Polypharmacy is not necessarily ill-advised, but in many instances can lead to negative outcomes or poor treatment effectiveness, often being more harmful than helpful or presenting too much risk for too little benefit . Therefore, health professionals consider it a situation that requires monitoring and review to validate whether all of the medications are still necessary. Concerns about polypharmacy include increased adverse drug reactions , drug interactions , prescribing cascade , and higher costs. [ 20 ] A prescribing cascade occurs when a person is prescribed a drug and experiences an adverse drug effect that is misinterpreted as a new medical condition, so the patient is prescribed another drug. [ 21 ] Polypharmacy also increases the burden of medication taking particularly in older people and is associated with medication non-adherence . [ 22 ]
Polypharmacy is often associated with a decreased quality of life , including decreased mobility and cognition . [ 23 ] Patient factors that influence the number of medications a patient is prescribed include a high number of chronic conditions requiring a complex drug regimen. Other systemic factors that impact the number of medications a patient is prescribed include a patient having multiple prescribers and multiple pharmacies that may not communicate.
Whether or not the advantages of polypharmacy (over taking single medications or monotherapy ) outweigh the disadvantages or risks depends upon the particular combination and diagnosis involved in any given case. [ 24 ] The use of multiple drugs, even in fairly straightforward illnesses, is not an indicator of poor treatment and is not necessarily overmedication . Moreover, it is well accepted in pharmacology that it is impossible to accurately predict the side effects or clinical effects of a combination of drugs without studying that particular combination of drugs in test subjects. Knowledge of the pharmacologic profiles of the individual drugs in question does not assure accurate prediction of the side effects of combinations of those drugs; and effects also vary among individuals because of genome -specific pharmacokinetics . Therefore, deciding whether and how to reduce a list of medications ( deprescribe ) is often not simple and requires the experience and judgment of a practicing clinician, as the clinician must weigh the pros and cons of keeping the patient on the medication. However, such thoughtful and wise review is an ideal that too often does not happen, owing to problems such as poorly handled care transitions (poor continuity of care, usually because of siloed information ), overworked physicians and other clinical staff, and interventionism .
While polypharmacy is typically regarded as undesirable, prescription of multiple medications can be appropriate and therapeutically beneficial in some circumstances. [ 25 ] “Appropriate polypharmacy” is described as prescribing for complex or multiple conditions in such a way that necessary medicines are used based on the best available evidence at the time to preserve safety and well-being. [ 25 ] Polypharmacy is clinically indicated in some chronic conditions, for example in diabetes mellitus , but should be discontinued when evidence of benefit from the prescribed drugs no longer outweighs potential for harm (described below in Contraindications). [ 25 ]
Often certain medications can interact with others in a positive way specifically intended when prescribed together, to achieve a greater effect than any of the single agents alone. This is particularly prominent in the field of anesthesia and pain management – where atypical agents such as antiepileptics , antidepressants , muscle relaxants , NMDA antagonists , and other medications are combined with more typical analgesics such as opioids , prostaglandin inhibitors, NSAIDS and others. This practice of pain management drug synergy [ 26 ] is known as an analgesia sparing effect.
People who are at greatest risk for negative polypharmacy consequences include elderly people, people with psychiatric conditions, patients with intellectual or developmental disabilities, [ 29 ] people taking five or more drugs at the same time, those with multiple physicians and pharmacies , people who have been recently hospitalized, people who have concurrent comorbidities , [ 30 ] people who live in rural communities, people with inadequate access to education, [ 31 ] and those with impaired vision or dexterity. Marginalized populations may have a greater degrees of polypharmacy, which can occur more frequently in younger age groups. [ 32 ]
It is not uncommon for people who are dependent or addicted to substances to enter or remain in a state of polypharmacy misuse. [ 33 ] About 84% of prescription drug misusers reported using multiple drugs. [ 33 ] Note, however, that the term polypharmacy and its variants generally refer to legal drug use as-prescribed, even when used in a negative or critical context.
Measures can be taken to limit polypharmacy to its truly legitimate and appropriate needs. This is an emerging area of research, frequently called deprescribing . [ 34 ] Reducing the number of medications, as part of a clinical review, can be an effective healthcare intervention. [ 35 ] Clinical pharmacists can perform drug therapy reviews and teach physicians and their patients about drug safety and polypharmacy, as well as collaborating with physicians and patients to correct polypharmacy problems. Similar programs are likely to reduce the potentially deleterious consequences of polypharmacy such as adverse drug events, non-adherence, hospital admissions, drug-drug interactions, geriatric syndromes, and mortality. [ 36 ] Such programs hinge upon patients and doctors informing pharmacists of other medications being prescribed, as well as herbal, over-the-counter substances and supplements that occasionally interfere with prescription-only medication. Staff at residential aged care facilities have a range of views and attitudes towards polypharmacy that, in some cases, may contribute to an increase in medication use. [ 37 ]
The risk of polypharmacy increases with age, although there is some evidence that it may decrease slightly after age 90 years. [ 2 ] Poorer health is a strong predictor of polypharmacy at any age, although it is unclear whether the polypharmacy causes the poorer health or if polypharmacy is used because of the poorer health. [ 2 ] It appears possible that the risk factors for polypharmacy may be different for younger and middle-aged people compared to older people. [ 2 ]
The use of polypharmacy is correlated to the use of potentially inappropriate medications. Potentially inappropriate medications are generally taken to mean those that have been agreed upon by expert consensus, such as by the Beers Criteria . These medications are generally inappropriate for older adults because the risks outweigh the benefits. [ 38 ] Examples of these include urinary anticholinergics used to treat incontinence ; the associated risks, with anticholinergics, include constipation, blurred vision, dry mouth, impaired cognition, and falls. [ 39 ] Many older people living in long term care facilities experience polypharmacy, and under-prescribing of potentially indicated medicines and use of high risk medicines can also occur. [ 38 ] Medicine use rises from 6.0 ± 3.8 regular medicines on average when people enter long term care to 8.9 ± 4.1 regular medicines after two years. [ 40 ]
Polypharmacy is associated with an increased risk of falls in elderly people. [ 41 ] [ 42 ] Certain medications are well known to be associated with the risk of falls, including cardiovascular and psychoactive medications . [ 43 ] [ 44 ] There is some evidence that the risk of falls increases cumulatively with the number of medications. [ 45 ] [ 46 ] Although often not practical to achieve, withdrawing all medicines associated with falls risk can halve an individual's risk of future falls.
Every medication has potential adverse side-effects. With every drug added, there is an additive risk of side-effects. Also, some medications have interactions with other substances, including foods, other medications, and herbal supplements. [ 47 ] 15% of older adults are potentially at risk for a major drug-drug interaction. [ 48 ] Older adults are at a higher risk for a drug-drug interaction due to the increased number of medications prescribed and metabolic changes that occur with aging. [ 49 ] When a new drug is prescribed, the risk of interactions increases exponentially. Doctors and pharmacists aim to avoid prescribing medications that interact; often, adjustments in the dose of medications need to be made to avoid interactions. For example, warfarin interacts with many medications and supplements that can cause it to lose its effect. [ 49 ] [ 50 ]
Pill burden is the number of pills (tablets or capsules, the most common dosage forms) that a person takes on a regular basis, along with all associated efforts that increase with that number — like storing, organizing, consuming, and understanding the various medications in one's regimen. The use of individual medications is growing faster than pill burden. [ 51 ] A recent study found that older adults in long term care are taking an average of 14 to 15 tablets every day. [ 52 ]
Poor medical adherence is a common challenge among individuals who have increased pill burden and are subject to polypharmacy. [ 53 ] It also increases the possibility of adverse medication reactions ( side effects ) and drug-drug interactions . High pill burden has also been associated with an increased risk of hospitalization, medication errors, and increased costs for both the pharmaceuticals themselves and for the treatment of adverse events. Finally, pill burden is a source of dissatisfaction for many patients and family carers. [ 22 ]
High pill burden was commonly associated with antiretroviral drug regimens to control HIV , [ 54 ] and is also seen in other patient populations. [ 53 ] For instance, adults with multiple common chronic conditions such as diabetes , hypertension , lymphedema , hypercholesterolemia , osteoporosis , constipation , inflammatory bowel disease , and clinical depression may be prescribed more than a dozen different medications daily. [ 55 ] The combination of multiple drugs has been associated with an increased risk of adverse drug events. [ 56 ]
Reducing pill burden is recognized as a way to improve medication compliance , also referred to as adherence. This is done through " deprescribing ", where the risks and benefits are weighed when considering whether to continue a medication. [ 57 ] This includes drugs such as bisphosphonates (for osteoporosis ), which are often taken indefinitely although there is only evidence to use them for five to ten years. [ 57 ] Patient educational programs, reminder messages, medication packaging, and the use of memory tricks have also been seen to improve adherence and reduce pill burden in several countries. [ 49 ] These include associating medications with mealtimes, recording the dosage on the box, storing the medication in a special place, leaving it in plain sight in the living room, or putting the prescription sheet on the refrigerator. [ 49 ] The development of applications has also shown some benefit in this regard. [ 49 ] The use of a polypill regimen, such as a combination pill for HIV treatment, as opposed to a multi-pill regimen, also alleviates pill burden and increases adherence. [ 53 ]
The selection of long-acting active ingredients over short-acting ones may also reduce pill burden. For instance, ACE inhibitors are used in the management of hypertension . [ medical citation needed ] Both captopril and lisinopril are examples of ACE inhibitors. However, lisinopril is dosed once a day, whereas captopril may be dosed 2-3 times a day. Assuming that there are no contraindications or potential for drug interactions, using lisinopril instead of captopril may be an appropriate way to limit pill burden. [ medical citation needed ]
The most common intervention to help people who are struggling with polypharmacy is deprescribing (reducing the number of medications prescribed, and identifying and discontinuing medications which now do more harm than good [ 58 ] ) [ 59 ] Deprescribing is distinct from medication simplification, reducing the number of dose forms and administration times. [ 60 ] This can commonly be done as an elderly patient becomes more frail and treatment needs to shift from preventative to palliative . [ 58 ] Deprescribing is feasible and effective in many settings including residential care, communities and hospitals. [ 59 ] Deprescribing should be considered when: (1) a new symptom or adverse event arises, (2) the person develops an end-stage disease, (3) the combination of drugs is risky, or (4) stopping the drug does not alter the disease trajectory. [ 10 ]
Several tools exist to help physicians decide when to deprescribe and what medications can be added to a pharmaceutical regimen. The Beers Criteria and the STOPP/START criteria help identify medications that have the highest risk of adverse drug events (ADE) and drug-drug interactions . [ 61 ] [ 62 ] [ 63 ] As of 2016 [update] , the Medication Appropriateness Tool for Comorbid Health conditions during Dementia (MATCH-D) is the only tool available specifically for people with dementia, and it also cautions against polypharmacy and complex medication regimens. [ 64 ] [ 65 ]
Barriers faced by both physicians and patients have made it challenging to apply deprescribing strategies. [ 66 ] For physicians, these include fear of consequences of deprescribing, the prescriber's own confidence in their skills and knowledge to deprescribe, reluctance to alter medications that are prescribed by other practitioners, the feasibility of deprescribing, lack of access to all of a patient's clinical notes, and the complexity of having multiple providers. [ 66 ] [ 67 ] [ 68 ] For patients who are prescribed or require the medication, barriers include attitudes or beliefs about the medications, inability to communicate with physicians, fears and uncertainties surrounding deprescribing, and influence of physicians, family, and the media. [ 66 ] Barriers can include other health professionals or carers, such as in residential care, believing that the medicines are required. [ 69 ]
In people with multiple long-term conditions (multimorbidity) and polypharmacy, deprescribing represents a complex challenge, as clinical guidelines are usually developed for single conditions. In these cases tools and guidelines like the Beers Criteria and STOPP/START can be used safely by clinicians, but not all patients will necessarily benefit from stopping their medication. Greater clarity about how far clinicians can go beyond the guidelines, and about the responsibility they need to take, could help them prescribe and deprescribe for complex cases. Further factors that can help clinicians tailor their decisions to the individual are: access to detailed data on the people in their care (including their backgrounds and personal medical goals), discussing plans to stop a medicine when it is first prescribed, and a good relationship that involves mutual trust and regular discussions of progress. Furthermore, longer appointments for prescribing and deprescribing would allow time to explain the process of deprescribing, explore related concerns, and support making the right decisions. [ 70 ] [ 71 ]
The effectiveness of specific interventions to improve the appropriate use of polypharmacy, such as pharmaceutical care and computerised decision support, is unclear. [ 10 ] This is due to the low quality of current evidence surrounding these interventions. [ 10 ] High quality evidence is needed to make any conclusions about the effects of such interventions in any environment, including in care homes. [ 72 ] Deprescribing is not influenced by whether medicines are prescribed through a paper-based or an electronic system. [ 73 ] Deprescribing rounds have been proposed as a potentially successful methodology for reducing polypharmacy. [ 74 ] Sharing of positive outcomes from physicians who have implemented deprescribing, increased communication between all practitioners involved in patient care, higher compensation for time spent deprescribing, and clear deprescribing guidelines can help enable the practice of deprescribing. [ 68 ] Despite the difficulties, a recent blinded study of deprescribing reported that participants used an average of two fewer medicines each after 12 months, showing again that deprescribing is feasible. [ 75 ] | https://en.wikipedia.org/wiki/Polypharmacy |
A polyphenic trait is a trait for which multiple, discrete phenotypes can arise from a single genotype as a result of differing environmental conditions. It is therefore a special case of phenotypic plasticity .
There are several types of polyphenism in animals, from having sex determined by the environment to the castes of honey bees and other social insects. Some polyphenisms are seasonal, as in some butterflies which have different patterns during the year, and some Arctic animals like the snowshoe hare and Arctic fox , which are white in winter. Other animals have predator-induced or resource polyphenisms, allowing them to exploit variations in their environment. Some nematode worms can develop either into adults or into resting dauer larvae according to resource availability.
A polyphenism is the occurrence of several phenotypes in a population, the differences between which are not the result of genetic differences. [ 2 ] For example, crocodiles possess a temperature-dependent sex determining polyphenism , where sex is the trait influenced by variations in nest temperature. [ 3 ]
When polyphenic forms exist at the same time in the same panmictic (interbreeding) population they can be compared to genetic polymorphism . [ 4 ] With polyphenism, the switch between morphs is environmental, but with genetic polymorphism the determination of morph is genetic. These two cases have in common that more than one morph is part of the population at any one time. This is rather different from cases where one morph predictably follows another during, for instance, the course of a year. In essence the latter is normal ontogeny where young forms can and do have different forms, colours and habits to adults.
The discrete nature of polyphenic traits differentiates them from traits like weight and height, which are also dependent on environmental conditions but vary continuously across a spectrum. When a polyphenism is present, an environmental cue causes the organism to develop along a separate pathway, resulting in distinct morphologies; thus, the response to the environmental cue is “all or nothing.” The nature of these environmental conditions varies greatly, and includes seasonal cues like temperature and moisture, pheromonal cues, kairomonal cues (signals released from one species that can be recognized by another), and nutritional cues.
Sex-determining polyphenisms allow a species to benefit from sexual reproduction while permitting an unequal gender ratio. This can be beneficial to a species because a large female-to-male ratio maximizes reproductive capacity. However, temperature-dependent sex determination (as seen in crocodiles) limits the range in which a species can exist, and makes the species susceptible to endangerment by changes in weather pattern. [ 3 ] Temperature-dependent sex determination has been proposed as an explanation for the extinction of the dinosaurs. [ 5 ]
Population-dependent and reversible sex determination, found in animals such as the blue wrasse fish, have less potential for failure. In the blue wrasse, only one male is found in a given territory: larvae within the territory develop into females, and adult males will not enter the same territory. If a male dies, one of the females in his territory becomes male, replacing him. [ 5 ] While this system ensures that there will always be a mating couple when two animals of the same species are present, it could potentially decrease genetic variance in a population, for example if the females remain in a single male's territory.
The caste system of insects enables eusociality , the division of labor between non-breeding and breeding individuals. A series of polyphenisms determines whether larvae develop into queens, workers, and, in some cases soldiers. In the case of the ant , P. morrisi , an embryo must develop under certain temperature and photoperiod conditions in order to become a reproductively-active queen. [ 6 ] This allows for control of the mating season but, like sex determination, limits the spread of the species into certain climates .
In bees, royal jelly provided by worker bees causes a developing larva to become a queen . Royal jelly is only produced when the queen is aging or has died. This system is less subject to influence by environmental conditions, yet prevents unnecessary production of queens.
Polyphenic pigmentation is adaptive for insect species that undergo multiple mating seasons each year. Different pigmentation patterns provide appropriate camouflage throughout the seasons, as well as alter heat retention as temperatures change. [ 7 ] Because insects cease growth and development after eclosion , their pigment pattern is invariable in adulthood: thus, a polyphenic pigment adaptation would be less valuable for species whose adult form survives longer than one year. [ 5 ]
Birds and mammals are capable of continued physiological changes in adulthood, and some display reversible seasonal polyphenisms, such as in the Arctic fox , which becomes all white in winter as snow camouflage . [ 5 ]
Predator-induced polyphenisms allow the species to develop in a more reproductively-successful way in a predator 's absence, but to otherwise assume a more defensible morphology. However, this can fail if the predator evolves to stop producing the kairomone to which the prey responds. For example, the midge larvae ( Chaoborus ) that feed on Daphnia cucullata (a water flea ) release a kairomone that Daphnia can detect. When the midge larvae are present, Daphnia grow large helmets that protect them from being eaten. However, when the predator is absent, Daphnia have smaller heads and are therefore more agile swimmers. [ 5 ]
Organisms with resource polyphenisms show alternative phenotypes that allow differential use of food or other resources. One example is the western spadefoot toad , which maximizes its reproductive capacity in temporary desert ponds. While the water is at a safe level, the tadpoles develop slowly on a diet of other opportunistic pond inhabitants. However, when the water level is low and desiccation is imminent, the tadpoles develop a morphology (wide mouth, strong jaw) that permits them to cannibalize. Cannibalistic tadpoles receive better nutrition and thus metamorphose more quickly, avoiding death as the pond dries up. [ 8 ]
Among invertebrates , the nematode Pristionchus pacificus has one morph that primarily feeds on bacteria and a second morph that produces large teeth, enabling it to feed on other nematodes, including competitors for bacterial food. In this species, cues of starvation and crowding by other nematodes, as sensed by pheromones, trigger a hormonal signal that ultimately activates a developmental switch gene that specifies formation of the predatory morph. [ 9 ]
Density-dependent polyphenism allows species to show a different phenotype based on the population density in which it was reared. In Lepidoptera , African armyworm larvae exhibit one of two appearances: the gregarious or solitary phase. Under crowded or "gregarious" conditions, the larvae have black bodies and yellow stripes along their bodies. However, under solitary conditions, they have green bodies with a brown stripe down their backs. The different phenotypes emerge during the third instar and remain until the last instar. [ 10 ]
Under conditions of stress such as crowding and high temperature, L2 larvae of some free-living nematodes such as Caenorhabditis elegans can switch development to the so-called dauer larva state, instead of going through the normal molts into a reproductive adult. These dauer larvae are a stress-resistant, non-feeding, long-lived stage, enabling the animals to survive harsh conditions. On return to favorable conditions, the animal resumes reproductive development from the L3 stage onwards.
A mechanism has been proposed for the evolutionary development of polyphenisms: [ 7 ]
Evolution of novel polyphenisms through this mechanism has been demonstrated in the laboratory. Suzuki and Nijhout used an existing mutation ( black ) in a monophenic green hornworm ( Manduca sexta ) that causes a black phenotype. They found that if larvae from an existing population of black mutants were raised at 20˚C, then all the final instar larvae were black; but if the larvae were instead raised at 28˚C, the final instar larvae ranged in color from black to green. By selecting for larvae that were black if raised at 20˚C but green if raised at 28˚C, they produced a polyphenic strain after thirteen generations. [ 11 ]
This fits the model described above because a new mutation (black) was required to reveal pre-existing genetic variation and to permit selection. Furthermore, the production of a polyphenic strain was only possible because of background variation within the species: two alleles, one temperature-sensitive and one stable, were present for a single gene upstream of black (in the pigment production pathway) before selection occurred. The temperature-sensitive allele was not observable because at high temperatures, it caused an increase in green pigment in hornworms that were already bright green. However, introduction of the black mutant caused the temperature-dependent changes in pigment production to become obvious. The researchers could then select for larvae with the temperature-sensitive allele , resulting in a polyphenism. [ citation needed ] | https://en.wikipedia.org/wiki/Polyphenism |
Polyphenylene sulfide (PPS) is an organic polymer consisting of aromatic rings linked by sulfides . Synthetic fiber and textiles derived from this polymer resist chemical and thermal attack. PPS is used in filter fabric for coal boilers , papermaking felts , electrical insulation , film capacitors , specialty membranes , gaskets , and packings . PPS is the precursor to a conductive polymer of the semi-flexible rod polymer family. The PPS, which is otherwise insulating, can be converted to the semiconducting form by oxidation or use of dopants . [ 2 ]
Polyphenylene sulfide is an engineering plastic , commonly used today as a high-performance thermoplastic . [ 3 ] PPS can be molded, extruded, or machined to tight tolerances. In its pure solid form, it may be opaque white to light tan in color. Maximum service temperature is 218 °C (424 °F). PPS has not been found to dissolve in any solvent at temperatures below approximately 200 °C (392 °F). [ citation needed ]
An easy way to identify the compound is by the metallic sound it makes when struck.
PPS is marketed under different brand names by different manufacturers. The major industry players are China Lumena New Materials , Solvay , Kureha , HDC Polyall , Celanese , DIC Corporation , Toray Industries , Zhejiang NHU Special Materials, SABIC , and Tosoh . [ 4 ] Other manufacturers include Chengdu Letian Plastics, Lion Idemitsu Composites, and Initz (a joint venture of SK Chemicals and Teijin ). [ 5 ]
The following are examples of brand names by manufacturer and PPS type:
PPS is one of the most important high temperature thermoplastic polymers because it exhibits a number of desirable properties. These properties include resistance to heat, acids , alkalies , mildew , bleaches , aging , sunlight , and abrasion . It absorbs only small amounts of solvents and resists dyeing .
The Federal Trade Commission definition for sulfur fiber is "A manufactured fiber in which the fiber-forming substance is a long chain synthetic polysulfide in which at least 85% of the sulfide (–S–) linkages are attached directly to two (2) aromatic rings." The generic name for this synthetic fiber is Sulfar. [ 6 ]
The PPS (polyphenylene sulfide) polymer is formed by reaction of sodium sulfide with 1,4-dichlorobenzene :
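A representative overall equation for this polycondensation (a reconstruction, not quoted from the cited sources) is:
n Na 2 S + n C 6 H 4 Cl 2 → [C 6 H 4 S] n + 2n NaCl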
The process for commercially producing this material was initially developed by Dr. H. Wayne Hill Jr. and James T. Edmonds at Phillips Petroleum . [ 7 ] N-Methyl-2-pyrrolidone (NMP) is used as the reaction solvent because it is stable at the high temperatures required for the synthesis and it dissolves both the sulfiding agent and the oligomeric intermediates.
Linear, high-molecular-weight PPS that is capable of being extruded into film or melt spun into fiber was invented by Robert W. Campbell. [ 8 ]
The first U.S. commercial sulfur fiber was produced in 1983 by Phillips Fibers Corporation, a subsidiary of Phillips 66 . [ 2 ] | https://en.wikipedia.org/wiki/Polyphenylene_sulfide |
A polyphosphate is a salt or ester of polymeric oxyanions formed from tetrahedral PO 4 ( phosphate ) structural units linked together by sharing oxygen atoms. Polyphosphates can adopt linear or cyclic (ring) structures. In biology, the polyphosphate esters ADP and ATP are involved in energy storage. A variety of polyphosphates find application in mineral sequestration in municipal waters, generally being present at 1 to 5 ppm. [ 1 ] GTP , CTP , and UTP are also nucleotides important in protein synthesis, lipid synthesis, and carbohydrate metabolism, respectively.
Polyphosphates are also used as food additives , marked E452 .
The structure of tripolyphosphoric acid illustrates the principles which define the structures of polyphosphates. It consists of three tetrahedral PO 4 units linked together by sharing oxygen centres. For the linear chains, the end phosphorus groups share one oxide and the others phosphorus centres share two oxide centres. The corresponding phosphates are related to the acids by loss of the acidic protons. In the case of the cyclic trimer each tetrahedron shares two vertices with adjacent tetrahedra.
Sharing of three corners is also possible; this motif represents crosslinking of the linear polymer. Crosslinked polyphosphates can adopt sheet structures analogous to those of the phyllosilicates , but such structures occur only under extreme conditions.
Polyphosphates arise by polymerization of phosphoric acid derivatives. The process begins with two phosphate units coming together in a condensation reaction (written here for the hydrogenphosphate anion; the exact protonation states depend on pH):

2 HPO 4 2− ⇌ P 2 O 7 4− + H 2 O
The condensation is shown as an equilibrium because the reverse reaction, hydrolysis , is also possible. The process may continue in steps; at each step another (PO 3 ) − unit is added to the chain, as indicated by the part in brackets in the illustration of polyphosphoric acid. P 4 O 10 can be seen as the end product of condensation reactions, where each tetrahedron shares three corners with the others. Conversely, a complex mix of polymers is produced when a small amount of water is added to phosphorus pentoxide.
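As an illustration of the fully hydrolysed limit (a standard textbook equation, included here for context rather than taken from the cited text), complete hydration of phosphorus pentoxide gives orthophosphoric acid:

P 4 O 10 + 6 H 2 O → 4 H 3 PO 4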
Polyphosphates are weak bases . A lone pair of electrons on an oxygen atom can be donated to a hydrogen ion (proton) or a metal ion in a typical Lewis acid - Lewis base interaction. This has profound significance in biology. For instance, adenosine triphosphate is about 25% protonated in aqueous solution at pH 7. [ 2 ]
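As a rough check on that figure (a back-of-the-envelope estimate, assuming a pK a of about 6.5 for the HATP 3− /ATP 4− equilibrium, a commonly quoted value rather than one taken from the cited reference):

fraction protonated ≈ 1 / (1 + 10^(pH − pK a )) = 1 / (1 + 10^(7.0 − 6.5)) ≈ 0.24

i.e. roughly 25% at pH 7, consistent with the statement above.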
Further protonation occurs at lower pH values.
ATP forms chelate complexes with metal ions. The stability constant for the equilibrium

ATP 4− + Mg 2+ ⇌ [Mg(ATP)] 2−
is particularly large. [ 3 ] The formation of the magnesium complex is a critical element in the process of ATP hydrolysis, as it weakens the link between the terminal phosphate group and the rest of the molecule. [ 2 ] [ 4 ]
The energy released in ATP hydrolysis,

ATP + H 2 O → ADP + P i ,

at ΔG ≈ −36.8 kJ mol −1 , is large by biological standards. P i stands for inorganic phosphate, which is protonated at biological pH. However, it is not large by inorganic standards. The term "high energy" refers to the fact that it is high relative to the amount of energy released in the organic chemical reactions that can occur in living systems.
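For scale (a simple unit conversion, not from the cited sources): −36.8 kJ mol −1 corresponds to about −8.8 kcal mol −1 , or roughly −0.38 eV per molecule hydrolysed.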
High molecular weight polyphosphates are well known. [ 5 ] One derivative is the glassy (i.e., amorphous) Graham's salt . Crystalline high molecular weight polyphosphates include Kurrol's salt and Maddrell's salt (a white powder that is practically insoluble in water). These species have the formula [NaPO 3 ] n [NaPO 3 (OH)] 2 , where n can be as great as 2000. In terms of their structures, these polymers consist of PO 3 − "monomers", with the chains terminated by protonated phosphates. [ 6 ]
High-polymeric inorganic polyphosphates were found in living organisms by L. Liberman in 1890. These compounds are linear polymers containing a few to several hundred residues of orthophosphate linked by energy-rich phosphoanhydride bonds.
Previously, polyphosphate was regarded either as a " molecular fossil " or merely as a phosphorus and energy source supporting the survival of microorganisms under extreme conditions. These compounds are now known to also have regulatory roles, and to occur in representatives of all kingdoms of living organisms, participating in metabolic correction and control at both the genetic and enzymatic levels. Polyphosphate is directly involved in the switch from the genetic program characteristic of the exponential growth stage of bacteria to the program of cell survival under stationary conditions, "a life in the slow lane". Polyphosphates also participate in many regulatory mechanisms in bacteria.
In humans, polyphosphates have been shown to play a key role in blood coagulation . Produced and released by platelets , [ 7 ] they activate blood coagulation factor XII , which is essential for blood clot formation. Factor XII, also called Hageman factor, initiates fibrin formation and the generation of a proinflammatory mediator, bradykinin , that contributes to leakage from the blood vessels and thrombosis. [ 8 ] [ 9 ] Bacterial-derived polyphosphates impair the host immune response during infection, and targeting polyphosphates with recombinant exopolyphosphatase improves sepsis survival in mice. [ 10 ] Inorganic polyphosphates also play a crucial role in the tolerance of yeast cells to toxic heavy metal cations. [ 11 ]
Sodium polyphosphate (E452(i)), potassium polyphosphate (E452(ii)), sodium calcium polyphosphate (E452(iii)) and calcium polyphosphate (E452(iv)) are used as food additives (emulsifiers, humectants, sequestrants, stabilisers, and thickeners). [ 12 ] They are not known to pose any potential health risk other than those generally attributed to other phosphate sources (including those naturally occurring in food). While concerns have been raised regarding detrimental effects on the bones and cardiovascular diseases, as well as hyperphosphatemia , these seem to be relevant only for excessive consumption of phosphate sources. Overall, reasonable consumption (up to 40 mg phosphate per kg of body weight per day) appears to pose no health risk. [ 13 ] [ 14 ] | https://en.wikipedia.org/wiki/Polyphosphate
Polyphosphate-accumulating organisms (PAOs) are a group of microorganisms that, under certain conditions, facilitate the removal of large amounts of phosphorus from their environments. The most studied example of this phenomenon is in polyphosphate-accumulating bacteria (PAB) found in a type of wastewater processing known as enhanced biological phosphorus removal (EBPR); however, phosphate hyperaccumulation has also been found in other settings, such as soils and marine environments, and in non-bacterial organisms such as fungi and algae. [ 1 ] PAOs accomplish this removal of phosphate by accumulating it within their cells as polyphosphate . PAOs are by no means the only microbes that can accumulate phosphate within their cells; in fact, the production of polyphosphate is a widespread ability among microbes. However, PAOs have many characteristics, not shared by other polyphosphate-accumulating organisms, that make them amenable to use in wastewater treatment . Chief among these, for classical PAOs, is the ability to consume simple carbon compounds (as an energy source) without an external electron acceptor (such as nitrate or oxygen), by generating energy from internally stored polyphosphate and glycogen. Many bacteria cannot consume carbon without an energetically favorable electron acceptor, so PAOs gain a selective advantage within the mixed microbial community present in the activated sludge. [ 2 ] For this reason, wastewater treatment plants that operate for enhanced biological phosphorus removal place an anaerobic tank (with no nitrate or oxygen present as an external electron acceptor) ahead of the other tanks, giving PAOs preferential access to the simple carbon compounds in the influent wastewater.
The classical or "canonical" behavior of PAOs is considered to be the release of phosphate (as orthophosphate ) to the environment during anaerobic conditions, as intracellular polyphosphate reserves are broken down to supply the energy needed to take up volatile fatty acids (VFAs) and convert them, with the help of glycogen, into polyhydroxyalkanoates (PHA). [ 3 ] This is followed, during oxic conditions, by the consumption of the stored PHA and the uptake of environmental orthophosphate to regenerate the polyphosphate reserves within the cell. [ 3 ]
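Purely as an illustrative sketch of this alternating cycle (the pool sizes and stoichiometric factors below are hypothetical toy numbers, not taken from the cited studies or calibrated to any real EBPR plant), a few lines of Python show how repeated anaerobic release and aerobic uptake can give net phosphorus removal from the bulk liquid:

# Toy model of the canonical PAO cycle; all numbers are hypothetical.
def anaerobic_step(cell, bulk, vfa_taken=1.0):
    """VFA uptake stored as PHA, paid for by polyphosphate (P released) and glycogen."""
    cell["PHA"] += vfa_taken              # store carbon as PHA
    cell["polyP"] -= 0.5 * vfa_taken      # hydrolyse polyphosphate for energy...
    bulk["P"] += 0.5 * vfa_taken          # ...releasing orthophosphate to the bulk
    cell["glycogen"] -= 0.25 * vfa_taken  # glycogen supplies additional energy/reducing power

def aerobic_step(cell, bulk, pha_used=1.0):
    """PHA oxidised with oxygen; phosphate taken up to rebuild polyphosphate and glycogen."""
    cell["PHA"] -= pha_used
    p_uptake = 0.7 * pha_used             # net uptake exceeds the anaerobic release,
    bulk["P"] -= p_uptake                 # which is what removes phosphorus overall
    cell["polyP"] += p_uptake
    cell["glycogen"] += 0.25 * pha_used

cell = {"polyP": 5.0, "PHA": 0.0, "glycogen": 2.0}
bulk = {"P": 1.0}
for cycle in range(3):                    # alternate anaerobic and aerobic phases
    anaerobic_step(cell, bulk)
    aerobic_step(cell, bulk)
    print(cycle, round(bulk["P"], 2), round(cell["polyP"], 2))
# Bulk P falls each cycle (1.0 -> 0.8 -> 0.6 -> 0.4) while intracellular polyP grows.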
Some PAOs have been found to use alternative routes for accumulating polyphosphate, in particular routes that do not involve storing PHA or glycogen. [ 4 ] [ 5 ] This behaviour is generally believed to be more common in extracellular environments rich in organic compounds, which therefore contain fermentable substrates such as amino acids and sugars. [ 6 ] However, the exact mechanisms by which these microbes accumulate and use polyphosphate are not well understood. [ 5 ]
Candidatus Phosphoribacter is a bacterial genus that has been found to be the dominant PAO associated with wastewater treatment worldwide, and often to contribute more to the biological removal of phosphorus than Candidatus Accumulibacter, contrary to previous understanding. [ 4 ] [ 7 ] [ 5 ] The genus is a non-canonical (fermentative, or "fPAO") PAO, and its members universally lack the genetic potential to store PHA. [ 4 ] [ 8 ] The genus has largely been found to be capable of producing the fermentation products acetate, lactate, alanine, and succinate. [ 8 ] [ 9 ] Additionally, it has been suggested that the amino acids lysine, arginine, histidine, leucine, isoleucine, valine and phenylalanine may replace the canonical role of PHA as an energy substrate during oxic conditions, based on genomic potential and similarity to the behavior of other microbial metabolisms. [ 4 ] Alternatively, the compound cyanophycin may be used as an energy substrate, given the ubiquity of cyanophycin-metabolizing enzymes encoded in the species. [ 4 ]
Candidatus Accumulibacter phosphatis is one of the most well-studied PAOs, and is responsible for the development of the classical PAO metabolic model which Ca. Phosphoribacter later contradicted. [ 10 ] Formerly considered the most important PAO in waste treatment, the bacterium is highly abundant in wastewater treatment plants globally. [ 5 ] [ 11 ] It can consume a range of carbon compounds, such as acetate and propionate, under anaerobic conditions and store these compounds as polyhydroxyalkanoates (PHA), which it consumes as a carbon and energy source for growth using oxygen or nitrate as an electron acceptor. Historically, the hyperaccumulation of phosphate by Ca. Accumulibacter was seen as a stress response, but it is now suggested that this behavior may play an ecological role. [ 12 ] Together, Ca. Accumulibacter and Ca. Phosphoribacter are considered to account for 24–70% of the phosphorus removed from wastewater during treatment. [ 7 ]
Candidatus Dechloromonas species phosphoritropha and phosphorivorans are PAOs with the classical metabolic genotype. [ 13 ] Dechloromonas has been found in high abundance in wastewater treatment plants across the world. [ 14 ] [ 15 ] [ 16 ] [ 5 ] The two species described here, Dechloromonas phosphoritropha and phosphorivorans, are the two most abundant species of the genus in waste treatment. [ 17 ]
Candidatus Accumulimonas is a PAO with the classical metabolic phenotype. [ 18 ] [ 19 ]
Microlunatus phosphovorus is a PAO species with a likely non-canonical metabolism, although the exact mechanisms have not been determined. [ 20 ] [ 21 ] [ 22 ] Belonging to the same phylum as Ca. Phosphoribacter , these two actinobacterial organisms exhibit similar metabolisms; however, M. phosphovorus has been suggested to hyperaccumulate over ten times as much polyphosphate per unit of cell dry weight as Ca. Phosphoribacter or the proteobacterial PAOs. [ 21 ]
Some unnamed species of the Pseudomonas genus have been observed to exhibit PAO phenotypes. [ 23 ]
Paracoccus denitrificans has been observed to exhibit a non-canonical PAO phenotype. [ 23 ] [ 24 ]
Quatrionicoccus australiensis is a bacterium isolated from activated sludge which has been found to accumulate polyphosphate and PHA, and thus likely has a classical PAO phenotype. [ 25 ] [ 1 ]
Malikia granosa is a bacterium isolated from activated sludge which has been found to accumulate polyphosphate and PHA, and thus likely has a classical PAO phenotype. [ 26 ]
Lampropedia species, isolated from EBPR activated sludge, have been found to accumulate polyphosphate and PHA, though not to extreme degrees. [ 27 ]
Candidatus Microthrix , identified in more than one EBPR activated sludge source, is a filamentous bacterium suspected of being responsible for phosphate removal during the bulking phase of EBPR, when other PAOs decrease in abundance. [ 28 ]
Gemmatimonas aurantiaca is a bacteria isolated from activated sludge that has been observed to accumulate polyphosphate granules. [ 29 ] | https://en.wikipedia.org/wiki/Polyphosphate-accumulating_organisms |
A polyphyletic group is an assemblage that includes organisms with mixed evolutionary origin but does not include their most recent common ancestor. [ 1 ] The term is often applied to groups that share similar features known as homoplasies , which are explained as a result of convergent evolution . The arrangement of the members of a polyphyletic group is called a polyphyly / ˈ p ɒ l ɪ ˌ f aɪ l i / . [ 2 ] It is contrasted with monophyly and paraphyly .
For example, the biological characteristic of warm-bloodedness evolved separately in the ancestors of mammals and the ancestors of birds; "warm-blooded animals" is therefore a polyphyletic grouping. [ 3 ] Other examples of polyphyletic groups are algae , C4 photosynthetic plants , [ 4 ] and edentates . [ 5 ]
Many taxonomists aim to avoid homoplasies in grouping taxa together, with a goal to identify and eliminate groups that are found to be polyphyletic. This is often the stimulus for major revisions of the classification schemes. Researchers concerned more with ecology than with systematics may take polyphyletic groups as legitimate subject matter; the similarities in activity within the fungus group Alternaria , for example, can lead researchers to regard the group as a valid genus while acknowledging its polyphyly. [ 6 ] In recent research, the concepts of monophyly, paraphyly, and polyphyly have been used in deducing key genes for barcoding of diverse groups of species. [ 7 ]
The term polyphyly , or polyphyletic , derives from the two Ancient Greek words πολύς ( polús ) 'many, a lot of', and φῦλον ( phûlon ) 'genus, species', [ 8 ] [ 9 ] and refers to the fact that a polyphyletic group includes organisms (e.g., genera, species) arising from multiple ancestral sources.
Conversely, the term monophyly , or monophyletic , employs the ancient Greek adjective μόνος ( mónos ) 'alone, only, unique', [ 8 ] [ 9 ] and refers to the fact that a monophyletic group includes organisms consisting of all the descendants of a unique common ancestor.
By comparison, the term paraphyly , or paraphyletic , uses the ancient Greek preposition παρά ( pará ) 'beside, near', [ 8 ] [ 9 ] and refers to the situation in which one or several monophyletic subgroups are left apart from all other descendants of a unique common ancestor.
In many schools of taxonomy , the recognition of polyphyletic groups in a classification is discouraged. Monophyletic groups (that is, clades ) are considered by these schools of thought to be the only valid groupings of organisms because they are diagnosed ("defined", in common parlance) on the basis of synapomorphies , while paraphyletic or polyphyletic groups are not. From the perspective of ancestry, clades are simple to define in purely phylogenetic terms without reference to clades previously introduced: a node-based clade definition , for example, could be "All descendants of the last common ancestor of species X and Y". On the other hand, polyphyletic groups can be delimited as a conjunction of several clades, for example "the flying vertebrates consist of the bat, bird, and pterosaur clades".
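To make these tree-based notions concrete, the following short Python sketch (the toy phylogeny, taxon names and helper functions are invented for this illustration and are not taken from the cited sources) tests whether a set of tip taxa is monophyletic, i.e. whether it contains every descendant of its most recent common ancestor; a grouping such as the "flying vertebrates" fails the test and is therefore not a clade:

# Rooted tree given as child -> parent; the root maps to None.
# Toy, purely illustrative phylogeny.
PARENT = {
    "Amniota": None,
    "Mammalia": "Amniota", "bat": "Mammalia", "whale": "Mammalia",
    "Sauropsida": "Amniota",
    "Aves": "Sauropsida", "sparrow": "Aves", "penguin": "Aves",
    "Pterosauria": "Sauropsida", "pterodactyl": "Pterosauria",
}

def ancestors(node):
    """Path from a node up to the root, inclusive."""
    path = []
    while node is not None:
        path.append(node)
        node = PARENT[node]
    return path

def mrca(taxa):
    """Most recent common ancestor of a set of tips."""
    paths = [ancestors(t) for t in taxa]
    common = set(paths[0]).intersection(*map(set, paths[1:]))
    # Walking up from any tip, the first node shared by all paths is the MRCA.
    return next(n for n in paths[0] if n in common)

def tips_under(node):
    """All leaf taxa descending from a node."""
    children = {c for c, p in PARENT.items() if p == node}
    if not children:
        return {node}
    return set().union(*(tips_under(c) for c in children))

def is_monophyletic(taxa):
    """A tip set is monophyletic iff it equals the full set of tips under its MRCA."""
    return tips_under(mrca(taxa)) == set(taxa)

print(is_monophyletic({"bat", "sparrow", "pterodactyl"}))  # False: "flying vertebrates" is not a clade
print(is_monophyletic({"sparrow", "penguin"}))             # True: Aves is a clade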
From a practical perspective, grouping species monophyletically facilitates prediction far more than does polyphyletic grouping. For example, classifying a newly discovered grass in the monophyletic family Poaceae , the true grasses, immediately results in numerous predictions about its structure and its developmental and reproductive characteristics, that are synapomorphies of this family. In contrast, Linnaeus' assignment of plants with two stamens to the polyphyletic class Diandria, while practical for identification, turns out to be useless for prediction, since the presence of exactly two stamens has developed convergently in many groups. [ 10 ]
Species have a special status in systematics as being an observable feature of nature itself and as the basic unit of classification. [ 11 ] It is usually implicitly assumed that species are monophyletic (or at least paraphyletic ). However, hybrid speciation arguably leads to polyphyletic species. [ 12 ] Hybrid species are a common phenomenon in nature, particularly in plants where polyploidy allows for rapid speciation. [ 13 ] Some cladist authors do not consider species to possess the property of "-phyly", which they assert applies only to groups of species. [ 14 ] [ 15 ] | https://en.wikipedia.org/wiki/Polyphyly |