The Platinum-Cobalt Scale ( Pt/Co scale or APHA-Hazen Scale ) is a color scale that was introduced in 1892 by the chemist Allen Hazen (1869–1930). The index was developed as a way to evaluate pollution levels in waste water. It has since become a common method of comparing the intensity of yellow-tinted samples. It is specific to the color yellow and is based on dilutions of a 500 ppm platinum–cobalt solution. The colour produced by one milligram of platinum cobalt dissolved in one litre of water is fixed as one unit of colour on the platinum-cobalt scale. The ASTM gives a detailed description and procedures in ASTM Designation D1209, "Standard Test Method for Color of Clear Liquids (Platinum-Cobalt Scale)". [ 1 ] [ 2 ] [ 3 ]
Colour may be reported on a water quality report using this scale. [ 4 ]
| https://en.wikipedia.org/wiki/Pt/Co_scale |
Platinum(IV) bromide is the inorganic compound with the formula PtBr 4 . It is a brown solid. It is a little-used compound mainly of interest for academic research. [ 2 ] It is a component of a reagent used in qualitative inorganic analysis . [ 3 ]
In terms of structure, the compound is an inorganic polymer consisting of interconnected PtBr 6 octahedra.
| https://en.wikipedia.org/wiki/PtBr4 |
Platinum(II) chloride describes the inorganic compounds with the formula Pt Cl 2 . They are precursors used in the preparation of other platinum compounds. Platinum(II) chloride exists in two crystalline forms ( polymorphs ), but the main properties are somewhat similar: dark brown, insoluble in water, diamagnetic , and odorless.
The structures of PtCl 2 and PdCl 2 are similar. These dichlorides exist in both polymeric, or "α", and hexameric, or "β" structures. The β form converts to the α form at 500 °C. In the β form, the Pt-Pt distances are 3.32–3.40 Å, indicative of some bonding between the pairs of metals. In both forms of PtCl 2 , each Pt center is four-coordinate, being surrounded by four chloride ligands . Complementarily, each Cl center is two-coordinate, being connected to two platinum atoms. [ 2 ] The structure of α-PtCl 2 is reported to be disordered and contain edge- and/or corner-sharing square-planar PtCl 4 units. [ 3 ]
β-PtCl 2 is prepared by heating chloroplatinic acid to 350 °C in air. [ 4 ]
This method is convenient since the chloroplatinic acid is generated readily from Pt metal. Aqueous solutions of H 2 PtCl 6 can also be reduced with hydrazinium salts, but this method is more laborious than the thermal route of Kerr and Schweizer.
Although PtCl 2 can form when platinum metal contacts hot chlorine gas, this process suffers from over-chlorination to give PtCl 4 . Berzelius and later Wöhler and Streicher showed that upon heating to 450 °C, this Pt(IV) compound decomposes to the Pt(II) derivative: [ 5 ] [ 6 ]
Transformations such as this are "driven" by entropy , the free energy gained upon the release of a gaseous product from a solid. Upon heating to still higher temperatures, PtCl 2 releases more chlorine to give metallic Pt. This conversion is the basis of the gravimetric assay of the purity of the PtCl 2 product.
Most reactions of PtCl 2 proceed via treatment with ligands (L) to give molecular derivatives. These transformations entail depolymerization via cleavage of Pt-Cl-Pt linkages:
Addition of ammonia gives initially "PtCl 2 (NH 3 ) 2 ", " Magnus's green salt ", also described as [Pt(NH 3 ) 4 ][PtCl 4 ].
Many complexes have been described, the following are illustrative: [ 7 ]
Several of these compounds are of interest in homogeneous catalysis in the service of organic synthesis or as anti-cancer drugs. | https://en.wikipedia.org/wiki/PtCl2 |
Bis(triphenylphosphine)platinum chloride is a metal phosphine complex with the formula PtCl 2 [P(C 6 H 5 ) 3 ] 2 . Cis- and trans isomers are known. The cis isomer is a white crystalline powder, while the trans isomer is yellow. [ 3 ] Both isomers are square planar about the central platinum atom. The cis isomer is used primarily as a reagent for the synthesis of other platinum compounds.
The cis isomer is prepared by heating solutions of platinum(II) chlorides with triphenylphosphine . For example, starting from potassium tetrachloroplatinate :
The trans isomer is prepared by treating potassium trichloro(ethylene)platinate(II) ( Zeise's salt ) with triphenylphosphine : [ 3 ]
With heating or in the presence of excess PPh 3 , the trans isomer converts to the cis complex. The latter complex is the thermodynamic product due to triphenylphosphine being a strong trans effect ligand.
In cis -bis(triphenylphosphine)platinum chloride, the average Pt-P has a bond distance of 2.261 Å and the average Pt-Cl has a bond distance of 2.346 Å. [ 2 ] In trans -bis(triphenylphosphine)platinum chloride, the Pt-P distance is 2.316 Å and the Pt-Cl distance is 2.300 Å. [ 1 ]
The complex also undergoes photoisomerization . | https://en.wikipedia.org/wiki/PtCl2(PPh3)2 |
Platinum(IV) chloride is the inorganic compound of platinum and chlorine with the empirical formula PtCl 4 . This brown solid features platinum in the +4 oxidation state.
Typical of Pt(IV), the metal centers adopt an octahedral coordination geometry , {PtCl 6 }. This geometry is achieved by forming a polymer wherein half of the chloride ligands bridge between the platinum centers. Because of its polymeric structure, PtCl 4 dissolves only upon breaking the chloride bridging ligands . Thus, addition of HCl gives H 2 PtCl 6 . Lewis base adducts of Pt(IV) of the type cis-PtCl 4 L 2 are known, but most are prepared by oxidation of the Pt(II) derivatives. [ 2 ]
PtCl 4 is mainly encountered in the handling of chloroplatinic acid , obtained by dissolving of Pt metal in aqua regia . Heating H 2 PtCl 6 to 220 °C gives impure PtCl 4 : [ 3 ]
A purer product can be produced by heating under chlorine gas at 250 °C. [ 4 ]
If excess acid is removed, PtCl 4 crystallizes from aqueous solution as large red crystals of the pentahydrate PtCl 4 ·5(H 2 O), [ 5 ] which can be dehydrated by heating to about 300 °C in a current of dry chlorine. The pentahydrate is stable and is used as the commercial form of PtCl 4 .
Treatment of PtCl 4 with aqueous base gives the [Pt(OH) 6 ] 2− ion. With methyl Grignard reagents followed by partial hydrolysis, PtCl 4 converts to the cuboidal cluster [Pt(CH 3 ) 3 (OH)] 4 . [ 6 ] Upon heating PtCl 4 evolves chlorine to give PtCl 2 :
The heavier halides, PtBr 4 and PtI 4 , are also known. | https://en.wikipedia.org/wiki/PtCl4 |
Platinum tetrafluoride is the inorganic compound with the chemical formula PtF 4 . In the solid state, the compound features platinum(IV) in octahedral coordination geometry . [ 2 ]
The compound was first reported by Henri Moissan by the fluorination of platinum metal in the presence of hydrogen fluoride . [ 3 ] A modern synthesis involves thermal decomposition of platinum hexafluoride . [ 4 ]
Platinum tetrafluoride vapour at 298.15 K consists of individual molecules. The enthalpy of sublimation is 210 kJ·mol −1 . [ 5 ] Original analysis of powdered PtF 4 suggested a tetrahedral molecular geometry , but later analysis by several methods identified it as octahedral, with four of the six fluorines on each platinum bridging to adjacent platinum centres. [ 6 ]
A solution of platinum tetrafluoride in water is coloured reddish brown, but it rapidly decomposes, releasing heat and forming an orange coloured platinum dioxide hydrate precipitate and fluoroplatinic acid. [ 7 ] When heated to a red hot temperature, platinum tetrafluoride decomposes to platinum metal and fluorine gas. When heated in contact with glass, silicon tetrafluoride gas is produced along with the metal. [ 7 ]
Platinum tetrafluoride can form adducts with selenium tetrafluoride and bromine trifluoride . [ 7 ] Volatile crystalline adducts are also formed in combination with BF 3 , PF 3 , BCl 3 , and PCl 3 . [ 7 ]
The fluoroplatinates are salts containing the PtF 6 2− ion. Fluoroplatinic acid H 2 PtF 6 forms yellow crystals that absorb water from the air. The ammonium, sodium, magnesium, calcium, strontium, and rare-earth (including lanthanum) fluoroplatinate salts are soluble in water. [ 7 ] The potassium, rubidium, caesium, and barium salts are insoluble in water. [ 7 ] | https://en.wikipedia.org/wiki/PtF4 |
Platinum hexafluoride is the chemical compound with the formula Pt F 6 , and is one of seventeen known binary hexafluorides . It is a dark-red volatile solid that forms a red gas. The compound is a unique example of platinum in the +6 oxidation state. With only four d-electrons, it is paramagnetic with a triplet ground state. PtF 6 is a strong fluorinating agent and one of the strongest oxidants, capable of oxidising xenon and O 2 . PtF 6 is octahedral in both the solid state and in the gaseous state. The Pt-F bond lengths are 185 picometers . [ 1 ]
PtF 6 was first prepared by reaction of fluorine with platinum metal. [ 2 ] This route remains the method of choice. [ 1 ]
PtF 6 can also be prepared by disproportionation of the pentafluoride ( PtF 5 ), with the tetrafluoride ( PtF 4 ) as a byproduct. The required PtF 5 can be obtained by fluorinating PtCl 2 :
Platinum hexafluoride can gain an electron to form the hexafluoroplatinate anion, PtF 6 − . It is formed by reacting platinum hexafluoride with species that are otherwise very difficult to oxidise, for example with xenon to form " XePtF 6 " (actually a mixture of XeFPtF 5 , XeFPt 2 F 11 , and Xe 2 F 3 PtF 6 ), known as xenon hexafluoroplatinate . The discovery of this reaction in 1962 proved that noble gases form chemical compounds. Prior to the experiment with xenon, PtF 6 had been shown to react with oxygen to form [O 2 ] + [PtF 6 ] − , dioxygenyl hexafluoroplatinate . | https://en.wikipedia.org/wiki/PtF6 |
Adams' catalyst , also known as platinum dioxide , is usually represented as platinum (IV) oxide hydrate , PtO 2 •H 2 O. It is a catalyst for hydrogenation and hydrogenolysis in organic synthesis . [ 1 ] This dark brown powder is commercially available. The oxide itself is not an active catalyst, but it becomes active after exposure to hydrogen whereupon it converts to platinum black , which is responsible for reactions.
Adams' catalyst is prepared from chloroplatinic acid H 2 PtCl 6 or ammonium chloroplatinate , (NH 4 ) 2 PtCl 6 , by fusion with sodium nitrate . The first published preparation was reported by V. Voorhees and Roger Adams . [ 2 ] The procedure involves first preparing a platinum nitrate which is then heated to expel nitrogen oxides. [ 3 ]
The resulting brown cake is washed with water to free it from nitrates. The catalyst can either be used as is or dried and stored in a desiccator for later use. Platinum can be recovered from spent catalyst by conversion to ammonium chloroplatinate using aqua regia followed by ammonia .
Adams' catalyst is used for many applications. It has been shown to be valuable for hydrogenation , hydrogenolysis , dehydrogenation , and oxidation reactions. During the reaction, platinum metal ( platinum black ) is formed, which has been identified as the active catalyst. [ 4 ] [ 5 ] Hydrogenation occurs with syn stereochemistry when the catalyst is used on an alkyne , resulting in a cis-alkene. Some of the most important transformations include the hydrogenation of ketones to alcohols or ethers (the latter product forming in the presence of alcohols and acids) [ 6 ] and the reduction of nitro compounds to amines. [ 7 ] However, reductions of alkenes can be performed with Adams' catalyst in the presence of nitro groups without reducing the nitro group. [ 8 ] When reducing nitro compounds to amines, platinum catalysts are preferred over palladium catalysts to minimize hydrogenolysis. The catalyst is also used for the hydrogenolysis of phenyl phosphate esters, a reaction that does not occur with palladium catalysts. The pH of the solvent significantly affects the reaction course, and reactions of the catalyst are often enhanced by conducting the reduction in neat acetic acid, or in solutions of acetic acid in other solvents.
Before development of Adams' catalyst, organic reductions were carried out using colloidal platinum or platinum black. The colloidal catalysts were more active but posed difficulties in isolating reaction products. This led to more widespread use of platinum black. In Adams' own words:
"...Several of the problems I assigned my students involved catalytic reduction. For this purpose we were using as a catalyst platinum black made by the generally accepted best method known at the time. The students had much trouble with the catalyst they obtained in that frequently it proved to be inactive even though prepared by the same detailed procedure which resulted occasionally in an active product. I therefore initiated a research to find conditions for preparing this catalyst with uniform activity." [ 4 ]
Little precaution is necessary with the oxide but, after exposure to H 2 , the resulting platinum black can be pyrophoric . Therefore, it should not be allowed to dry and all exposure to oxygen should be minimized. | https://en.wikipedia.org/wiki/PtO2 |
Platinum diselenide is a transition metal dichalcogenide with the formula PtSe 2 . It is a layered substance that can be split into layers down to three atoms thick. PtSe 2 can behave as a metalloid or as a semiconductor depending on the thickness.
Minozzi was the first to report synthesising platinum diselenide from the elements in 1909. [ 2 ]
Platinum diselenide can be formed by heating thin foils of platinum in selenium vapour at 400 °C. [ 3 ] [ 4 ]
A platinum (111) surface exposed to selenium vapour at 270 °C forms a monolayer of PtSe 2 . [ 5 ]
In addition to these selenization methods, PtSe 2 can be made by precipitation from an aqueous solution of Pt(IV) treated with hydrogen selenide , or by heating platinum tetrachloride with elemental selenium. [ 2 ]
Platinum diselenide occurs naturally as the mineral Sudovikovite, named after the Russian petrologist N.G. Sudovikov (1903–1966). The mineral's hardness is 2 to 2½. Sudovikovite was found in the Srednyaya Padma mine, Velikaya Guba uranium-vanadium deposit, Zaonezhie peninsula, Karelia Republic , Russia. [ 6 ]
Platinum diselenide forms crystals in the cadmium iodide structure. This means that the substance forms layers. Each of the monolayers has a central bed of platinum atoms, with a sheet of selenium atoms above and below. This structure is also called "1T" and is trigonal. The layers are only weakly bonded together, and it is possible to exfoliate layers to bilayers or monolayers. [ 7 ]
In bulk the material is semi-metallic, but when reduced to few layers it becomes a semiconductor. [ 7 ] [ 8 ] The conductivity of the bulk material is 620,000 S/m. [ 9 ]
The XPS spectrum shows a peak at 72.3 eV from the Pt 4f core level, as well as peaks from Pt 5p 3/2 [ 7 ] and from Se 3d 3/2 and 3d 5/2 at 55.19 and 54.39 eV. [ 5 ]
Phonon vibrations are designated by the infrared-active A 2u (Se vibrating out of plane, opposite to Pt) and E u (in-layer vibration, Se opposite to Pt) modes, and the Raman-active A 1g (Se top and bottom atoms moving out of plane in opposite directions, 205 cm −1 ) and E g (in-plane, top and bottom Se atoms moving in opposite directions, 175 cm −1 ) modes. In the Raman spectrum , the A 1g peak is weakened when scattered light polarised perpendicular to the incoming rays is measured. The E g mode is red-shifted when more layers are stacked (166 cm −1 for the bilayer and 155 cm −1 for bulk material), whereas the A 1g peak changes only slightly with thickness. [ 7 ]
The band gap is calculated as 1.2 eV for monolayers, and 0.21 eV for bilayers. For a trilayer or thicker, the substance loses its bandgap and becomes semimetallic. [ 5 ]
PtSe 2 can change its conductance in the presence of particular gases, such as nitrogen dioxide . Within a few seconds, NO 2 adsorbs on the surface of the PtSe 2 material and lowers the resistance. When the gas is removed, the high resistance returns in about a minute. [ 3 ]
The Seebeck coefficient of PtSe 2 is 40 μV/K. [ 10 ]
Although pristine platinum diselenide is nonmagnetic, the presence of platinum vacancies and strain were predicted to induce magnetism. [ 11 ] Later magneto-transport studies [ 12 ] have indeed shown that defective PtSe 2 exhibits magnetic properties. Due to the RKKY interaction between magnetic Pt vacancies, the material shows layer-dependent ferromagnetic or antiferromagnetic behavior.
Monolayers of platinum diselenide show a helical spin texture, which is not expected for centrosymmetric materials such as this. This property could be due to a local dipole-induced Rashba effect , and it makes PtSe 2 a potential spintronics material. [ 13 ]
Water can physisorb to the surface of platinum diselenide with an energy of −0.19 eV, and similarly for oxygen with energy −0.13 eV. Water and oxygen do not react at room temperature, because significant energy would be required to break apart the molecules. [ 9 ]
Palladium diselenide has a different modified pyrite structure. Palladium ditelluride has a similar structure to platinum diselenide. [ 14 ] Platinum disulfide is a semiconductor, and platinum ditelluride is metallic in nature.
More complex substances with platinum and selenium also exist, including the quaternary chalcogenides Rb 2 Pt 3 USe 6 and Cs 2 Pt 3 USe 6 . [ 15 ]
Jacutingaite is a ternary platinum selenide HgPtSe 3 . [ 16 ]
Platinum diselenide can be utilized for broadband photodetectors operating up to the mid-infrared (MIR) region, with stability under ambient conditions. [ 17 ] It can also work as a catalyst, and can be built into field-effect transistors . [ 9 ]
Combined with graphene it can be a photocatalyst, converting water and oxygen to reactive hydroxyl radical and superoxide. This reaction works when photons produce holes and electrons. The holes can neutralise hydroxide to make hydroxyl, and the electrons attach to oxygen to make superoxide. These reactive species can mineralise organic matter. | https://en.wikipedia.org/wiki/PtSe2 |
Ptaquiloside is a norsesquiterpene glucoside produced by bracken ferns (mainly Pteridium aquilinum ) during metabolism . It has been identified as the main carcinogen of the ferns and is responsible for their biological effects, such as haemorrhagic disease and bright blindness in livestock and oesophageal and gastric cancer in humans. Ptaquiloside has an unstable chemical structure and acts as a DNA alkylating agent under physiological conditions. It was first isolated and characterized by Yamada and co-workers in 1983. [ 2 ] [ 3 ]
In its pure form, ptaquiloside is a colorless amorphous compound. It is readily soluble in water and fairly soluble in ethyl acetate. Besides the plants themselves, ptaquiloside has been detected in the milk and meat of affected livestock, as well as in the groundwater and dry soil around bracken fern vegetation. [ 4 ] [ 5 ] [ 6 ] The prevalence of ptaquiloside in everyday sources, along with its carcinogenic effects, makes it a growing biological hazard.
The presence of ptaquiloside has been detected in a variety of ferns, including species in the genera Pteridium (bracken), Pteris , Microlepia , and Hypolepis . Pteridium aquilinum (commonly known as bracken fern) is the most common ptaquiloside-containing fern, with a wide geographical and ecological distribution. It is present on all continents, from subtropical to subarctic areas. Bracken fern is a very adaptable plant and is capable of forming dense, rapidly expanding populations during the first phases of ecological succession in forest clearings and other disturbed rural areas. Its aggressive growth, characterized by an extensive rhizome system and rapidly growing fronds , sometimes enables it to be a dominant species in certain plant communities. [ 7 ] The ptaquiloside content of bracken varies widely across species and changes with the part of the plant, the growing site and the collecting season. According to previous studies, the concentration of ptaquiloside in bracken varies between 0 and 1% of the dry weight of the plant. [ 8 ] [ 9 ] Generally, ptaquiloside occurs in the highest concentrations in the young developing parts of bracken, such as the croziers and unfolding fronds during the spring and early summer, while the concentrations in the rhizomes are rather low. [ 10 ] However, studies on the concentrations of ptaquiloside in Danish bracken by Rasmussen et al. showed that the concentrations in the rhizomes were significantly higher than the previously reported values. [ 11 ]
Ptaquiloside can pass into the milk produced by bracken-fed cows and sheep. In 1996, Alonso-Amelot, Smith and co-workers found that ptaquiloside was excreted in milk at a concentration of 8.6 ± 1.2% of the amount ingested by a cow from bracken, and that the excretion was linearly dose-dependent. On the basis of their experiments and the assumption that a person drinks 0.5 litres of milk daily, they estimated that this person might ingest about 10 mg of ptaquiloside per day, although only some of that amount will be absorbed. [ 4 ] Ptaquiloside can also leach from bracken leaves into water and soil. Numerous studies have reported the presence of ptaquiloside in underground and surface water, and in soil near bracken vegetation. [ 5 ] [ 6 ] The degradation speed of ptaquiloside in the soil is affected by the acidity , clay content, carbon content, temperature and presumably microbial activity. Acidic conditions (pH < 4) and high temperatures (at least 25 °C) facilitate ptaquiloside degradation, while the half-life of ptaquiloside in less acidic sandy soil is reported to be between 150 and 180 hours. [ 12 ]
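The reported half-life allows a quick persistence estimate. A minimal Python sketch, assuming simple first-order (exponential) decay — an assumption, since the source only quotes half-lives:

```python
# Fraction of ptaquiloside remaining after a given time, assuming
# first-order decay with the 150-180 h half-life range quoted above.
def fraction_remaining(hours: float, half_life_h: float) -> float:
    return 0.5 ** (hours / half_life_h)

for t_half in (150, 180):
    print(t_half, round(fraction_remaining(24 * 7, t_half), 2))
# After one week, roughly 46% remains at a 150 h half-life and 52% at 180 h.
```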
Main routes that can lead to human exposure to the toxic effects of bracken fern include ingestion of the plant (particularly the croziers and young fronds), inhalation of the airborne spores , consumption of the milk and meat of affected animals, and drinking ptaquiloside contaminated water. [ 13 ]
Ptaquiloside has an unstable chemical structure and readily undergoes glucose liberation. The resulting ptaquilodienone is the active form of ptaquiloside and accounts for the observed biological effects. The cyclopropyl group in the dienone is highly reactive as an electrophile , not only because it is conjugated with the keto group, but also because it constitutes a cyclopropyl carbinol system, from which the facile formation of the stable non-classical cation is well known.
In acidic conditions, ptaquiloside gradually undergoes aromatization with the elimination of D-glucose to afford ptaquilosin, and finally pterosin B. Under weakly alkaline conditions, ptaquiloside and its aglycone ptaquilosin are converted into an unstable conjugated dienone intermediate. This ptaquilodienone is the activated form of ptaquiloside and is regarded as the ultimate carcinogen of bracken ferns. Due to the constitution of a cyclopropyl carbinol system, ptaquilodienone is a strong electrophile and acts as a powerful alkylating agent that reacts directly with biological nucleophiles including amino acids , nucleosides , and nucleotides under weakly acidic conditions at room temperature (as shown in the scheme below). [ 14 ]
Under physiological conditions, ptaquiloside readily liberates glucose to produce the ptaquilodienone. The alkylation of amino acids with the dienone mostly takes place at the thiol group in cysteine , glutathione and methionine . The alkylation at the carboxylate group of each amino acid, forming the corresponding ester , is also observed to a small extent in the previously reported literature. The dienone reacts with both adenine (mainly at N-3) and guanine (mainly at N-7) residues of DNA to form DNA adducts. [ 15 ] The alkylation induces spontaneous depurination and cleavage of DNA at the adenine base site. In a model reaction with a deoxytetranucleotide (as shown on the right), a covalent adduct is found at a guanine residue and the N- glycosidic bond breaks to release the adduct. [ 14 ] In 1998, Prakash, Smith and co-workers showed that the alkylation of adenine by ptaquiloside in codon 61, followed by depurination and error in DNA synthesis, resulted in the activation of the H- ras proto- oncogene in the ileum of calves fed bracken. [ 16 ]
Bracken is known to have various biological effects, such as carcinogenicity and its well-defined syndromes in livestock and laboratory animals. Ptaquiloside is proved to be responsible for several of these biological effects, some of which are species specific. [ 10 ]
Cattle that consume bracken ferns develop acute bracken poisoning and chronic bovine enzootic haematuria (BEH). The main feature of acute bracken poisoning in cattle is the depression of bone marrow activity, which gives rise to severe leucopenia (particularly of the granulocytes), thrombocytopenia , and acute haemorrhagic crisis. [ 17 ] However, most researchers believe ptaquiloside is not the direct causative agent of acute bracken fern poisoning. The main features of BEH are urinary bladder tumors and haematuria in cattle after prolonged exposure to bracken. Extensive studies have shown a positive correlation between the ptaquiloside concentration and the incidence of BEH. [ 18 ] [ 8 ] [ 19 ]
Sheep fed by a diet containing bracken develop acute haemorrhagic disease and bright blindness. [ 20 ] The main features of the blindness include progressive retinal atrophy and stenosis of the blood vessels. [ 21 ] In 1993, Yamada group proved ptaquiloside was the compound causing retinal degeneration. [ 22 ]
Rats that were given a diet containing ptaquiloside for a prolonged period developed tumors in both the ileum and urinary bladder . Prakash, Smith and co-workers showed that ptaquiloside-induced carcinogenesis was initiated by the activation of the H- ras oncogene. [ 16 ] Other non-ruminants such as the pig, rabbit, and guinea pig also develop syndromes after ingestion of ptaquiloside, which include haematuria, tumors and organ abnormalities (see the diagram). [ 10 ]
Bracken fern increases the oncogenic risk in humans. Epidemiological survey revealed that bracken fern consumption was positively correlated with esophageal cancer and with gastric cancer in many geographical areas of the world. [ 23 ] In 1989, Natori and co-workers showed that ptaquiloside had clastogenic effect and caused chromosomal aberration in mammalian cells. [ 24 ] In 2003, Santos group reported significantly increased levels of chromosomal abnormalities , such as chromatid breaks in cultured peripheral lymphocytes . [ 25 ]
The use of bracken fern as human food is mainly a historical question. The rhizomes of these plants served as human food in Scotland during the First World War. In America (USA, Canada), Russia, China and Japan, the fern is grown commercially for human use. The usual procedure performed before eating the plant is to pre-treat the fern with boiling water in the presence of different chemicals, such as sodium bicarbonate and wood ash , to degrade or inactivate ptaquiloside and other toxic agents. Nevertheless, some carcinogenic activity persists even after the treatment. [ 10 ] [ 26 ] As shown by Kamon and Hirayama, the risk of oesophageal cancer was increased approximately 2.1-fold in men and 3.7-fold in women who regularly consume bracken in Japan. [ 27 ] Recent research has suggested that sulfur-containing amino acids can potentially be used under appropriate conditions as detoxifying agents for ptaquiloside [ 17 ] and that selenium supplementation can prevent as well as reverse the immunotoxic effects induced by ptaquiloside. [ 28 ]
Ptaquiloside in the aqueous extract of bracken can be detected using different instrumental methods: thin-layer chromatography –densitometry (TLC-densitometry), high-performance liquid chromatography (HPLC), gas chromatography–mass spectrometry (GC-MS), and liquid chromatography–mass spectrometry (LC-MS). The diagnostic tests for ptaquiloside inside cells include gene mutation detection, immunohistochemical detection of tumor biomarkers, chromosomal aberrations, oxidative stress for BEH, PCR , real-time PCR and DNase-SISPA (sequence-independent single primer amplification). [ 26 ]
In 1989 and 1993, Yamada and co-workers reported the first enantioselective total syntheses of both enantiomers of ptaquilosin, the aglycone of ptaquiloside. [ 29 ] [ 30 ] In the first step, the menthyl ester of cyclopentane-1,2-dicarboxylic acid 1 was partially hydrolyzed to afford the monomenthyl ester, which was then alkylated with methallyl bromide in the presence of HMPA to selectively produce 2 . The product 2 was then converted to the acid chloride and treated with stannic chloride to effect Friedel-Crafts acylation , giving enone 3 . Hydride reduction, selective oxidation of the allylic alcohol, and silylation were then performed to provide compound 4 . On treatment with base and a chloroethyl sulfonium salt, a mixture of spiro cyclopropanes was obtained. The minor product 5a can be isomerized with p-toluenesulfonic acid to 5b in 81% yield. Desaturation by selenylation/dehydroselenation and basic peroxide oxidation afforded epoxide 6 . Mild reduction, methyl Grignard addition , and oxidation gave compound 7 . Methylation of the cyclopentanone under Noyori's conditions using the TASF enolate produced a mixture of isomers. The undesired isomer 8a can be equilibrated with potassium tert-butoxide in 81% yield to exclusively generate 8b . Reduction , deprotection , and oxidation afforded 9 . On treatment with oxygen in warm ethyl acetate , the aldehyde on 9 was oxidized to the acyl radical for decarbonylation. Stereoselective trapping of the tertiary radical by oxygen gave the hydroperoxide 10 . Upon mild reduction, the naturally occurring (-)-ptaquilosin 11 was obtained. Yamada's synthesis proceeded in 20 steps with an overall yield of 2.9%. Similarly, the unnatural (+)-enantiomer of ptaquilosin was synthesized from the diastereomer of 2 .
Multiple synthetic studies directed towards ptaquilosin 11 have been reported since 1989. In 1994, Padwa and co-workers described the synthesis of the core skeleton of ptaquilosin by a highly convergent approach. [ 31 ] In 1995, Cossy and co-workers reported novel routes to the racemic and optically active ptaquilosin skeleton. Their properly functionalized tricyclic compound would be of great utility for the synthesis of 11 . [ 32 ] | https://en.wikipedia.org/wiki/Ptaquiloside |
A pteridophyte is a vascular plant (with xylem and phloem ) that reproduces by means of spores . Because pteridophytes produce neither flowers nor seeds , they are sometimes referred to as " cryptogams ", meaning that their means of reproduction is hidden. Seed plants themselves evolved from pteridophyte ancestors (see below).
Ferns , horsetails (often treated as ferns), and lycophytes ( clubmosses , spikemosses , and quillworts ) are all pteridophytes. However, they do not form a monophyletic group because ferns (and horsetails) are more closely related to seed plants than to lycophytes. "Pteridophyta" is thus no longer a widely accepted taxon, but the term pteridophyte remains in common parlance, as do pteridology and pteridologist as a science and its practitioner, for example by the International Association of Pteridologists and the Pteridophyte Phylogeny Group .
The name Pteridophyte is a Neo-Latin compound word created by English speakers around 1880. [ 1 ] It is formed from the prefix pterido- meaning fern, a Latin borrowing of the Greek word pterís which derives from pterón meaning feather. [ 2 ] The suffix, -phyte , is a suffix meaning plant from the ancient Greek word phyton (φυτόν). [ 3 ]
Pteridophytes (ferns and lycophytes) are free-sporing vascular plants that have a life cycle with alternating , free-living gametophyte and sporophyte phases that are independent at maturity. The body of the sporophyte is well differentiated into roots, stem and leaves. The root system is always adventitious . The stem is either underground or aerial. The leaves may be microphylls or megaphylls . Their other common characteristics include vascular plant apomorphies (e.g., vascular tissue ) and land plant plesiomorphies (e.g., spore dispersal and the absence of seeds ). [ 4 ] [ 5 ]
Of the pteridophytes, ferns account for nearly 90% of the extant diversity. [ 5 ] Smith et al. (2006), the first higher-level pteridophyte classification published in the molecular phylogenetic era, considered the ferns as monilophytes, as follows: [ 6 ]
where the monilophytes comprise about 9,000 species, including horsetails ( Equisetaceae ), whisk ferns (Psilotaceae), and all eusporangiate and all leptosporangiate ferns. Historically both lycophytes and monilophytes were grouped together as pteridophytes (ferns and fern allies) on the basis of being spore-bearing ("seed-free"). In Smith's molecular phylogenetic study the ferns are characterised by lateral root origin in the endodermis , usually mesarch protoxylem in shoots, a pseudoendospore, plasmodial tapetum , and sperm cells with 30-1000 flagella . [ 6 ] The term "moniliform" as in Moniliformopses and monilophytes means "bead-shaped" and was introduced by Kenrick and Crane (1997) [ 7 ] as a scientific replacement for "fern" (including Equisetaceae) and became established by Pryer et al. (2004). [ 8 ] Christenhusz and Chase (2014) in their review of classification schemes provide a critique of this usage, which they discouraged as irrational. In fact the alternative name Filicopsida was already in use. [ 9 ] By comparison "lycopod" or lycophyte (club moss) means wolf-plant. The term " fern ally " included under Pteridophyta generally refers to vascular spore-bearing plants that are not ferns, including lycopods, horsetails, whisk ferns and water ferns ( Marsileaceae , Salviniaceae and Ceratopteris ). This is not a natural grouping but rather a convenient term for non-fern, and is also discouraged, as is eusporangiate for non-leptosporangiate ferns. [ 10 ]
However both Infradivision and Moniliformopses are also invalid names under the International Code of Botanical Nomenclature . Ferns, despite forming a monophyletic clade , are formally only considered as four classes ( Psilotopsida ; Equisetopsida ; Marattiopsida ; Polypodiopsida ), 11 orders and 37 families , without assigning a higher taxonomic rank . [ 6 ]
Furthermore, within the Polypodiopsida, the largest grouping, a number of informal clades were recognised, including leptosporangiates, core leptosporangiates, polypods (Polypodiales), and eupolypods (including Eupolypods I and Eupolypods II ). [ 6 ]
In 2014 Christenhusz and Chase , summarising the knowledge at that time, treated this group as two separate, unrelated taxa in a consensus classification: [ 10 ]
These subclasses correspond to Smith's four classes, with Ophioglossidae corresponding to Psilotopsida.
The two major groups previously included in Pteridophyta are phylogenetically related as follows: [ 10 ] [ 11 ] [ 12 ]
Lycopodiophyta
Polypodiophyta – ferns
† Pteridospermatophyta
Gymnospermae
Angiospermae – flowering plants
Pteridophytes consist of two separate but related classes, whose nomenclature has varied. [ 6 ] [ 13 ] The system put forward by the Pteridophyte Phylogeny Group in 2016, PPG I , is: [ 5 ]
In addition to these living groups, several groups of pteridophytes are now extinct and known only from fossils . These groups include the Rhyniopsida , Zosterophyllopsida , Trimerophytopsida , the Lepidodendrales and the Progymnospermopsida .
Modern studies of land plants agree that seed plants emerged from within the pteridophytes and are more closely related to ferns than to lycophytes. Therefore, pteridophytes do not form a clade but constitute a paraphyletic grade.
Just as with bryophytes and spermatophytes (seed plants), the life cycle of pteridophytes involves alternation of generations . This means that a diploid generation (the sporophyte, which produces spores) is followed by a haploid generation (the gametophyte or prothallus , which produces gametes ). Pteridophytes differ from bryophytes in that the sporophyte is branched and generally much larger and more conspicuous, and from seed plants in that both generations are independent and free-living. The sexuality of pteridophyte gametophytes can be classified as follows:
These terms are not the same as monoecious and dioecious , which refer to whether a seed plant's sporophyte bears both male and female gametophytes, i.e., produces both pollen and seeds, or just one. | https://en.wikipedia.org/wiki/Pteridophyte |
The pterobranchia mitochondrial code (translation table 24 ) is a genetic code used by the mitochondrial genome of Rhabdopleura compacta ( Pterobranchia ). The Pterobranchia are one of the two groups in the Hemichordata which together with the Echinodermata and Chordata form the three major lineages of deuterostomes . AUA translates to isoleucine in Rhabdopleura as it does in the Echinodermata and Enteropneusta while AUA encodes methionine in the Chordata. The assignment of AGG to lysine is not found elsewhere in deuterostome mitochondria but it occurs in some taxa of Arthropoda . [ 1 ] This code shares with many other mitochondrial codes the reassignment of the UGA STOP to tryptophan , and AGG and AGA to an amino acid other than arginine . The initiation codons in Rhabdopleura compacta are ATG and GTG. [ 1 ]
Code 24 is very similar to mitochondrial code 33, which is used by the Cephalodiscidae, the other group within the Pterobranchia. [ 2 ]
Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U).
Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V)
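For illustration, the reassignments described above can be written as overrides of the standard genetic code. This is a minimal Python sketch, not a validated bioinformatics tool; the AGA → Ser assignment follows NCBI translation table 24, and the small standard-code dictionary covers only the demo codons:

```python
# Deviations of translation table 24 from the standard code (DNA alphabet).
TABLE_24_OVERRIDES = {
    "TGA": "W",  # standard: STOP;     table 24: tryptophan
    "AGA": "S",  # standard: arginine; table 24: serine
    "AGG": "K",  # standard: arginine; table 24: lysine
}
START_CODONS = {"ATG", "GTG"}  # initiation codons in Rhabdopleura compacta

def translate_codon(codon: str, standard: dict) -> str:
    """Look up a codon, applying the table 24 reassignments first."""
    return TABLE_24_OVERRIDES.get(codon, standard[codon])

# Demo with a small slice of the standard code (full 64-codon table omitted).
standard = {"ATG": "M", "ATA": "I", "AAA": "K", "TGG": "W"}
print([translate_codon(c, standard) for c in ("ATG", "TGA", "AGA", "AGG", "ATA")])
# -> ['M', 'W', 'S', 'K', 'I']; ATA stays isoleucine, as in echinoderm mitochondria
```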
This article incorporates text from the United States National Library of Medicine , which is in the public domain . [ 3 ]
| https://en.wikipedia.org/wiki/Pterobranchia_mitochondrial_code |
Pteroma pendula , the oil palm bagworm or simply bagworm , is a species of bagworm moth found in East and Southeast Asia that infests oil palm plantations. [ 1 ] [ 2 ] [ 3 ]
Pteroma pendula is among the most economically damaging pests of oil palm plantations in Malaysia and Indonesia, along with Metisa plana . [ 4 ] [ 5 ] [ 6 ] The caterpillars also feed on other trees and shrubs, including Acacia mangium , Delonix regia , Cassia fistula , and Callerya atropurpurea . [ 2 ] [ 7 ] A total of 31 different species have been identified as host plants for P. pendula . [ 8 ] Insecticides are the favoured method of controlling the moth in most commercial plantations. [ 8 ] Natural enemies such as predators , parasitoids , and fungi kill up to 4.85% of the population. [ 4 ] [ 6 ] [ 5 ]
The survival rate of P. pendula eggs differs based on the chosen host plant. [ 9 ] The species has six larval instars. [ 1 ] [ 6 ] Pupae are typically found on middle and lower fronds, while caterpillars move higher in search of fresh ones. [ 8 ] Dimorphism has been reported in the pupal and imago stages. [ 1 ] [ 6 ] Males generally live longer than females. [ 9 ]
P. pendula infestations can be detected by a number of symptoms. Holes in leaves and sometimes defoliation are some signs, and discolouration may also result. [ 1 ] [ 7 ] | https://en.wikipedia.org/wiki/Pteroma_pendula |
The table of chords , created by the Greek astronomer, geometer, and geographer Ptolemy in Egypt during the 2nd century AD, is a trigonometric table in Book I, chapter 11 of Ptolemy's Almagest , [ 1 ] a treatise on mathematical astronomy . It is essentially equivalent to a table of values of the sine function. It was the earliest trigonometric table extensive enough for many practical purposes, including those of astronomy (an earlier table of chords by Hipparchus gave chords only for arcs that were multiples of 7 + 1 / 2 ° = π / 24 radians ). [ 2 ] Since the 8th and 9th centuries, the sine and other trigonometric functions have been used in Islamic mathematics and astronomy, reforming the production of sine tables. [ 3 ] Khwarizmi and Habash al-Hasib later produced a set of trigonometric tables.
A chord of a circle is a line segment whose endpoints are on the circle. Ptolemy used a circle whose diameter is 120 parts. He tabulated the length of a chord whose endpoints are separated by an arc of n degrees, for n ranging from 1 / 2 to 180 by increments of 1 / 2 . In modern notation, the length of the chord corresponding to an arc of θ degrees is

chord(θ) = 120 sin(θ/2), with θ in degrees.
As θ goes from 0 to 180, the chord of a θ ° arc goes from 0 to 120. For tiny arcs, the chord is to the arc angle in degrees as π is to 3, or more precisely, the ratio can be made as close as desired to π / 3 ≈ 1.047 197 55 by making θ small enough. Thus, for the arc of 1 / 2 ° , the chord length is slightly more than the arc angle in degrees. As the arc increases, the ratio of the chord to the arc decreases. When the arc reaches 60° , the chord length is exactly equal to the number of degrees in the arc, i.e. chord 60° = 60. For arcs of more than 60°, the chord is less than the arc, until an arc of 180° is reached, when the chord is only 120.
The fractional parts of chord lengths were expressed in sexagesimal (base 60) numerals. For example, where the length of a chord subtended by a 112° arc is reported to be 99,29,5, it has a length of

99 + 29/60 + 5/60² ≈ 99.4847,

rounded to the nearest 1 / 60 2 . [ 1 ]
After the columns for the arc and the chord, a third column is labeled "sixtieths". For an arc of θ °, the entry in the "sixtieths" column is

( chord(( θ + 1/2)°) − chord( θ °) ) / 30.
This is the average number of sixtieths of a unit that must be added to chord( θ °) each time the angle increases by one minute of arc, between the entry for θ ° and that for ( θ + 1 / 2 )°. Thus, it is used for linear interpolation . Glowatzki and Göttsche showed that Ptolemy must have calculated chords to five sexagesimal places in order to achieve the degree of accuracy found in the "sixtieths" column. [ 4 ] [ 5 ]
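The construction of the table is straightforward to reproduce. Below is a minimal Python sketch (the function names are illustrative, not from any standard library) that computes chord values, their sexagesimal form, and the "sixtieths" column as defined above:

```python
import math

def chord(theta_deg: float) -> float:
    """Chord subtending theta degrees in a circle of diameter 120."""
    return 120 * math.sin(math.radians(theta_deg) / 2)

def to_sexagesimal(x: float) -> tuple[int, int, int]:
    """Integer part plus two base-60 fractional 'digits', rounded to 1/60**2."""
    total = round(x * 3600)
    whole, rest = divmod(total, 3600)
    first, second = divmod(rest, 60)
    return whole, first, second

def sixtieths(theta_deg: float) -> float:
    """Average chord increase per minute of arc over the next half degree."""
    return (chord(theta_deg + 0.5) - chord(theta_deg)) / 30

print(to_sexagesimal(chord(60)))    # (60, 0, 0): chord 60 deg equals 60 exactly
print(to_sexagesimal(chord(112)))   # (99, 29, 4); Ptolemy's table reads 99,29,5,
                                    # one unit high in the last place (see below)
```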
Chapter 10 of Book I of the Almagest presents geometric theorems used for computing chords. Ptolemy used geometric reasoning based on Proposition 10 of Book XIII of Euclid's Elements to find the chords of 72° and 36°. That Proposition states that if an equilateral pentagon is inscribed in a circle, then the area of the square on the side of the pentagon equals the sum of the areas of the squares on the sides of the hexagon and the decagon inscribed in the same circle.
He used Ptolemy's theorem on quadrilaterals inscribed in a circle to derive formulas for the chord of a half-arc, the chord of the sum of two arcs, and the chord of a difference of two arcs. The theorem states that for a quadrilateral inscribed in a circle , the product of the lengths of the diagonals equals the sum of the products of the two pairs of lengths of opposite sides. The derivations of trigonometric identities rely on a cyclic quadrilateral in which one side is a diameter of the circle.
To find the chords of arcs of 1° and 1 / 2 ° he used approximations based on Aristarchus's inequality . The inequality states that for arcs α and β , if 0 < β < α < 90°, then

sin α / sin β < α / β < tan α / tan β.
Ptolemy showed that for arcs of 1° and 1 / 2 °, the approximations correctly give the first two sexagesimal places after the integer part.
Gerald J. Toomer in his translation of the Almagest gives seven entries where some manuscripts have scribal errors, changing one "digit" (one letter, see below). Glenn Elert has made a comparison between Ptolemy's values and the true values (120 times the sine of half the angle) and has found that the root mean square error is 0.000136. But much of this is simply due to rounding off to the nearest 1/3600, since this equals 0.0002777... There are nevertheless many entries where the last "digit" is off by 1 (too high or too low) from the best rounded value. Ptolemy's values are often too high by 1 in the last place, and more so towards the higher angles. The largest errors are about 0.0004, which still corresponds to an error of only 1 in the last sexagesimal digit. [ 6 ]
Lengths of arcs of the circle, in degrees, and the integer parts of chord lengths, were expressed in a base 10 numeral system that used 21 of the letters of the Greek alphabet with the meanings given in the following table, and a symbol, "∠′ " , that means 1 / 2 and a raised circle "○" that fills a blank space (effectively representing zero). Three of the letters, labeled "archaic" in the table below, had not been in use in the Greek language for some centuries before the Almagest was written, but were still in use as numerals and musical notes .
Thus, for example, an arc of 143 + 1 / 2 ° is expressed as ρμγ ∠′. (As the table only reaches 180°, the Greek numerals for 200 and above are not used.)
The fractional parts of chord lengths required great accuracy, and were given in sexagesimal notation in two columns in the table: The first column gives an integer multiple of 1 / 60 , in the range 0–59, the second an integer multiple of 1 / 60 2 = 1 / 3600 , also in the range 0–59.
Thus in Heiberg's edition of the Almagest with the table of chords on pages 48–63 , the beginning of the table, corresponding to arcs from 1 / 2 ° to 7 + 1 / 2 °, looks like this:
Later in the table, one can see the base-10 nature of the numerals expressing the integer parts of the arc and the chord length. Thus an arc of 85° is written as πε ( π for 80 and ε for 5) and not broken down into 60 + 25. The corresponding chord length is 81 plus a fractional part. The integer part begins with πα , likewise not broken into 60 + 21. But the fractional part, 4/60 + 15/60², is written as δ , for 4, in the 1 / 60 column, followed by ιε , for 15, in the 1 / 60 2 column.
The table has 45 lines on each of eight pages, for a total of 360 lines. | https://en.wikipedia.org/wiki/Ptolemy's_table_of_chords |
In Euclidean geometry , Ptolemy's theorem is a relation between the four sides and two diagonals of a cyclic quadrilateral (a quadrilateral whose vertices lie on a common circle). The theorem is named after the Greek astronomer and mathematician Ptolemy (Claudius Ptolemaeus). [ 1 ] Ptolemy used the theorem as an aid to creating his table of chords , a trigonometric table that he applied to astronomy.
If the vertices of the cyclic quadrilateral are A , B , C , and D in order, then the theorem states that:

AC · BD = AB · CD + BC · AD.
This relation may be verbally expressed as follows: if a quadrilateral is inscribed in a circle, then the product of the lengths of its diagonals is equal to the sum of the products of the lengths of the two pairs of opposite sides.
Moreover, the converse of Ptolemy's theorem is also true: if, in a quadrilateral, the sum of the products of the lengths of its two pairs of opposite sides is equal to the product of the lengths of its diagonals, then the quadrilateral can be inscribed in a circle.
To appreciate the utility and general significance of Ptolemy’s Theorem, it is especially useful to study its main Corollaries .
Ptolemy's Theorem yields as a corollary a theorem [ 2 ] regarding an equilateral triangle inscribed in a circle.
Given: an equilateral triangle inscribed in a circle, and a point on the circle.
The distance from the point to the most distant vertex of the triangle is the sum of the distances from the point to the two nearer vertices.
Proof: Follows immediately from Ptolemy's theorem: if the triangle is ABC and P is the point on the arc BC, then PA · BC = PB · CA + PC · AB, and dividing through by the common side length gives PA = PB + PC.
This corollary has as an application an algorithm for computing minimal Steiner trees whose topology is fixed, by repeatedly replacing pairs of leaves of the tree A , B that should be connected to a Steiner point , by the third point C of their equilateral triangle. The unknown Steiner point must lie on arc AB of the circle, and this replacement ensures that, no matter where it is placed, the length of the tree remains unchanged. [ 3 ]
Any square can be inscribed in a circle whose center is the center of the square. If the common length of its four sides is equal to a , then the length of the diagonal is equal to a √2 according to the Pythagorean theorem , and Ptolemy's relation obviously holds.
More generally, if the quadrilateral is a rectangle with sides a and b and diagonal d then Ptolemy's theorem reduces to the Pythagorean theorem. In this case the center of the circle coincides with the point of intersection of the diagonals. The product of the diagonals is then d 2 , the right hand side of Ptolemy's relation is the sum a 2 + b 2 .
Copernicus – who used Ptolemy's theorem extensively in his trigonometrical work – refers to this result as a 'Porism' or self-evident corollary:
A more interesting example is the relation between the length a of the side and the (common) length b of the 5 chords in a regular pentagon. Ptolemy's relation here reads b ² = a ² + ab , and completing the square yields the golden ratio : [ 5 ]

b / a = (1 + √5) / 2.
If now diameter AF is drawn bisecting DC so that DF and CF are sides c of an inscribed decagon, Ptolemy's Theorem can again be applied – this time to cyclic quadrilateral ADFC with diameter d as one of its diagonals:
whence the side of the inscribed decagon is obtained in terms of the circle diameter. Pythagoras's theorem applied to right triangle AFD then yields "b" in terms of the diameter and "a" the side of the pentagon [ 7 ] is thereafter calculated as
As Copernicus (following Ptolemy) wrote,
The animation here shows a visual demonstration of Ptolemy's theorem, based on Derrick & Herstein (2012). [ 9 ]
Let ABCD be a cyclic quadrilateral .
On the chord BC, the inscribed angles ∠BAC = ∠BDC, and on AB, ∠ADB = ∠ACB.
Construct K on AC such that ∠ABK = ∠CBD; since ∠ABK + ∠CBK = ∠ABC = ∠CBD + ∠ABD, ∠CBK = ∠ABD.
Now, by common angles △ABK is similar to △DBC, and likewise △ABD is similar to △KBC.
Thus AK/AB = CD/BD, and CK/BC = DA/BD;
equivalently, AK⋅BD = AB⋅CD, and CK⋅BD = BC⋅DA.
By adding two equalities we have AK⋅BD + CK⋅BD = AB⋅CD + BC⋅DA, and factorizing this gives (AK+CK)·BD = AB⋅CD + BC⋅DA.
But AK+CK = AC, so AC⋅BD = AB⋅CD + BC⋅DA, Q.E.D. [ 10 ]
The proof as written is only valid for simple cyclic quadrilaterals. If the quadrilateral is self-crossing then K will be located outside the line segment AC. But in this case, AK−CK = ±AC, giving the expected result.
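The relation can also be spot-checked numerically (an illustration, not a proof): place four points in order on a unit circle and compare the two sides of Ptolemy's equality.

```python
import cmath, random

random.seed(1)
angles = sorted(random.uniform(0, 2 * cmath.pi) for _ in range(4))
A, B, C, D = (cmath.exp(1j * t) for t in angles)   # concyclic, in cyclic order

lhs = abs(A - C) * abs(B - D)                      # product of the diagonals
rhs = abs(A - B) * abs(C - D) + abs(B - C) * abs(D - A)
print(abs(lhs - rhs) < 1e-12)                      # True for a cyclic quadrilateral
```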
Let the inscribed angles subtended by AB , BC and CD be, respectively, α , β and γ , and the radius of the circle be R . Then AB = 2 R sin α , BC = 2 R sin β , CD = 2 R sin γ , AD = 2 R sin(180° − ( α + β + γ )), AC = 2 R sin( α + β ) and BD = 2 R sin( β + γ ), and the original equality to be proved is transformed to

sin( α + β ) sin( β + γ ) = sin α sin γ + sin β sin( α + β + γ ),

from which the factor 4 R ² has disappeared by dividing both sides of the equation by it.
Now by using the sum formulae, sin( x + y ) = sin x cos y + cos x sin y and cos( x + y ) = cos x cos y − sin x sin y , it is trivial to show that both sides of the above equation are equal to

sin α sin β cos β cos γ + cos α sin β cos β sin γ + cos α sin² β cos γ + sin α cos² β sin γ .
Q.E.D.
Here is another, perhaps more transparent, proof using rudimentary trigonometry.
Define a new quadrilateral ABCD′ inscribed in the same circle, where A , B , C are the same as in ABCD , and D′ , located at a new point on the same circle, is defined by |AD′| = |CD| and |CD′| = |AD|. (Picture triangle ACD flipped, so that vertex C moves to vertex A and vertex A moves to vertex C ; vertex D will now be located at a new point D′ on the circle.) Then ABCD′ has the same edge lengths, and consequently the same inscribed angles subtended by the corresponding edges, as ABCD , only in a different order: α , β and γ for, respectively, AB , BC and AD′ . Also, ABCD and ABCD′ have the same area. Then,
Q.E.D.
Choose an auxiliary circle Γ of radius r centered at D, with respect to which the circumcircle of ABCD is inverted into a line (see figure).
Then A′B′ + B′C′ = A′C′, and the lengths A′B′, B′C′ and A′C′ can be expressed as AB·DB′/DA, BC·DB′/DC and AC·DC′/DA respectively. Multiplying each term by DA·DC/DB′ and using DC′/DB′ = DB/DC yields Ptolemy's equality.
Q.E.D.
Note that if the quadrilateral is not cyclic then A', B' and C' form a triangle and hence A'B'+B'C' > A'C', giving us a very simple proof of Ptolemy's Inequality which is presented below.
Embed ABCD in the complex plane ℂ by identifying A ↦ zA, …, D ↦ zD as four distinct complex numbers zA, …, zD ∈ ℂ. Define the cross-ratio

ζ = ( (zA − zB)(zC − zD) ) / ( (zA − zD)(zB − zC) ).
Then

|zA − zC| |zB − zD| ≤ |zA − zB| |zC − zD| + |zA − zD| |zB − zC|,

with equality if and only if the cross-ratio ζ is a positive real number. This proves Ptolemy's inequality generally, as it remains only to show that zA, …, zD lie consecutively arranged on a circle (possibly of infinite radius, i.e. a line) in ℂ if and only if ζ ∈ ℝ>0.
From the polar form of a complex number z = |z| e^{i arg(z)}, it follows that ζ is a positive real number exactly when

arg[(zA − zB)(zC − zD)] = arg[(zA − zD)(zB − zC)],

with the last equality holding if and only if ABCD is cyclic, since a quadrilateral is cyclic if and only if opposite angles sum to π.
Q.E.D.
Note that this proof is equivalently made by observing that the cyclicity of ABCD, i.e. the supplementarity of ∠ABC and ∠CDA, is equivalent to the condition

arg[(zA − zB)(zC − zD)] = arg[(zA − zD)(zB − zC)] = arg[(zA − zC)(zB − zD)];

in particular there is a rotation of ℂ in which this arg is 0 (i.e. all three products are positive real numbers), and by which Ptolemy's theorem

|zA − zC| |zB − zD| = |zA − zB| |zC − zD| + |zA − zD| |zB − zC|

is then directly established from the simple algebraic identity

(zA − zB)(zC − zD) + (zA − zD)(zB − zC) = (zA − zC)(zB − zD).
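The cross-ratio condition itself is easy to verify by direct computation. A small Python sketch (under the definition of ζ given above) checks that ζ is a positive real number for four points taken in order on a circle:

```python
import cmath

# Four points in cyclic order on the unit circle.
zA, zB, zC, zD = (cmath.exp(1j * t) for t in (0.3, 1.1, 2.8, 5.0))

zeta = ((zA - zB) * (zC - zD)) / ((zA - zD) * (zB - zC))
print(abs(zeta.imag) < 1e-12 and zeta.real > 0)   # True: zeta is a positive real
```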
In the case of a circle of unit diameter the sides S1, S2, S3, S4 of any cyclic quadrilateral ABCD are numerically equal to the sines of the angles θ1, θ2, θ3 and θ4 which they subtend (see Law of sines ). Similarly the diagonals are equal to the sine of the sum of whichever pair of angles they subtend. We may then write Ptolemy's Theorem in the following trigonometric form:

sin(θ1 + θ2) sin(θ2 + θ3) = sin θ1 sin θ3 + sin θ2 sin θ4.
Applying certain conditions to the subtended angles θ1, θ2, θ3 and θ4 it is possible to derive a number of important corollaries using the above as our starting point. In what follows it is important to bear in mind that the sum of angles θ1 + θ2 + θ3 + θ4 = 180°.
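A quick numerical spot-check of the trigonometric form (illustrative only), using random angles subject to the 180° constraint:

```python
import math, random

random.seed(0)
t1, t2, t3 = (random.uniform(1, 50) for _ in range(3))
t4 = 180 - (t1 + t2 + t3)              # enforce theta1+theta2+theta3+theta4 = 180
s, r = math.sin, math.radians

lhs = s(r(t1 + t2)) * s(r(t2 + t3))    # product of the two diagonals
rhs = s(r(t1)) * s(r(t3)) + s(r(t2)) * s(r(t4))
print(abs(lhs - rhs) < 1e-12)          # True
```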
Let θ1 = θ3 and θ2 = θ4. Then θ1 + θ2 = θ3 + θ4 = 90° (since opposite angles of a cyclic quadrilateral are supplementary). Then: [ 11 ]

sin²θ1 + sin²θ2 = 1,

which is the Pythagorean theorem for the resulting rectangle.
Let θ2 = θ4. The rectangle of corollary 1 is now a symmetrical trapezium with equal diagonals and a pair of equal sides. The parallel sides differ in length by 2x units where:
It will be easier in this case to revert to the standard statement of Ptolemy's theorem:
The cosine rule for triangle ABC.
Let
Then
Therefore,
Formula for compound angle sine (+). [ 12 ]
Let $\theta_1 = 90^\circ$. Then $\theta_2 + (\theta_3 + \theta_4) = 90^\circ$, so that $\sin\theta_2 = \cos(\theta_3 + \theta_4)$, $\cos\theta_2 = \sin(\theta_3 + \theta_4)$ and $\sin(\theta_2 + \theta_3) = \cos\theta_4$. Hence,
$\sin\theta_3 = \sin(\theta_3 + \theta_4)\cos\theta_4 - \cos(\theta_3 + \theta_4)\sin\theta_4.$
Formula for compound angle sine (−). [ 12 ]
This derivation corresponds to the Third Theorem as chronicled by Copernicus following Ptolemy in Almagest . In particular if the sides of a pentagon (subtending 36° at the circumference) and of a hexagon (subtending 30° at the circumference) are given, a chord subtending 6° may be calculated. This was a critical step in the ancient method of calculating tables of chords. [ 13 ]
This corollary is the core of the Fifth Theorem as chronicled by Copernicus following Ptolemy in Almagest.
Let $\theta_3 = 90^\circ$. Then $\theta_1 + (\theta_2 + \theta_4) = 90^\circ$, so that $\sin\theta_1 = \cos(\theta_2 + \theta_4)$, $\sin(\theta_1 + \theta_2) = \cos\theta_4$ and $\cos(\theta_1 + \theta_2) = \sin\theta_4$. Hence
$\cos(\theta_2 + \theta_4) = \cos\theta_2 \cos\theta_4 - \sin\theta_2 \sin\theta_4.$
Formula for compound angle cosine (+)
Although it lacks the dexterity of modern trigonometric notation, it should be clear from the above corollaries that in Ptolemy's theorem (or more simply the Second Theorem ) the ancient world had at its disposal an extremely flexible and powerful trigonometric tool which enabled the cognoscenti of those times to draw up accurate tables of chords (corresponding to tables of sines) and to use these in their attempts to understand and map the cosmos as they saw it. Since tables of chords were drawn up by Hipparchus three centuries before Ptolemy, we must assume that Hipparchus knew of the 'Second Theorem' and its derivatives. Following the trail of ancient astronomers, history records the star catalogue of Timocharis of Alexandria. If, as seems likely, the compilation of such catalogues required an understanding of the 'Second Theorem', then the true origins of the latter disappear thereafter into the mists of antiquity; but it is not unreasonable to presume that the astronomers, architects and construction engineers of ancient Egypt may have had some knowledge of it.
The equation in Ptolemy's theorem is never true with non-cyclic quadrilaterals. Ptolemy's inequality is an extension of this fact, and it is a more general form of Ptolemy's theorem. It states that, given a quadrilateral ABCD , then
$AB \cdot CD + BC \cdot DA \geq AC \cdot BD,$
where equality holds if and only if the quadrilateral is cyclic . This special case is equivalent to Ptolemy's theorem.
Ptolemy's theorem gives the product of the diagonals (of a cyclic quadrilateral) knowing the sides. The following theorem yields the same for the ratio of the diagonals: [ 14 ]
$\frac{AC}{BD} = \frac{AB \cdot AD + CB \cdot CD}{BA \cdot BC + DA \cdot DC}.$
Proof : It is known that the area of a triangle $ABC$ inscribed in a circle of radius $R$ is $\mathcal{A} = \frac{AB \cdot BC \cdot CA}{4R}$.
Writing the area of the quadrilateral as the sum of two triangles sharing the same circumscribing circle, we obtain one relation for each decomposition:
$\mathcal{A}_{ABCD} = \frac{AC\,(AB \cdot BC + CD \cdot DA)}{4R} \qquad \text{and} \qquad \mathcal{A}_{ABCD} = \frac{BD\,(AB \cdot AD + CB \cdot CD)}{4R}.$
Equating, we obtain the announced formula.
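A short numerical check covers both the ratio formula just proved and the expressions for the individual diagonals deduced in the consequence below (a sketch; the quadrilateral is drawn at random on the unit circle):

    import numpy as np

    rng = np.random.default_rng(2)
    A, B, C, D = np.exp(1j * np.sort(rng.uniform(0, 2 * np.pi, 4)))  # cyclic quadrilateral
    d = lambda p, q: abs(p - q)                                      # chord length
    ratio = ((d(A, B) * d(A, D) + d(C, B) * d(C, D))
             / (d(B, A) * d(B, C) + d(D, A) * d(D, C)))
    prod = d(A, B) * d(C, D) + d(B, C) * d(A, D)                     # Ptolemy: AC * BD
    assert abs(d(A, C) / d(B, D) - ratio) < 1e-9
    assert abs(d(A, C) - np.sqrt(prod * ratio)) < 1e-9
    assert abs(d(B, D) - np.sqrt(prod / ratio)) < 1e-9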
Consequence : Knowing both the product and the ratio of the diagonals, we deduce their immediate expressions:
$AC^2 = \frac{(AB \cdot CD + BC \cdot DA)(AB \cdot AD + CB \cdot CD)}{BA \cdot BC + DA \cdot DC} \qquad \text{and} \qquad BD^2 = \frac{(AB \cdot CD + BC \cdot DA)(BA \cdot BC + DA \cdot DC)}{AB \cdot AD + CB \cdot CD}.$ | https://en.wikipedia.org/wiki/Ptolemy's_theorem |
Ptychography (/taɪˈkɒɡrəfi/ ty-KOG-rə-fee) [ 1 ] is a computational method of microscopic imaging . [ 2 ] [ 3 ] It generates images by processing many coherent interference patterns that have been scattered from an object of interest. Its defining characteristic is translational invariance , which means that the interference patterns are generated by one constant function (e.g. a field of illumination or an aperture stop ) moving laterally by a known amount with respect to another constant function (the specimen itself or a wave field). The interference patterns occur some distance away from these two components, so that the scattered waves spread out and "fold" ( Ancient Greek : πτυχή , ptychē , 'fold' [ 4 ] ) into one another as shown in the figure.
Ptychography can be used with visible light , X-rays , extreme ultraviolet (EUV) or electrons . Unlike conventional lens imaging, ptychography is unaffected by lens-induced aberrations or diffraction effects caused by limited numerical aperture . [ 5 ] This is particularly important for atomic-scale wavelength imaging, where it is difficult and expensive to make good-quality lenses with high numerical aperture. Another important advantage of the technique is that it allows transparent objects to be seen very clearly. This is because it is sensitive to the phase of the radiation that has passed through a specimen, and so it does not rely on the object absorbing radiation. In the case of visible-light biological microscopy, this means that cells do not need to be stained or labelled to create contrast.
Although the interference patterns used in ptychography can only be measured in intensity , the mathematical constraints provided by the translational invariance of the two functions (illumination and object), together with the known shifts between them, means that the phase of the wavefield can be recovered by an inverse computation . Ptychography thus provides a comprehensive solution to the so-called " phase problem ". Once this is achieved, all the information relating to the scattered wave ( modulus and phase) has been recovered, and so virtually perfect images of the object can be obtained. There are various strategies for performing this inverse phase-retrieval calculation, including direct Wigner distribution deconvolution (WDD) [ 6 ] and iterative methods. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] The difference map algorithm developed by Thibault and co-workers [ 10 ] is available in a downloadable package called PtyPy . [ 12 ]
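The flavour of the iterative approach can be conveyed in a few lines. The following is a minimal single-sweep sketch in the style of the ePIE algorithm (one of the iterative methods cited above); the array shapes, update step sizes and the rectangular scan grid are illustrative, and this is not the PtyPy API:

    import numpy as np

    def epie_sweep(obj, probe, positions, patterns, alpha=0.9, beta=0.9):
        # One sweep of an ePIE-style update. obj: complex object estimate (H, W);
        # probe: complex probe estimate (h, w); positions: (row, col) offsets of
        # the probe on the object; patterns: measured far-field intensities,
        # one (h, w) array per scan position.
        h, w = probe.shape
        for (r, c), I in zip(positions, patterns):
            box = (slice(r, r + h), slice(c, c + w))
            o_old, p_old = obj[box].copy(), probe.copy()
            exit_wave = o_old * p_old                      # multiplicative model
            F = np.fft.fft2(exit_wave)
            F = np.sqrt(I) * np.exp(1j * np.angle(F))      # impose measured modulus
            correction = np.fft.ifft2(F) - exit_wave
            # Joint updates, each normalised by the peak power of the other function:
            obj[box] = o_old + alpha * np.conj(p_old) / (np.abs(p_old) ** 2).max() * correction
            probe = p_old + beta * np.conj(o_old) / (np.abs(o_old) ** 2).max() * correction
        return obj, probe

Repeated sweeps over a scan with substantial overlap between neighbouring illumination positions (commonly of order 60–70%) drive both estimates towards consistency with all of the measured patterns at once.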
There are many optical configurations for ptychography: mathematically, it requires two invariant functions that move across one another while an interference pattern generated by the product of the two functions is measured. The interference pattern can be a diffraction pattern , a Fresnel diffraction pattern or, in the case of Fourier ptychography , an image . In a Fourier ptychographic image, the "ptycho" convolution derives from the impulse response function of the lens .
This is conceptually the simplest ptychographical arrangement. [ 13 ] The detector can either be a long way from the object (i.e. in the Fraunhofer diffraction plane ), or closer by, in the Fresnel regime. An advantage of the Fresnel regime is that there is no longer a very high-intensity beam at the centre of the diffraction pattern, which can otherwise saturate the detector pixels there.
A lens is used to form a tight crossover of the illuminating beam at the plane of the specimen. The configuration is used in the scanning transmission electron microscope (STEM) , [ 14 ] [ 15 ] and often in high-resolution X-ray ptychography. The specimen is sometimes shifted up or downstream of the probe crossover so as to allow the size of the patch of illumination to be increased, thus requiring fewer diffraction patterns to scan a wide field of view .
This uses a wide field of illumination. To provide magnification , a diverging beam is incident on the specimen. An out-of-focus image, which appears as a Fresnel interference pattern, is projected onto the detector. The illumination must have phase distortions in it, often provided by a diffuser that scrambles the phase of the incident wave before it reaches the specimen, otherwise the image remains constant as the specimen is moved, so there is no new ptychographical information from one position to the next. [ 16 ] In the electron microscope , a lens can be used to map the magnified Fresnel image onto the detector.
A conventional microscope is used with a relatively small numerical aperture objective lens . The specimen is illuminated from a series of different angles. Parallel beams coming out of the specimen are brought to a focus in the back focal plane of the objective lens, which is therefore a Fraunhofer diffraction pattern of the specimen exit wave ( Abbe ’s theorem). Tilting the illumination has the effect of shifting the diffraction pattern across the objective aperture (which also lies in the back focal plane). Now the standard ptychographical shift invariance principle applies, except that the diffraction pattern is acting as the object and the back focal plane stop is acting like the illumination function in conventional ptychography. The image is in the Fraunhofer diffraction plane of these two functions (another consequence of Abbe's theorem), just like in conventional ptychography. The only difference is that the method reconstructs the diffraction pattern, which is much wider than the aperture stop limitation. A final Fourier transform must be undertaken to produce the high-resolution image. All the reconstruction algorithms used in conventional ptychography apply to Fourier ptychography, and indeed nearly all the diverse extensions of conventional ptychography have been used in Fourier ptychography. [ 17 ]
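The shift-across-the-pupil picture translates directly into a toy forward model (a sketch; the grid size, pupil radius and the mapping from illumination angle to an integer spectrum shift are all illustrative):

    import numpy as np

    def fourier_ptychography_frame(obj, pupil, kshift):
        # One simulated low-resolution capture: tilting the illumination shifts
        # the object spectrum across the fixed pupil in the back focal plane.
        spectrum = np.fft.fftshift(np.fft.fft2(obj))
        shifted = np.roll(spectrum, kshift, axis=(0, 1))   # kshift: (dky, dkx) pixels
        field = np.fft.ifft2(np.fft.ifftshift(shifted * pupil))
        return np.abs(field) ** 2                          # the camera records intensity only

    n = 256
    ky, kx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    pupil = ((kx ** 2 + ky ** 2) <= 40 ** 2).astype(float)  # small-NA circular stop
    frame = fourier_ptychography_frame(np.ones((n, n)), pupil, (10, -5))

Reconstruction then amounts to stitching the differently shifted, pupil-filtered patches of the spectrum back together, which is why the recovered spectrum — and hence the final image resolution — can greatly exceed the aperture-stop limit.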
A lens is used to make a conventional image. An aperture in the image plane acts equivalently to the illumination in conventional ptychography, while the image corresponds to the specimen. The detector lies in the Fraunhofer or Fresnel diffraction plane downstream of the image and aperture. [ 18 ]
This geometry can be used either to map surface features or to measure strain in crystalline specimens . Shifts in the specimen surface, or the atomic Bragg planes perpendicular to the surface, appear in the phase of the ptychographic image. [ 19 ] [ 20 ] [ 21 ]
Vectorial ptychography needs to be invoked when the multiplicative model of the interaction between the probe and the specimen cannot be described by scalar quantities. [ 22 ] This happens typically when polarized light probes an anisotropic specimen, and when this interaction modifies the state of polarization of light. In that case, the interaction needs to be described by the Jones formalism , [ 23 ] where field and object are described by a two-component complex vector and a 2×2 complex matrix respectively. The optical configuration for vectorial ptychography is similar to that of classical (scalar) ptychography, although a control of light polarization (before and after the specimen) needs to be implemented in the setup. Jones maps of the specimens can be retrieved, allowing the quantification of a wide range of optical properties (phase, birefringence , orientation of neutral axes, diattenuation , etc.). [ 24 ] Similarly to scalar ptychography, the probes used for the measurement can be jointly estimated together with the specimen. [ 25 ] As a consequence, vectorial ptychography is also an elegant approach for quantitative imaging of coherent vectorial light beams (mixing wavefront and polarization features). [ 26 ]
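In code, the change from scalar to vectorial ptychography is simply that the per-pixel product of two complex numbers becomes a per-pixel 2×2 Jones matrix acting on a two-component field (a sketch, with an illustrative, spatially uniform quarter-wave retarder standing in for the specimen):

    import numpy as np

    H, W = 64, 64
    probe = np.broadcast_to(np.array([1.0, 0.0], dtype=complex), (H, W, 2))  # H-polarised field
    retarder = np.array([[1.0, 0.0], [0.0, 1.0j]], dtype=complex)            # quarter-wave plate
    J = np.tile(retarder, (H, W, 1, 1))                # Jones-matrix object, one 2x2 per pixel
    exit_wave = np.einsum('hwij,hwj->hwi', J, probe)   # vectorial exit wave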
Ptychography can be undertaken without using any lenses at all, [ 13 ] [ 16 ] although most implementations use a lens of some type, if only to condense radiation onto the specimen. The detector can measure high angles of scatter , which do not need to pass through a lens. The resolution is therefore only limited by the maximal angle of scatter that reaches the detector, and so avoids the effects of diffraction broadening due to a lens of small numerical aperture or aberrations within the lens. This is key in X-ray, electron and EUV ptychography, where conventional lenses are difficult and expensive to make.
Ptychography solves for the phase induced by the real part of the refractive index of the specimen, as well as absorption (the imaginary part of the refractive index). This is crucial for seeing transparent specimens that do not have significant natural absorption contrast, for example biological cells (at visible light wavelengths ), [ 27 ] thin high-resolution electron microscopy specimens, [ 28 ] and almost all materials at hard X-ray wavelengths. In the latter case, the ( linear ) phase signal is also ideal for high-resolution X-ray ptychographic tomography . [ 29 ] The strength and contrast of the phase signal also means that far fewer photon or electron counts are needed to make an image : this is very important in electron ptychography, where damage to the specimen is a major issue that must be avoided at all costs. [ 30 ]
Unlike holography , ptychography uses the object itself as an interferometer . It does not require a reference beam . Although holography can solve the image phase problem, it is very difficult to implement in the electron microscope, where the reference beam is extremely sensitive to magnetic interference or other sources of instability. This is why ptychography is not limited by the conventional "information limit" in conventional electron imaging . [ 31 ] Furthermore, ptychographical data is sufficiently diverse to remove the effects of partial coherence that would otherwise affect the reconstructed image. [ 6 ] [ 32 ]
The ptychographical data set can be posed as a blind deconvolution problem . [ 10 ] [ 11 ] [ 33 ] It has sufficient diversity to solve for both the moving functions (illumination and object), which appear symmetrically in the mathematics of the inversion process. This is now routinely done in any ptychographical experiment , even if the illumination optics have been previously well characterised. Diversity can also be used to solve retrospectively for errors in the offsets of the two functions, blurring in the scan, detector faults, like missing pixels, etc.
In conventional imaging, multiple scattering in a thick sample can seriously complicate, or even entirely invalidate, simple interpretation of an image. This is especially true in electron imaging (where multiple scattering is called " dynamical scattering "). Conversely, ptychography generates estimates of hundreds or thousands of exit waves, each of which contains different scattering information. This can be used to retrospectively remove multiple scattering effects. [ 34 ]
The number of counts required for a ptychography experiment is the same as for a conventional image, even though the counts are distributed over very many diffraction patterns. This is because dose fractionation applies to ptychography. Maximum-likelihood methods can be employed to reduce the effects of Poisson noise . [ 35 ]
Applications of ptychography are diverse because it can be used with any type of radiation that can be prepared as a quasi-monochromatic propagating wave.
Ptychographic imaging, along with advances in detectors and computing, has resulted in the development of X-ray microscopes. [ 36 ] [ 37 ] Coherent beams are required in order to obtain far-field diffraction patterns with speckle patterns. Coherent X-ray beams can be produced by modern synchrotron radiation sources, free-electron lasers and high-harmonic sources. In terms of routine analysis, X-ray ptychotomography [ 29 ] is today the most commonly used technique. It has been applied to many materials problems including, for example, the study of paint , [ 38 ] imaging battery chemistry , [ 39 ] imaging stacked layers of tandem solar cells , [ 40 ] and the dynamics of fracture . [ 41 ] In the X-ray regime, ptychography has also been used to obtain a 3D mapping of the disordered structure in the white Cyphochilus beetle, [ 42 ] and a 2D imaging of the domain structure in a bulk heterojunction for polymer solar cells. [ 43 ]
Visible-light ptychography has been used for imaging live biological cells and studying their growth, reproduction and motility. [ 44 ] In its vectorial version, it can also be used for mapping quantitative optical properties of anisotropic materials such as biominerals [ 24 ] or metasurfaces. [ 45 ]
Electron ptychography is uniquely (amongst other electron imaging modes ) sensitive to both heavy and light atoms simultaneously. It has been used, for example, in the study of nanostructure drug-delivery mechanisms by looking at drug molecules stained by heavy atoms within light carbon nanotubes cages. [ 15 ] With electron beams , shorter-wavelength, higher-energy electrons used for higher-resolution imaging can cause damage to the sample by ionising it and breaking bonds, but electron-beam ptychography has now produced record-breaking images of molybdenum disulphide with a resolution of 0.039 nm using a lower-energy electron beam and detectors that are able to detect single electrons, so atoms can be located with more precision. [ 30 ] [ 46 ]
Ptychography has several applications in the semiconductor industry, including imaging of device surfaces using EUV , [ 47 ] of their 3D bulk structure using X-rays, [ 48 ] and mapping of strain fields by Bragg ptychography, for example, in nanowires . [ 49 ]
The name "ptychography" was coined by Hegerl and Hoppe in 1970 [ 51 ] to describe a solution to the crystallographic phase problem first suggested by Hoppe in 1969. [ 52 ] The idea required the specimen to be highly ordered (a crystal ) and to be illuminated by a precisely engineered wave so that only two pairs of diffraction peaks interfere with one another at a time. A shift in the illumination changes the interference condition (by the Fourier shift theorem ). The two measurements can be used to solve for the relative phase between the two diffraction peaks by breaking a complex-conjugate ambiguity that would otherwise exist. [ 53 ] Although the idea encapsulates the underlying concept of interference by convolution (ptycho) and translational invariance, crystalline ptychography cannot be used for imaging of continuous objects, because very many (up to millions) of beams interfere at the same time, and so the phase differences are inseparable. Hoppe abandoned his concept of ptychography in 1973.
Between 1989 and 2007 Rodenburg and co-workers developed various inversion methods for the general ptychographic imaging phase problem, including Wigner-distribution deconvolution (WDD), [ 6 ] SSB, [ 14 ] and the "PIE" iterative method [ 7 ] (a precursor of the "ePIE" algorithm [ 11 ] ), demonstrating proofs of principle at various wavelengths. [ 14 ] [ 54 ] [ 55 ] Chapman used the WDD inversion method to demonstrate the first implementation of X-ray ptychography in 1996. [ 56 ] The limited computing power and poor quality of detectors at that time may account for the fact that ptychography was not at first taken up by other workers.
Widespread interest in ptychography only started after the first demonstration of iterative phase-retrieval X-ray ptychography in 2007 at the Swiss Light Source (SLS), [ 55 ] which was initiated by the pioneering Coherent Diffractive Imaging experiment conducted by Miao in 1999. [ 57 ] Progress at X-ray wavelengths was then quick. By 2010, the SLS had developed X-ray ptychotomography, [ 29 ] now a major application of the technique. Thibault, also working at the SLS, developed the difference-map (DM) iterative inversion algorithm and mixed-state ptychography. [ 10 ] [ 32 ] Since 2010, several groups have developed the capabilities of ptychography to characterize and improve reflective [ 58 ] and refractive X-ray optics . [ 59 ] [ 60 ] Bragg ptychography, for measuring strain in crystals , was demonstrated by Hruszkewycz in 2012. [ 19 ] In 2012 it was also shown that electron ptychography could improve on the resolution of an electron lens by a factor of five, [ 61 ] a method which was used in 2018 to provide the highest- resolution transmission image ever obtained [ 30 ] earning a Guinness world record , [ 62 ] and once again in 2021 to achieve an even better resolution. [ 63 ] [ 64 ] [ 65 ] Real-space light ptychography became available in a commercial system for live-cell imaging in 2013. [ 27 ] Fourier ptychography using iterative methods was also demonstrated by Zheng et al. [ 17 ] in 2013, a field which is growing rapidly. The group of Margaret Murnane and Henry Kapteyn at JILA , CU Boulder demonstrated EUV reflection ptychographic imaging in 2014. [ 20 ] | https://en.wikipedia.org/wiki/Ptychography |
Plutonium carbide comes in several stoichiometries (PuC and Pu 2 C 3 ). [ 1 ] It can be used as a nuclear fuel for nuclear reactors in conjunction with uranium carbide . The mixture is also labeled as uranium-plutonium carbide (UPuC).
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pu2C3 |
Plutonium arsenide is a binary inorganic compound of plutonium and arsenic with the formula PuAs.
Fusion of stoichiometric amounts of the pure elements in a vacuum or helium atmosphere gives PuAs. [ 1 ] The reaction is exothermic:
Pu + As → PuAs
Passing arsine through heated plutonium hydride :
PuH3 + AsH3 → PuAs + 3 H2
Plutonium arsenide forms black or dark gray crystals of the cubic crystal system , [ 2 ] space group Fm3m , cell parameter a = 0.5855 nm, Z = 4, with a NaCl -type structure. [ 3 ]
At high pressure (about 35 GPa), a phase transition occurs to a structure of the CsCl -type. [ 4 ]
At a temperature of 129 K, PuAs transforms into a ferromagnetic state. [ 5 ] | https://en.wikipedia.org/wiki/PuAs |
Plutonium(III) bromide is an inorganic salt of bromine and plutonium with the formula PuBr 3 . This radioactive green solid has few uses, however its crystal structure is often used as a structural archetype in crystallography .
The PuBr 3 crystal structure was first published in 1948 by William Houlder Zachariasen . [ 2 ] The compound forms orthorhombic crystals, within which the Pu atoms adopt an 8-coordinate bicapped trigonal prismatic arrangement. Its Pearson symbol is oS16 with the corresponding space group No. 63 (in International Union of Crystallography classification) or Cmcm (in Hermann–Mauguin notation ). The majority of trivalent chloride and bromide salts of the lanthanides and actinides crystallise in the PuBr 3 structure.
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/PuBr3 |
Plutonium carbide comes in several stoichiometries (PuC and Pu 2 C 3 ). [ 1 ] It can be used as a nuclear fuel for nuclear reactors in conjunction with uranium carbide . The mixture is also labeled as uranium-plutonium carbide (UPuC).
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/PuC |
Plutonium(III) chloride is a chemical compound with the formula PuCl 3 . This ionic plutonium salt can be prepared by reacting the metal with hydrochloric acid .
Plutonium atoms in crystalline PuCl 3 are 9 coordinate, and the structure is tricapped trigonal prismatic . It crystallizes as the trihydrate, and forms lavender-blue solutions in water. [ 2 ]
As with all plutonium compounds, it is subject to control under the Nuclear Non-Proliferation Treaty . Due to the radioactivity of plutonium, all of its compounds, PuCl 3 included, are warm to the touch. Such contact is not recommended, since touching the material may result in serious injury.
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it .
This nuclear technology article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/PuCl3 |
Plutonium(III) fluoride or plutonium trifluoride is the chemical compound composed of plutonium and fluorine with the formula PuF 3 . This salt forms violet crystals. Plutonium(III) fluoride has the LaF 3 structure where the coordination around the plutonium atoms is complex and usually described as tri-capped trigonal prismatic. [ 3 ]
A plutonium(III) fluoride precipitation method has been investigated as an alternative to the typical plutonium peroxide method of recovering plutonium from solution, such as that from a nuclear reprocessing plant. [ 4 ] A 1957 study by the Los Alamos National Laboratory reported a less effective recovery than the traditional method, [ 5 ] while a more recent study sponsored by the United States Office of Scientific and Technical Information found it to be one of the more effective methods. [ 6 ]
Plutonium(III) fluoride can be used for the manufacture of plutonium-gallium alloy instead of the more difficult-to-handle metallic plutonium.
Plutonium(IV) fluoride is a chemical compound with the formula PuF 4 . This salt is generally a brown solid but can appear a variety of colors depending on the grain size, purity, moisture content, lighting, and presence of contaminants. [ 4 ] [ 5 ] Its primary use in the United States has been as an intermediary product in the production of plutonium metal for nuclear weapons usage. [ 3 ]
Plutonium(IV) fluoride is produced in the reaction between plutonium dioxide (PuO 2 ) or plutonium(III) fluoride (PuF 3 ) with hydrofluoric acid (HF) in a stream of oxygen (O 2 ) at 450 to 600 °C. The main purpose of the oxygen stream is to avoid reduction of the product by hydrogen gas, small amounts of which are often found in HF. [ 6 ]
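Balanced equations consistent with this description are shown below; the dioxide route is the standard one, while the trifluoride route is balanced here on the assumption that the oxygen stream supplies the necessary oxidation:

PuO2 + 4 HF → PuF4 + 2 H2O
4 PuF3 + 4 HF + O2 → 4 PuF4 + 2 H2O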
Laser irradiation of plutonium hexafluoride (PuF 6 ) at wavelengths under 520 nm causes it to decompose into plutonium pentafluoride (PuF 5 ) and fluorine; if this is continued, plutonium(IV) fluoride is obtained. [ 7 ]
In terms of its structure, solid plutonium(IV) fluoride features 8-coordinate Pu centers interconnected by doubly bridging fluoride ligands. [ 8 ]
Reaction of plutonium tetrafluoride with barium, calcium, or lithium at 1200 °C gives Pu metal, for example: [ 4 ] [ 5 ] [ 3 ]
PuF4 + 2 Ca → Pu + 2 CaF2
Plutonium hexafluoride is the highest fluoride of plutonium , and is of interest for laser enrichment of plutonium, in particular for the production of pure plutonium-239 from irradiated uranium. This isotope of plutonium is needed to avoid premature ignition of low-mass nuclear weapon designs by neutrons produced by spontaneous fission of plutonium-240 .
Plutonium hexafluoride is prepared by fluorination of plutonium tetrafluoride (PuF 4 ) by powerful fluorinating agents such as elemental fluorine: [ 2 ] [ 3 ] [ 4 ] [ 5 ]
PuF4 + F2 → PuF6
This reaction is endothermic . The product forms relatively quickly at temperatures of 750 °C, and high yields may be obtained by quickly condensing the product and removing it from equilibrium. [ 5 ]
It can also be obtained by fluorination of plutonium(III) fluoride , plutonium(IV) oxide , or plutonium(IV) oxalate at approximately 700 °C, for example: [ 4 ] [ 6 ]
2 PuF3 + 3 F2 → 2 PuF6
PuO2 + 3 F2 → PuF6 + O2
Alternatively, plutonium(IV) fluoride oxidizes in an oxygen atmosphere at 800 °C to plutonium hexafluoride and plutonium(IV) oxide : [ 7 ]
3 PuF4 + O2 → 2 PuF6 + PuO2
In 1984, the synthesis of plutonium hexafluoride at near–room-temperatures was achieved through the use of dioxygen difluoride . [ 8 ] [ 9 ] Hydrogen fluoride is not sufficient [ 10 ] : 42 even though it is a powerful fluorinating agent. Room temperature syntheses are also possible by using krypton difluoride [ 11 ] or irradiation with UV light. [ 12 ]
Plutonium hexafluoride is a red-brown volatile solid, [ 1 ] [ 4 ] crystallizing in the orthorhombic crystal system with space group Pnma and lattice parameters a = 995 pm , b = 902 pm , and c = 526 pm . [ 13 ] It sublimes at around 60 °C, with a heat of sublimation of 12.1 kcal/mol, to a gas of octahedral molecules [ 2 ] with plutonium–fluorine bond lengths of 197.1 pm. [ 14 ] At high pressure, the gas condenses , with a triple point at 51.58 °C and 710 hPa (530 Torr); the heat of vaporization is 7.4 kcal/mol. [ 13 ] At temperatures below −180 °C, plutonium hexafluoride is colorless. [ 4 ]
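The quoted figures allow a rough room-temperature vapour-pressure estimate via the Clausius–Clapeyron relation (a back-of-envelope sketch that assumes the heat of sublimation is constant between room temperature and the triple point):

    import numpy as np

    R = 8.314                              # J/(mol*K)
    dH = 12.1 * 4184                       # quoted heat of sublimation, J/mol
    T_tp, p_tp = 51.58 + 273.15, 710.0     # quoted triple point: K, hPa
    T = 298.15                             # room temperature, K
    p = p_tp * np.exp(-dH / R * (1.0 / T - 1.0 / T_tp))
    print(f"{p:.0f} hPa")                  # ~130 hPa, i.e. roughly 100 Torr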
Plutonium hexafluoride is paramagnetic , with molar magnetic susceptibility 0.173 mm 3 /mol. [ 15 ]
Plutonium hexafluoride has six normal modes of vibration: stretching modes ν 1 , ν 2 , and ν 3 and bending modes ν 4 , ν 5 , and ν 6 . [ 16 ] [ 17 ] The PuF 6 Raman spectrum cannot be observed, because irradiation at 564.1 nm induces photochemical decomposition. [ 18 ] Irradiation at 532 nm induces fluorescence at 1900 nm and 4800 nm; irradiation at 1064 nm induces fluorescence at about 2300 nm. [ 19 ] [ 20 ]
Plutonium hexafluoride is relatively hard to handle, being very corrosive, poisonous, and prone to auto- radiolysis . [ 22 ] [ 23 ] [ 24 ]
PuF 6 is stable in dry air, but reacts vigorously with water, including atmospheric moisture, to form plutonium(VI) oxyfluoride and hydrofluoric acid: [ 3 ] [ 25 ]
PuF6 + 2 H2O → PuO2F2 + 4 HF
It can be stored for a long time in a quartz or pyrex ampoule , provided there are no traces of moisture, the glass has been thoroughly outgassed , and any traces of hydrogen fluoride have been removed from the compound. [ 26 ]
An important reaction involving PuF 6 is the reduction to plutonium dioxide . Carbon monoxide generated from an oxygen-methane flame can perform the reduction. [ 27 ]
Plutonium hexafluoride typically decomposes to plutonium tetrafluoride and fluorine gas. Thermal decomposition does not occur at room temperature, [ 28 ] [ 29 ] but proceeds very quickly at 280 °C. [ 5 ] [ 26 ] In the absence of any external cause for decomposition, the alpha-particle current from plutonium decay will generate auto-radiolysis , at a rate of 1.5%/day ( half-time 1.5 months) in solid phase. [ 5 ] [ 23 ] [ 30 ] Storage in gas phase at pressures 50–100 torr (70–130 mbar) appears to minimize auto-radiolysis, and long-term recombination with freed fluorine does occur. [ 31 ] [ unreliable source? ]
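The quoted rate and half-time are mutually consistent: for a first-order loss process, $t_{1/2} = \ln 2 / \lambda = 0.693 / (0.015\ \mathrm{day}^{-1}) \approx 46\ \mathrm{days}$, i.e. about 1.5 months.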
Likewise, the compound is photosensitive , decomposing (possibly to plutonium pentafluoride and fluorine ) under laser irradiation at a wavelength of less than 520 nm. [ 32 ]
Exposure to laser radiation at 564.1 nm or gamma rays will also induce rapid decomposition. [ 18 ] [ 24 ]
Plutonium hexafluoride plays a role in the enrichment of plutonium, in particular for the isolation of the fissile isotope 239 Pu from irradiated uranium. For use in nuclear weaponry , the 241 Pu present must be removed for two reasons:
The separation between plutonium and the americium contained proceeds through reaction with dioxygen difluoride . Aged PuF 4 is fluorinated at room temperature to gaseous PuF 6 , which is separated and reduced back to PuF 4 , whereas any AmF 4 present does not undergo the same conversion. The product thus contains only very small amounts of americium, which becomes concentrated in the unreacted solid. [ 33 ]
Separation of the hexafluorides of uranium and plutonium is also important in the reprocessing of nuclear waste . [ 34 ] [ 35 ] [ 36 ] From a molten salt mixture containing both elements, uranium can largely be removed by fluorination to UF 6 , which is stable at higher temperatures, with only small amounts of plutonium escaping as PuF 6 . [ 10 ]
Shortly after plutonium's discovery and isolation in 1940, chemists began to postulate the existence of plutonium hexafluoride. Early experiments, which sought to mimic methods for the construction of uranium hexafluoride , had conflicting results; and definitive proof only appeared in 1942. [ 37 ] The Second World War then interrupted the publication of further research. [ 22 ]
Initial experiments, undertaken with extremely small quantities of plutonium, showed that a volatile plutonium compound would develop in a stream of fluorine gas only at temperatures exceeding 700 °C. Subsequent experiments showed that plutonium on a copper plate volatilized in a 500 °C fluorine stream, and that the reaction rate decreased with atomic number in the series uranium > neptunium > plutonium. [ 38 ] Brown and Hill, using milligram-scale samples of plutonium, completed in 1942 a distillation experiment with uranium hexafluoride, suggesting that higher fluorides of plutonium ought to be unstable, and would decompose to plutonium tetrafluoride at room temperature . Nevertheless, the vapor pressure of the compound appeared to correspond to that of uranium hexafluoride. [ 39 ] Davidson, Katz, and Orlemann showed in 1943 that plutonium in a nickel vessel volatilized under a fluorine atmosphere, and that the reaction product precipitated on a platinum surface. [ 40 ]
Fisher, Vaslow, and Tevebaugh conjectured that the higher fluorides exhibited a positive enthalpy of formation , that their formation would be endothermic , and that they would consequently only be stabilized at high temperatures. [ 41 ]
In 1944, Alan E. Florin [ de ] prepared a volatile compound of plutonium believed to be the elusive plutonium hexafluoride, but the product decomposed prior to identification. The fluid substance would collect onto cooled glass and liquefy, but then the fluorine atoms would react with the glass. [ 42 ]
By comparison between uranium and plutonium compounds, Brewer, Bromley, Gilles, and Lofgren computed the thermodynamic characteristics of plutonium hexafluoride. [ 43 ]
In 1950, Florin's efforts finally yielded the synthesis, [ 3 ] [ 44 ] and improved thermodynamic data and a new apparatus for its production soon followed. [ 2 ] Around the same time, British workers also developed a method for the production of PuF 6 . [ 4 ] [ 7 ] | https://en.wikipedia.org/wiki/PuF6 |
Plutonium(IV) oxide , or plutonia , is a chemical compound with the formula Pu O 2 . This high melting-point solid is a principal compound of plutonium . It can vary in color from yellow to olive green, depending on the particle size, temperature and method of production. [ 2 ]
PuO 2 crystallizes in the fluorite motif, with the Pu 4+ centers organized in a face-centered cubic array and oxide ions occupying tetrahedral holes. [ 3 ] PuO 2 owes its utility as a nuclear fuel to the fact that vacancies in the octahedral holes allows room for fission products. In nuclear fission, one atom of plutonium splits into two. The vacancy of the octahedral holes provides room for the new product and allows the PuO 2 monolith to retain its structural integrity. [ citation needed ]
At high temperatures PuO 2 tends to lose oxygen, becoming sub-stoichiometric PuO 2−x , with the introduction of lower valence Pu 3+ . This continues into the molten liquid state where the local Pu-O coordination number drops to predominantly 6-fold, compared to 8-fold in the stoichiometric fluorite structure. [ 4 ]
Plutonium dioxide is a stable ceramic material with an extremely low solubility in water and with a high melting point (2,744 °C). The melting point was revised upwards in 2011 by several hundred degrees, based on evidence from rapid laser melting studies which avoid contamination by any container material. [ 5 ]
As with all plutonium compounds, it is subject to control under the Nuclear Non-Proliferation Treaty .
Plutonium spontaneously oxidizes to PuO 2 in an atmosphere of oxygen. Plutonium dioxide is mainly produced by calcination of plutonium(IV) oxalate, Pu(C 2 O 4 ) 2 ·6H 2 O, at 300 °C. Plutonium oxalate is obtained during the reprocessing of nuclear fuel as plutonium is dissolved in a solution of nitric and hydrofluoric acid . [ 6 ] Plutonium dioxide can also be recovered from molten-salt breeder reactors by adding sodium carbonate to the fuel salt after any remaining uranium is removed from the salt as its hexafluoride.
PuO 2 , along with UO 2 , is used in MOX fuels for nuclear reactors . Plutonium-238 dioxide is used as fuel for several deep-space spacecraft such as the Cassini , Voyager , Galileo and New Horizons probes as well as in the Curiosity and Perseverance rovers on Mars . The isotope decays by emitting α-particles, which then generate heat (see radioisotope thermoelectric generator ). There have been concerns that an accidental re-entry into Earth's atmosphere from orbit might lead to the break-up and/or burn-up of a spacecraft, resulting in the dispersal of the plutonium, either over a large tract of the planetary surface or within the upper atmosphere. However, although at least two spacecraft carrying PuO 2 RTGs have reentered the Earth's atmosphere and burned up ( Nimbus B-1 in May 1968 and the Apollo 13 Lunar Module in April 1970), [ 7 ] [ 8 ] the RTGs from both spacecraft survived reentry and impact intact, and no environmental contamination was noted in either instance; in fact, the Nimbus RTG was recovered intact from the Pacific Ocean seafloor and launched aboard Nimbus 3 one year later. In any case, RTGs since the mid-1960s have been designed to remain intact in the event of reentry and impact, following the 1964 launch failure of Transit 5-BN-3 (the early-generation plutonium RTG on board disintegrated upon reentry and dispersed radioactive material into the atmosphere north of Madagascar , prompting a redesign of all U.S. RTGs then in use or under development). [ 9 ]
Physicist Peter Zimmerman, following up a suggestion by Ted Taylor , calculated that a low-yield (1- kiloton ) nuclear weapon could be made relatively easily from plutonium dioxide. [ 10 ] Such a bomb would require a considerably larger critical mass than one made from elemental plutonium (almost three times larger, even with the dioxide at maximum crystal density; if the dioxide were in powder form, as is often encountered, the critical mass would be much higher still), due both to the lower density of plutonium in dioxide compared with elemental plutonium and to the added inert mass of the oxygen contained. [ 11 ]
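A back-of-envelope check of that factor, using approximate handbook densities (all numbers here are illustrative and are not from the cited calculation): for a bare sphere the critical mass scales roughly as the inverse square of the fissile density. Crystalline PuO2 (≈11.5 g/cm³) has a plutonium mass fraction of 239/271 ≈ 0.88, giving an effective plutonium density of ≈10.1 g/cm³ against ≈15.9 g/cm³ for δ-phase metal, so the required plutonium mass grows by about (15.9/10.1)² ≈ 2.5, and the total oxide mass by 2.5/0.88 ≈ 2.8 — in line with the "almost three times larger" figure.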
The behavior of plutonium dioxide in the body varies with the way in which it is taken in. When ingested, most of it will be eliminated from the body quite rapidly in body wastes, [ 12 ] but a small part will dissolve into ions in the acidic gastric juice, cross into the bloodstream, and deposit itself in other chemical forms in organs such as the phagocytic cells of the lung, bone marrow and liver. [ 13 ]
In particulate form, plutonium dioxide at a particle size less than 10 μm [ 14 ] is radiotoxic if inhaled due to its strong alpha-emission . [ 15 ] | https://en.wikipedia.org/wiki/PuO2 |
Plutonium silicide is a binary inorganic compound of plutonium and silicon with the chemical formula PuSi. [ 2 ] [ 3 ] [ 4 ] The compound forms gray crystals.
Reaction of plutonium dioxide and silicon carbide :
Reaction of plutonium trifluoride with silicon:
Plutonium silicide forms gray crystals of orthorhombic crystal system , space group Pnma , cell parameters: a = 0.7933 nm, b = 0.3847 nm, c = 0.5727 nm, Z = 4, TiSi type structure.
At a temperature of 72 K, plutonium silicide undergoes a ferromagnetic transition. [ 5 ] | https://en.wikipedia.org/wiki/PuSi |
PubChem is a database of chemical molecules and their activities against biological assays . The system is maintained by the National Center for Biotechnology Information (NCBI), a component of the National Library of Medicine , which is part of the United States National Institutes of Health (NIH). PubChem can be accessed for free through a web user interface . Millions of compound structures and descriptive datasets can be freely downloaded via FTP . PubChem contains multiple substance descriptions and small molecules with fewer than 100 atoms and 1,000 bonds. More than 80 database vendors contribute to the growing PubChem database. [ 2 ]
PubChem was released in 2004 as a component of the Molecular Libraries Program (MLP) of the NIH. As of November 2015, PubChem contains more than 150 million depositor-provided substance descriptions, 60 million unique chemical structures, and 225 million biological activity test results (from over 1 million assay experiments performed on more than 2 million small-molecules covering almost 10,000 unique protein target sequences that correspond to more than 5,000 genes). It also contains RNA interference (RNAi) screening assays that target over 15,000 genes. [ 3 ]
As of August 2018, PubChem contains 247.3 million substance descriptions, 96.5 million unique chemical structures, contributed by 629 data sources from 40 countries. It also contains 237 million bioactivity test results from 1.25 million biological assays, covering >10,000 target protein sequences. [ 4 ]
As of 2020, with data integration from over 100 new sources, PubChem contains more than 293 million depositor-provided substance descriptions, 111 million unique chemical structures, and 271 million bioactivity data points from 1.2 million biological assays experiments. [ 5 ]
PubChem consists of three dynamically growing primary databases: Compound, Substance, and BioAssay. As of 5 November 2020 (the number of BioAssays is unchanged):
Searching the databases is possible for a broad range of properties including chemical structure, name fragments, chemical formula , molecular weight , XLogP , and hydrogen bond donor and acceptor count.
PubChem contains its own online molecule editor with SMILES /SMARTS and InChI support that allows the import and export of all common chemical file formats to search for structures and fragments.
Each hit provides information about synonyms, chemical properties, chemical structure including SMILES and InChI strings, bioactivity, and links to structurally related compounds and other NCBI databases like PubMed .
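The same per-compound information can also be retrieved programmatically through PubChem's PUG REST interface (a sketch; the compound name and the property list are illustrative):

    import requests

    url = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/aspirin/"
           "property/MolecularWeight,XLogP,HBondDonorCount,HBondAcceptorCount,"
           "CanonicalSMILES,InChI/JSON")
    props = requests.get(url, timeout=30).json()["PropertyTable"]["Properties"][0]
    print(props["CanonicalSMILES"], props["MolecularWeight"], props["XLogP"])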
In the text search form the database fields can be searched by adding the field name in square brackets to the search term. A numeric range is represented by two numbers separated by a colon. The search terms and field names are case-insensitive. Parentheses and the logical operators AND, OR, and NOT can be used. AND is assumed if no operator is used.
Example ( Lipinski's Rule of Five ): 0:500[mw] 0:5[hbdc] 0:10[hbac] -5:5[logp] | https://en.wikipedia.org/wiki/PubChem |
PubGene AS is a bioinformatics company located in Oslo , Norway and is the daughter company of PubGene Inc.
In 2001, PubGene founders demonstrated one of the first [ 1 ] applications of text mining to research in biomedicine (i.e., biomedical text mining ). They went on to create the PubGene public search engine, [ 2 ] exemplifying the approach they pioneered by presenting biomedical terms as graphical networks based on their co-occurrence in MEDLINE texts. The PubGene search engine has since been discontinued and incorporated into a commercial product. [ 2 ] Co-occurrence networks provide a visual overview of possible relationships between terms and facilitate medical literature retrieval for relevant sets of articles implied by the network display. Commercial applications of the technology are available. [ 3 ]
Original development of PubGene technologies was undertaken in collaboration between the Norwegian Cancer Hospital ( Radiumhospitalet ) and the Norwegian University of Science and Technology . The work is supported by the Research Council of Norway and commercialization assisted by Innovation Norway .
PubGene provides CoreMine Medical as a service open to the public. | https://en.wikipedia.org/wiki/PubGene |
Public Interest Research Group in Michigan ( PIRGIM ) is a non-profit organization that is part of the state PIRG organizations.
PIRGIM has a history of working on a variety of issues, such as cleaning Michigan's waterways, [ 1 ] toy safety, [ 2 ] and chemical safety. [ 3 ]
The PIRGs emerged in the early 1970s on U.S. college campuses. The PIRG model was proposed in the book Action for a Change by Ralph Nader and Donald Ross . [ 4 ] Among other early accomplishments, the PIRGs were responsible for much of the container deposit legislation in the United States , also known as "bottle bills." [ 5 ] [ 6 ]
Public Law 114-216 is a federal law of the United States that regulates GMO food labeling. It was enacted on July 29, 2016 when President Obama signed then Senate Bill 764 (S.764) and is codified at 7 U.S.C. ch. 38, subch. V and VI . While the law is officially termed A bill to reauthorize and amend the National Sea Grant College Program Act, and for other purposes , it evolved over time into "the legislative vehicle for a measure concerning bioengineered food disclosure", [ 1 ] which opponents have called the "DARK Act", an acronym for "Deny Americans the Right to Know Act". [ 2 ] [ 3 ] [ 4 ]
The bill was crafted by Senator Pat Roberts (R-KS) and Debbie Stabenow (D-MI). [ 5 ] The "GMO labeling bill" [ 6 ] was introduced on 17 March 2015 by its sponsor, Sen. Roger F. Wicker (R-MS), [ 1 ] cosponsored by Sen. Dan Sullivan (R-AK), and passed Senate and House in June 2016. The law overturned relevant state laws such as Vermont's GMO labeling law that had called for strict and transparent GMO food labeling in Vermont after July 1, 2016. [ 7 ]
Labeling of GMO food is mandated in at least 64 countries, including most European countries, China, Russia, Japan, Brazil, South Africa, and Australia. [ 2 ]
Senate resolution S.764 as originally introduced on 17 March 2015 contained no language to regulate bioengineered foods. [ 8 ] A bill entitled " Safe and Accurate Food Labeling Act of 2015 " was received in the Senate on 24 July 2015 and referred to the Committee on Agriculture, Nutrition, and Forestry (ANF); the amendments now seen in Public Law 114-216 were made on 7 July 2016 [ 9 ] to "establish a national bioengineered food disclosure standard", whereby bioengineered food (commonly referred to as genetically modified organism or GMO food) is defined as "food that has been genetically engineered in a way that could not be obtained through conventional breeding or found in nature". [ 1 ]
Mitch McConnell introduced on 29 June 2016 the "Roberts GMO bill", named after Pat Roberts , who was then Chair of the Committee on ANF, [ 10 ] and Public Law 114-216 thereafter went through no fewer than 40 amendments over the course of one week. [ 11 ]
Public Law 114-216 charges the U. S. Department of Agriculture (USDA) to establish a national mandatory bioengineered food disclosure standard within two years with certain provisions: [ 1 ]
While the FDA is responsible for protecting and promoting public health through the control and supervision of food safety , the agency holds the position that "the use of genetic engineering in the production of food does not present any safety concerns for such foods as a class", and, as there is "an absence of reliable data indicating safety concerns" with GMO foods as a class, voiced no opposition for USDA having the responsibility of regulating GMO food labeling. [ 12 ] The agency commented that the bill has language that may allow GMO material to escape labeling: the bill requires labeling if the food contains "genetic material", but that may exempt secondary products like oil, starches, sweeteners, or proteins derived from GMO substrates. [ 12 ] The agency questioned the specificity of the definition of bioengineered food when it would not apply to GMOs that could also be achieved by "conventional breeding". [ 12 ] The FDA has also voiced some concerns about food information being presented in electronic codes. [ 12 ]
As the details of the law need to be worked out, USDA established a working group by September 2016 "to develop a timeline for rulemaking and to ensure an open and transparent process for effectively establishing this new program, which will increase consumer confidence and understanding of the foods they buy, and avoid uncertainty for food companies and farmers". [ 13 ]
Public Law 114-216 was passed after previous attempts to introduce a national GMO labeling bill had failed. It was fast-tracked without debate or committee review. [ 5 ] [ 14 ] The original bill S. 764 - "A bill to reauthorize and amend the National Sea Grant College Program Act, and for other purposes" - had nothing to do with food and stalled after having passed the Senate. Hollowed out of its content, it was replaced with a bill to defund Planned Parenthood . [ 15 ] That bill was in turn replaced with one outlawing state-level GMO labeling and setting up a voluntary GMO labeling standard. [ 15 ] When this bill failed, the S. 764 husk was used to rush through the present bill, just in time before the Vermont GMO food labeling requirement would have been activated on July 1. [ 14 ]
Previous attempts to enact a national GMO labeling law included H. R. 1599 in 2015 – the Safe and Accurate Food Labeling Act of 2015 . It was a proposed legislative amendment to the United States Federal Food, Drug and Cosmetic Act . [ 16 ] The act passed the House of Representatives on July 23, 2015 but failed in the Senate. An earlier version of the bill had been originally introduced as H. R. 4432 in 2014; [ 17 ] it attempted to regulate food labeling specifically in view of the introduction of GMO food in the United States.
Katie Hill, White House spokesperson, lauded the bill, "(t)his measure will provide new opportunities for consumers to have access to information about their food". [ 6 ] Proponents have argued for a comprehensive labeling law that applies nationwide instead of a "patchwork" approach state-by-state. [ 18 ] They also feel that a proposed bill will enhance agricultural biotechnology. [ 18 ]
Proponents argue that approved GMO food has undergone extensive testing and is safe, so that labeling is essentially unnecessary. Labeling may discourage consumers from using GMO products even where such a choice would be irrational: many consumers express fears that have not been substantiated by science. [ 19 ]
The bill is backed by Grocery Manufacturers Association , Monsanto , and other large food and beverage corporations. [ 3 ]
While GMOs are present in 75-80% of food Americans consume and have been termed "substantially equivalent" to the corresponding non-GMO foods by the FDA, consumers believe that they have a right to know what is in their food. [ 6 ] Thus, a 2013 poll by the New York Times indicated that ninety-three percent of American consumers would like to know if their food has been genetically modified. [ 19 ]
The primary objection to the bill is that manufacturers have the option to use electronic codes in lieu of clear readable labels placed directly on the food package, which they argue hides the information. [ 13 ] The bill allows the use of codes such as the QR code as a form of labeling, and opponents see this as impractical as well as discriminatory. They argue that, for instance, low-income families may not be able to access the information. [ 6 ] Critics also oppose the fact that no fines or penalties are included when companies do not follow the law. [ 5 ] Additionally, they are concerned that the bill's stipulation that labeling will be required when foods contain genetic material from genetic modification may exempt many of the highly processed foods and ingredients that are usually derived from genetically-modified crops (such as many seed oils, high-fructose corn syrup , and some refined starches and sweeteners), because such foods are often sufficiently refined that no genetic material remains in them. [ 20 ] Senator Stabenow dismissed this interpretation when it was advanced by the FDA. [ 21 ]
Because clear and accessible labeling is not mandated, some opponents have called this bill and its predecessors the "DARK act" as in "Deny Americans the Right to Know" or "Keep Americans in the DARK". [ 2 ] [ 3 ] [ 4 ] | https://en.wikipedia.org/wiki/Public_Law_114-216 |
A public address system (or PA system ) is an electronic system comprising microphones, amplifiers, loudspeakers, and related equipment. It increases the apparent volume (loudness) of a human voice, musical instrument, or other acoustic sound source or recorded sound or music. PA systems are used in any public venue that requires that an announcer, performer, etc. be sufficiently audible at a distance or over a large area. Typical applications include sports stadiums, public transportation vehicles and facilities, and live or recorded music venues and events. A PA system may include multiple microphones or other sound sources, a mixing console to combine and modify multiple sources, and multiple amplifiers and loudspeakers for louder volume or wider distribution.
Simple PA systems are often used in small venues such as school auditoriums, churches, and small bars. PA systems with many speakers are widely used to make announcements in public, institutional and commercial buildings and locations—such as schools, stadiums, and passenger vessels and aircraft. Intercom systems, installed in many buildings, have both speakers throughout a building, and microphones in many rooms so occupants can respond to announcements. PA and intercom systems are commonly used as part of an emergency communication system .
The term sound reinforcement system generally means a PA system used specifically for live music or other performances. [ 1 ] In Britain , PA systems are often known as tannoys after a company of that name that supplied many of the systems used there. [ 2 ]
From the Ancient Greek era to the nineteenth century, before the invention of electric loudspeakers and amplifiers, megaphone cones were used by people speaking to a large audience, to make their voice project more to a large space or group. Megaphones are typically portable, usually hand-held, cone-shaped acoustic horns used to amplify a person's voice or other sounds and direct it towards a given direction. The sound is introduced into the narrow end of the megaphone, by holding it up to the face and speaking into it. The sound projects out the wide end of the cone. The user can direct the sound by pointing the wide end of the cone in a specific direction. In the 2020s, cheerleading is one of the few fields where a nineteenth century-style cone is still used to project the voice. The device is also called "speaking-trumpet", "bullhorn" or "loud hailer".
In 1910, the Automatic Electric Company of Chicago, Illinois, already a major supplier of automatic telephone switchboards, announced it had developed a loudspeaker, which it marketed under the name of the Automatic Enunciator . Company president Joseph Harris foresaw multiple potential uses, and the original publicity stressed the value of the invention as a hotel public address system, allowing people in all public rooms to hear announcements. [ 3 ] In June 1910, an initial "semi-public" demonstration was given to newspaper reporters at the Automatic Electric Company building, where a speaker's voice was transmitted to loudspeakers placed in a dozen locations "all over the building". [ 4 ]
A short time later, the Automatic Enunciator Company was formed in Chicago in order to market the new device, and a series of promotional installations followed. [ 5 ] In August 1912 a large outdoor installation was made at a water carnival held in Chicago by the Associated Yacht and Power Boat Clubs of America. Seventy-two loudspeakers were strung in pairs at forty-foot (12 meter) intervals along the docks, spanning a total of one-half mile (800 meters) of grandstands. The system was used to announce race reports and descriptions, carry a series of speeches about "The Chicago Plan", and provide music between races. [ 6 ]
In 1913, multiple units were installed throughout the Comiskey Park baseball stadium in Chicago, both to make announcements and to provide musical interludes, [ 7 ] with Charles A. Comiskey quoted as saying: "The day of the megaphone man has passed at our park." The company also set up an experimental service, called the Musolaphone , that was used to transmit news and entertainment programming to home and business subscribers in south-side Chicago, [ 8 ] but this effort was short-lived. The company continued to market the enunciators for making announcements in establishments such as hospitals, department stores, factories, and railroad stations, although the Automatic Enunciator Company was dissolved in 1926. [ 5 ]
Peter Jensen and Edwin Pridham of Magnavox began experimenting with sound reproduction in the 1910s. Working from a laboratory in Napa, California , they filed the first patent for a moving coil loudspeaker in 1911. [ 9 ] Four years later, in 1915, they built a dynamic loudspeaker with a 1-inch (2.5 cm) voice coil , a 3-inch (7.6 cm) corrugated diaphragm and a horn measuring 34 inches (86 cm) with a 22-inch (56 cm) aperture. The electromagnet created a flux field of approximately 11,000 Gauss . [ 9 ]
Their first experiment used a carbon microphone . When the 12 V battery was connected to the system, they experienced one of the first examples of acoustic feedback , [ 9 ] a typically unwanted effect often characterized by high-pitched sounds. They then placed the loudspeaker on the laboratory's roof, and it was claimed that the amplified human voice could be heard 1 mile (1.6 km) away. [ 9 ] Jensen and Pridham refined the system and connected a phonograph to the loudspeaker so it could broadcast recorded music. [ 10 ] They did this on a number of occasions, including once at the Napa laboratory, at the Panama–Pacific International Exposition , [ 9 ] and on December 24, 1915, at San Francisco City Hall alongside Mayor James Rolph . [ 10 ] This demonstration was the official presentation of the working system, and approximately 100,000 people gathered to hear Christmas music and speeches "with absolute distinctness". [ 9 ]
The first outside broadcast was made one week later, again supervised by Jensen and Pridham. [ 1 ] [ 11 ] On December 30, when Governor of California Hiram Johnson was too ill to give a speech in person, loudspeakers were installed at the Civic Auditorium in San Francisco , connected to Johnson's house some miles away by cable and a microphone, from where he delivered his speech. [ 9 ] Jensen oversaw the governor using the microphone while Pridham operated the loudspeaker.
The following year, Jensen and Pridham applied for a patent for what they called their "Sound Magnifying Phonograph". Over the next two years they developed their first valve amplifier. In 1919 this was standardized as a 3-stage 25 watt amplifier. [ 9 ]
This system was used by former US president William Howard Taft at a speech in Grant Park , Chicago , and first used by a sitting president when Woodrow Wilson addressed 50,000 people in San Diego, California . [ 11 ] [ 12 ] Wilson's speech was part of his nationwide tour to promote the establishment of the League of Nations . [ 13 ] It was held on September 9, 1919, at City Stadium . As with the San Francisco installation, Jensen supervised the microphone and Pridham the loudspeakers. Wilson spoke into two large horns mounted on his platform, which channelled his voice into the microphone. [ 13 ] Similar systems were used in the following years by Warren G. Harding and Franklin D. Roosevelt . [ 9 ]
By the early 1920s, Marconi had established a department dedicated to public address and began producing loudspeakers and amplifiers to match a growing demand. [ 9 ] In 1925, George V used such a system at the British Empire Exhibition , addressing 90,000 via six long-range loudspeakers. [ 9 ] This public use of loudspeakers brought attention to the possibilities of such technology. The 1925 Royal Air Force Pageant at Hendon Aerodrome used a Marconi system to allow the announcer to address the crowds, as well as amplify the band. [ 9 ] In 1929, the Schneider Trophy race at Calshot Spit used a public address system that had 200 horns, weighing a total of 20 tons . [ 9 ]
Engineers invented the first loud, powerful amplifier and speaker systems for public address and movie theater sound. These early systems were very large and very expensive, and so they could not be used by most touring musicians. After 1927, smaller, portable AC mains-powered PA systems that could be plugged into a regular wall socket "quickly became popular with musicians"; indeed, "... Leon McAuliffe (with Bob Wills ) still used a carbon mic and a portable PA as late as 1935." During the late 1920s to mid-1930s, small portable PA systems and guitar combo amplifiers were fairly similar. These early amps had a "single volume control and one or two input jacks, field coil speakers" and thin wooden cabinets; remarkably, they had no tone controls or even an on-off switch. [ 14 ] Portable PA systems that could be plugged into wall sockets appeared in the early 1930s, when the introduction of electrolytic capacitors and rectifier tubes enabled economical built-in power supplies that could plug into wall outlets. Previously, amplifiers had required heavy multiple battery packs. [ 15 ]
In the 1960s, an electric-amplified version of the megaphone, which used a loudspeaker, amplifier and a folded horn, largely replaced the basic cone-style megaphone. Small handheld, battery-powered electric megaphones are used by fire and rescue personnel, police, protesters, and people addressing outdoor audiences. With many small handheld models, the microphone is mounted at the back end of the device, and the user holds the megaphone in front of their mouth to use it, pressing a trigger to turn on the amplifier and loudspeaker. Larger electric megaphones may have a microphone attached by a cable, which enables a person to speak without having their face obscured by the flared horn.
The simplest, smallest PA systems consist of a microphone, an amplifier, and one or more loudspeakers. PA systems of this type, often providing 50 to 200 watts of power, are often used in small venues such as school auditoriums, churches, and coffeehouse stages. Small PA systems may extend to an entire building, such as a restaurant, store, elementary school or office building. A sound source such as a compact disc player or radio may be connected to a PA system so that music can be played through the system. Smaller, battery-powered 12 volt systems may be installed in vehicles such as tour buses or school buses, so that the tour guide and/or driver can speak to all the passengers. Portable systems may be battery powered and/or powered by plugging the system into an electric wall socket. These may also be used by people addressing smaller groups, such as at information sessions or team meetings. Battery-powered systems can be used by guides who are speaking to clients on walking tours.
Public address systems consist of input sources (microphones, sound playback devices, etc.), amplifiers , control and monitoring equipment (e.g., LED indicator lights, VU meters, headphones), and loudspeakers . Usual inputs include microphones for speech or singing, direct inputs from musical instruments, and a recorded sound playback device. In non-performance applications, there may be a system that operators or automated equipment use to select from a number of standard prerecorded messages. These input sources feed into preamplifiers and signal routers that direct the audio signal to selected zones of a facility (e.g., only to one section of a school). The preamplified signals then pass into the amplifiers. Depending on local practices, these amplifiers usually amplify the audio signals to 50 V, 70 V, or 100 V speaker line level. [ 16 ] Control equipment monitors the amplifiers and speaker lines for faults before the signal reaches the loudspeakers. This control equipment is also used to separate zones in a PA system. The loudspeakers convert electrical signals into sound.
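As an illustration of the constant-voltage distribution just described, the following sketch computes the impedance a speaker transformer tap must present on a 70 V line to draw a given power, using P = V² / Z. The tap wattages and zone layout are arbitrary example values, not drawn from any particular standard or installation.

```python
# Illustrative sketch: loads on a 70 V constant-voltage PA speaker line.
# (50 V and 100 V lines work the same way, with a different LINE_VOLTAGE.)
LINE_VOLTAGE = 70.0  # volts RMS on the distribution line

def tap_impedance(tap_watts: float, line_volts: float = LINE_VOLTAGE) -> float:
    """Impedance a transformer tap presents so its speaker draws tap_watts."""
    return line_volts ** 2 / tap_watts

# A hypothetical paging zone with several ceiling speakers on various taps:
taps_watts = [10, 10, 5, 2.5]

print(f"Total amplifier load: {sum(taps_watts)} W")
for w in taps_watts:
    print(f"{w:>5} W tap -> {tap_impedance(w):7.1f} ohm")
```

Because every speaker bridges the same line, the total amplifier load is simply the sum of the tap powers, which is what makes these systems easy to extend across a building.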
Some PA systems have speakers that cover more than one building, extending to an entire campus of a college, office or industrial site, or an entire outdoor complex (e.g., an athletic stadium). A large PA system may also be used as an alert system during an emergency.
Table: PA systems by size and subwoofer approach. [ 17 ]
Some private branch exchange (PBX) telephone systems use a paging facility that acts as a liaison between the telephone and a PA amplifier. In other systems, paging equipment is not built into the telephone system. Instead the system includes a separate paging controller connected to a trunk port of the telephone system. The paging controller is accessed as either a designated directory number or central office line. In many modern systems, the paging function is integrated into the telephone system, so the system can send announcements to the phone speakers.
Many retailers and offices choose to use the telephone system as the sole access point for the paging system, because the features are integrated. Many schools and other larger institutions are no longer using the large, bulky microphone PA systems and have switched to telephone system paging, as it can be accessed from many different points in the school.
PA over IP are PA paging and intercom systems that use an Internet Protocol (IP) network, instead of a central amplifier, to distribute the audio signal to paging locations across a building or campus, or anywhere else in the reach of the IP network, including the Internet. Network-attached amplifiers and intercom units are used to provide the communication function. At the transmission end, a computer application transmits a digital audio stream via the local area network, using audio from the computer's sound card inputs or from stored audio recordings. At the receiving end, either specialized intercom modules (sometimes known as IP speakers ) receive these network transmissions and reproduce the analog audio signal. These are small, specialized network appliances addressable by an IP address, just like any other computer on the network. [ 18 ]
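The following is a minimal sketch of the PA-over-IP idea: a paging source streaming raw audio frames over the LAN to IP speaker endpoints. The multicast group, port, file name, and audio parameters are all hypothetical; real products use established streaming protocols (such as RTP) and announcement signalling rather than bare UDP like this.

```python
# Hypothetical PA-over-IP sender: pace 20 ms PCM frames onto a multicast group.
import socket
import time

GROUP, PORT = "239.192.0.10", 5004   # assumed multicast group and port
RATE, SAMPLE_BYTES = 8000, 2         # 8 kHz, 16-bit mono PCM (assumed format)
FRAME_MS = 20
FRAME_BYTES = RATE * SAMPLE_BYTES * FRAME_MS // 1000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

with open("announcement.raw", "rb") as audio:    # pre-recorded raw PCM clip
    while chunk := audio.read(FRAME_BYTES):
        sock.sendto(chunk, (GROUP, PORT))
        time.sleep(FRAME_MS / 1000)              # crude real-time pacing
```

Each IP speaker would join the multicast group, buffer the frames, and convert them back to analog audio locally, which is what removes the need for a central amplifier.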
A 2-way radio wireless PA receiver and horn speaker is designed to facilitate the direct delivery of voice messages from a base station or mobile 2-way radio to a PA speaker located at distances that can measure in miles. The receiver and PA speaker combination is suited to situations where traditional hard-wired PA installations are impractical, prohibitively expensive, or temporary. These receivers operate in business-band UHF and VHF 2-way licensed frequency bands, or on the MURS unlicensed frequencies. Installation requires setting the desired frequency on both the radio and the PA system, and powering the wireless PA receivers. [ 19 ]
Wireless mobile telephony (WMT) PA systems are PA paging and intercom systems that use any form of wireless mobile telephony system, such as GSM networks, instead of a centralized amplifier to distribute the audio signal to paging locations across a building, campus, or other location. The GSM mobile networks are used to provide the communication function. At the transmission end, a PSTN telephone, mobile phone, VoIP phone or any other communication device that can make audio calls to a GSM-based mobile SIM card can be used. At the receiving end, a GSM transceiver receives these network transmissions and reproduces the analogue audio signal via a power amplifier and speaker. This was pioneered by Stephen Robert Pearson of Lancashire, England, who was granted patents for the systems, which also incorporate control functionality. Using a WMT (GSM) network means that live announcements can be made to anywhere in the world where there is WMT connectivity. The patents cover all forms of WMT, i.e., 2G, 3G, 4G and subsequent generations. A UK company called Remvox Ltd (Remote Voice experience) has been appointed under license to develop and manufacture products based on the technology.
A Long-Line Public Address (LLPA) system is any public address system with a distributed architecture, normally across a wide geographic area. Systems of this type are commonly found in the rail, light rail, and metro industries, and let announcements be triggered from one or several locations to the rest of the network over low bandwidth legacy copper, normally PSTN lines using DSL modems , or media such as optical fiber , or GSM-R , or IP-based networks. [ 20 ]
Rail systems typically have an interface with a passenger information system (PIS) server at each station. These are linked to train describers, which report the location of rolling stock on the network based on data from trackside signaling equipment. The PIS invokes a stored message to play from a local or remote digital voice announcement system, or a series of message fragments to assemble in the correct order, for example: " / the / 23.30 / Great_Western_Railway / Night_Riviera_sleeper_service / from / London_Paddington / to / Penzance / .... / will depart from platform / one / this train is formed of / 12_carriages /." Messages are routed via an IP network and are played on local amplification equipment. Taken together, the PA, routing, DVA, passenger displays and PIS interface are referred to as the customer information system (CIS) , a term often used interchangeably with passenger information system . [ citation needed ]
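A sketch of the fragment-assembly step described above: the announcement is built by playing pre-recorded clips in order. The fragment identifiers follow the example in the text; the mapping from identifiers to audio files is hypothetical.

```python
# Hypothetical digital voice announcement (DVA) fragment assembly.
fragments = [
    "the", "23.30", "Great_Western_Railway", "Night_Riviera_sleeper_service",
    "from", "London_Paddington", "to", "Penzance",
    "will_depart_from_platform", "one",
    "this_train_is_formed_of", "12_carriages",
]

def assemble(fragment_ids, clip_table):
    """Return the recorded clips to play, in announcement order."""
    return [clip_table[f] for f in fragment_ids]

# clip_table would map each fragment id to a stored recording, e.g.
# {"the": "dva/the.wav", "23.30": "dva/2330.wav", ...}; a missing fragment
# might fall back to a generic announcement rather than fail silently.
```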
Small clubs, bars and coffeehouses use a fairly simple set-up, with front of house speaker cabinets (and subwoofers, in some cases) aimed at the audience, and monitor speaker cabinets aimed back at the performers so they can hear their vocals and instruments. In many cases, front of house speakers are elevated, either by mounting them on poles or by "flying" them from anchors in the ceiling, to prevent the sound from being absorbed by the first few rows of audience members. The subwoofers do not need to be elevated, because deep bass is omnidirectional. In the smallest coffeehouses and bars, the audio mixer may be onstage so that the performers can mix their own sound levels. [ 21 ] In larger bars, the audio mixer may be located in or behind the audience seating area, so that an audio engineer can listen to the mix and adjust the sound levels. The adjustments to the monitor speaker mix may be made by a single audio engineer using the main mixing board, or by a second audio engineer who uses a separate mixing board.
For popular music concerts, a more powerful and more complicated PA system is used to provide live sound reproduction . In a concert setting, there are typically two complete PA systems: the "main" system and the "monitor" system. Each system consists of a mixing board, sound processing equipment, amplifiers, and speakers. The microphones that are used to pick up vocals and amplifier sounds are routed through both the main and monitor systems. Audio engineers can set different sound levels for each microphone on the main and monitor systems. For example, a backup vocalist whose voice has a low sound level in the main mix may ask for a much louder sound level through their monitor speaker, so they can hear their singing.
At a concert using live sound reproduction, sound engineers and technicians control the mixing boards for the "main" and "monitor" systems, adjusting tone, levels, and overall volume.
Touring productions travel with relocatable large line-array PA systems, sometimes rented from an audio equipment hire company. The sound equipment moves from venue to venue along with various other equipment such as lighting and projection.
All PA systems have the potential for audio feedback , which occurs when a microphone picks up sound from the speakers, which is re-amplified and sent through the speakers again. It often sounds like a loud high-pitched squeal or screech, and can occur when the volume of the system is turned up too high. Feedback only occurs when the loop gain of the feedback loop is greater than one, so it can always be stopped by reducing the volume sufficiently.
Sound engineers take several steps to maximize gain before feedback , including keeping microphones at a distance from speakers, ensuring that directional microphones are not pointed towards speakers, keeping the onstage volume levels down, and lowering gain levels at frequencies where the feedback is occurring, using a graphic equalizer , a parametric equalizer , or a notch filter . Some 2010s-era mixing consoles and effects units have automatic feedback preventing circuits.
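A minimal sketch of the notch-filter technique mentioned above, using SciPy's standard iirnotch filter design; the sample rate, detected feedback frequency, and Q factor are assumed example values.

```python
# Notch out an assumed 2.5 kHz feedback frequency from an audio buffer.
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 48000           # sample rate in Hz (assumed)
f_feedback = 2500.0  # feedback frequency in Hz, as found during a ring-out
Q = 30.0             # quality factor: higher Q gives a narrower notch

b, a = iirnotch(f_feedback, Q, fs=fs)

# Synthetic one-second test signal: the feedback tone plus some noise.
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * f_feedback * t) + 0.1 * np.random.randn(fs)
y = lfilter(b, a, x)  # the 2.5 kHz component is strongly attenuated
```

A narrow notch removes the ringing frequency while leaving most of the program material audibly untouched, which is why it is preferred over simply lowering the master volume.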
Feedback prevention devices detect the start of unwanted feedback and use a precise notch filter to lower the gain of the frequencies that are feeding back. Some automated feedback detectors require the user to "set" the feedback-prone frequencies by purposely increasing gain (during a sound check) until some feedback starts to occur. This process is often referred to as "a ring out" or "an EQ" of a room/venue. The device then retains these frequencies in its memory and it stands by ready to cut them. Some automated feedback prevention devices can detect and reduce new frequencies other than those found in the sound check. | https://en.wikipedia.org/wiki/Public_address_system |
Public Analysts are scientists in the British Isles whose principal task is to ensure the safety and correct description of food by testing for compliance with legislation. [ 1 ] Most Public Analysts are also Agricultural Analysts who carry out similar work on animal feedingstuffs and fertilisers. [ 2 ] Nowadays this includes checking that the food labelling is accurate. They also test drinking water, and may carry out chemical and biological tests on other consumer products. [ 3 ] While much of the work is done by other scientists and technicians in the laboratory, the Public Analyst has legal responsibility for the accuracy of the work and the validity of any opinion expressed on the results reported. The UK-based Association of Public Analysts includes members with similar roles if different titles in other countries. [ 4 ]
The office of Public Analyst was established by the Adulteration of Food and Drink Act 1860 ( 23 & 24 Vict. c. 84), the first three appointments being in London , Birmingham and Dublin . [ 1 ] The first Scottish analyst was Henry Littlejohn in Edinburgh in 1862, who, drawing on a strong background in medicine, established many of the foundations of public analysis. The Sale of Food and Drugs Act 1875 ( 38 & 39 Vict. c. 63) made food analysis compulsory and the Sale of Food and Drugs Act 1899 ( 62 & 63 Vict. c. 51) extended its scope. Sampling officers generally operated through local public health or sanitary committees . By 1894 there were 99 public analysts overseeing 237 English and Welsh districts. The City of London Corporation had three food inspectors and a wharf and warehouse inspector in 1908. Bradford employed an inspector who made 756 visits to fish and chip shops in 1915. In the 1930s the staff in Birmingham comprised three qualified assistants, a clerk and a laboratory attendant.
The Nuisances Removal Act for England 1855 ( 18 & 19 Vict. c. 121) and the Public Health Act 1875 ( 38 & 39 Vict. c. 55) gave authority for taking food samples "at all reasonable times". Inspectors, police constables and samplers were responsible for taking food samples, which were divided into three parts, for the vendors, the inspectors and the analysts and sealed into bottles. Food systems were engineered to allow inspection through portals, manholes and windows. Prosecution was not common though fines and prison sentences were not unknown. Adulteration rates fell from 13.8% of samples in 1879 to 4.8% in 1930. Inspectors were empowered to follow milk to sources outside their formal jurisdiction in checking for infection with tuberculosis . Sanitary authorities were required to register all dairies and enforce cleanliness regulations. [ 5 ]
The Manchester Corporation (General Powers) Act 1899 ( 62 & 63 Vict. c. clxxxviii), as amended in 1904, contained what were known as 'milk clauses', which empowered officials to prosecute anyone who knowingly sold milk from cows with tuberculosis of the udder, to demand the isolation of infected cows and notification of any cow exhibiting signs of tuberculosis of the udder and to inspect the cows and take samples from herds which supplied milk to the city. By 1910 these provisions had been copied by 67 boroughs and 24 urban districts . [ 6 ]
The Society of Public Analysts was established in 1874, later becoming the Society for Analytical Chemistry and joining with other societies to form the Royal Society of Chemistry in 1980. [ 7 ]
Since the separation of the UK and Ireland, the function of the Public Analyst operates under different legislation, but the term and general duties are the same. The original work was chemical testing, and this is still a major part, but nowadays microbiological examination of food is an important activity, particularly in Scotland, where Public Analyst laboratories also carry out a statutory Food Examiner role.
The primary UK legislation is the Food Safety Act 1990 . All local authorities are required to appoint a Public Analyst, [ 8 ] although there have always been fewer Public Analysts and their laboratories than local authorities, most being shared by a number of local authorities. On the UK mainland there has always been a mixture of public sector and private sector laboratories. This remains the case today, but all provide an equivalent service, and the avoidance of conflicts of interest is ensured by the statutory terms of appointment. There is a statutory qualification requirement [ 9 ] for Public Analysts, known as the Mastership in Chemical Analysis (MChemA), awarded by the Royal Society of Chemistry . This is a specialist postgraduate qualification by examination that verifies knowledge and understanding of food and its potential defects, interpretation of food law, and the application and interpretation of chemical analysis for food law enforcement.
The Public Analysts’ laboratories must be third-party accredited to International Standard BS EN ISO/IEC 17025:2017. [ 10 ]
In the mid 1980s there were some 40 Public Analyst Laboratories in the UK with over 100 appointed Public Analysts. By 1993 that had reduced to 34 Laboratories and around 80 Public Analysts, and by 2010 the number of Public Analyst Laboratories had reduced to 22 [ 11 ] with only about 26 Public Analysts. As of 2022 there are 15 Public Analyst laboratories remaining in the UK. [ 12 ] In part, the reduction in the number of laboratories over the decades has been due to rationalisation and benefits from economies of scale; to a larger extent, however, it has arisen from a lack of adequate funding. Although some of the remaining laboratories are larger than many that no longer exist, the overall capacity of the system is now far less than it used to be.
Enforcement of food law in the UK is done by local authorities, principally their environmental health officers and trading standards officers . Whilst these officers are empowered to take samples of food, the chemical analysis or microbiological examination, and the subsequent interpretation necessary to determine whether a food complies with legislation, are carried out by Public Analysts and Food Examiners respectively, scientists whose qualifications and experience are specified by regulations.
Public Analyst Laboratories in Cork , Dublin and Galway provide an analytical service to the Food Safety Authority . [ 1 ]
There is one Public Analyst Laboratory in each of Guernsey , Isle of Man and Jersey serving the needs of these islands.
There is also one Public Analyst Laboratory in Australia.
The Public Analyst runs a laboratory which will:
In addition to their central rôle in relation to food law enforcement, Public Analysts provide expert scientific support to local authorities and the private sector in various other areas, for example they:
Sampling is largely outside the control of the Public Analyst.
Local authorities have a duty to check the safety of food and to provide adequate protection of the consumer. To achieve that, they devise sampling plans, seeking to balance their need to monitor food against limited resources and other demands on their budgets. A typical sampling plan for a local authority might include samples of the following: | https://en.wikipedia.org/wiki/Public_analyst |
Public health laboratories (PHLs) or National Public Health Laboratories (NPHL) are governmental reference laboratories that protect the public against diseases and other health hazards. The 2005 International Health Regulations came into force in June 2007, binding on 196 countries, which recognised that certain public health incidents, extending beyond disease, ought to be designated as a Public Health Emergency of International Concern (PHEIC), as they pose a significant global threat. The PHLs serve as national hazard detection centres, and forward these concerns to the World Health Organization .
In 2007, Haim Hacham et al. published a paper addressing the need for and the process of international standardised accreditation for laboratory proficiency in Israel. [ 1 ] In similar efforts, the Japan Accreditation Board for Conformity Assessment (JAB) and the European Communities Confederation of Clinical Chemistry and Laboratory Medicine (EC4) have each validated and adopted ISO 15189, Medical laboratories — Requirements for quality and competence . [ 2 ] [ 3 ]
In 2006, Spitzenberger and Edelhäuser expressed concerns that ISO accreditation may include obstacles arising from new emerging medical devices and the new approach of assessment; in so doing, they indicate the time dependence of standards. [ 4 ]
An Emergency Public Health Laboratory Service was established in 1940 in response to the threat of bacteriological warfare, and the Public Health Laboratory Service (PHLS) was established as part of the National Health Service in 1946. There was originally a central laboratory at Colindale and a network of regional and local laboratories. By 1955 there were about 1000 staff. These laboratories were primarily preventive, with an epidemiological focus. They were, however, in some places co-located with hospital laboratories, which had a diagnostic focus. [ 6 ]
The PHLS was replaced by the Health Protection Agency in 2003; [ citation needed ] the HPA was disbanded and in its stead was constituted Public Health England, which later became the UK Health Security Agency in 2021.
| https://en.wikipedia.org/wiki/Public_health_laboratory |
Public interest design is a human-centered [ 1 ] and participatory design practice [ 2 ] that places emphasis on the “ triple bottom line ” of sustainable design that includes ecological, economic, and social issues and on designing products, structures, and systems that address issues such as economic development and the preservation of the environment. Projects incorporating public interest design focus on the general good of the local citizens with a fundamentally collaborative perspective. [ 3 ]
Starting in the late 1990s, several books, convenings, and exhibitions have generated new momentum and investment in public interest design. Since then, public interest design—frequently described as a movement or field—has gained public recognition. [ 4 ]
Public interest design grew out of the community design movement, which got its start in 1968 after American civil rights leader Whitney Young issued a challenge to attendees of the American Institute of Architects (AIA) national convention: [ 5 ]
". . . you are not a profession that has distinguished itself by your social and civic contributions to the cause of civil rights, and I am sure this does not come to you as any shock. You are most distinguished by your thunderous silence and your complete irrelevance . [ 6 ] "
The response to Young’s challenge was the establishment of community design centers (CDCs) across the United States. [ 7 ] CDCs, which were often established with the support of area universities, [ 8 ] provided a variety of design services, such as affordable housing, within their own neighborhoods.
In architecture schools, “design/build programs” provided outreach to meet local design needs, particularly in low-income and underserved areas. [ 8 ] One of the earliest design/build programs was Yale University ’s Vlock Building Project. The project, which was initiated by students at Yale University School of Architecture in 1967, requires graduate students to design and build low-income housing. [ 9 ]
One of the most publicized programs is the Auburn University Rural Studio design/build program, which was founded in 1993. [ 2 ] [ 10 ] [ 11 ] Samuel Mockbee and D.K. Ruth created the program to give students hands-on, community-outreach and service-based architectural opportunities. [ 12 ] The program gained traction due to Mockbee’s attention to the aesthetics of low-income housing, an aspect previously downplayed in the architectural design of houses for the poor. [ 12 ] Mockbee and Ruth expressed their understanding of the communities through their architectural designs; the visuals and functionality address the needs of the citizens. [ 12 ] The Rural Studio’s first project, Bryant House, was completed in 1994 for $16,500. [ 13 ]
Interest in public interest design – particularly socially responsible architecture – began to grow during the 1990s and continued into the first decade of the new millennium in reaction to expanding globalization. [ 14 ] Conferences, books, and exhibitions began to showcase the design work being done beyond the community design centers, [ 2 ] which had greatly decreased in number since their peak in the seventies. [ 8 ]
Non-profit organizations – including Architecture for Humanity , BaSiC Initiative, Design Corps, Public Architecture , Project H, Project Locus, and MASS Design Group – began to provide design services that served a larger segment of the population than had been served by traditional design professions. [ 2 ] [ 15 ] [ 16 ]
Many public interest design organizations also provide training and service-learning programs for architecture students and graduates. In 1999, the Enterprise Rose Architectural Fellowship was established, [ 2 ] giving young architects the opportunity to work on three-year-long design and community development projects in low-income communities. [ 17 ]
Two of the earliest formal public interest design programs include the Gulf Coast Community Design Studio at Mississippi State University [ 2 ] and the Public Interest Design Summer Program at the University of Texas. [ 18 ] [ 19 ] In February 2015, Portland State University launched the first graduate certificate program in Public Interest Design in the United States. [ 20 ]
The first professional-level training was conducted in July 2011 by the Public Interest Design Institute (PIDI) and held at the Harvard Graduate School of Design . [ 21 ]
Also in 2011, in a survey of American Institute of Architects (AIA) members, 77% agreed that the mission of the professional practice of public interest design could be defined as the belief that every person should be able to live in a socially, economically, and environmentally healthy community. [ 22 ] [ 23 ]
The annual Structures for Inclusion conference showcases public interest design projects from around the world. The first conference, which was held in 2000, was called “Design for the 98% Without Architects." [ 2 ] Speaking at the conference, Rural Studio co-founder Samuel Mockbee challenged attendees to serve a greater segment of the population: “I believe most of us would agree that American architecture today exists primarily within a thin band of elite social and economic conditions [ 24 ] ...in creating architecture, and ultimately community, it should make no difference which economic or social type is served, as long as the status quo of the actual world is transformed by an imagination that creates a proper harmony for both the affluent and the disadvantaged. [ 24 ] "
In 2007, the Cooper Hewitt National Design Museum held an exhibition, titled “Design for the Other 90%,” curated by Cynthia Smith. Following the success of this exhibit, Smith developed the "Design Other 90" initiative into an ongoing series, the second of which was titled “Design for the Other 90%: CITIES” [ 25 ] (2011), held at the United Nations headquarters. In 2010, Andres Lepik of the Museum of Modern Art in New York curated an exhibit called “Small Scale, Big Change: New Architectures of Social Engagement”. [ 26 ] [ 2 ]
One of the oldest professional networks related to public interest design is the professional organization Association for Community Design (ACD), which was founded in 1977. [ 2 ] [ 27 ]
In 2005, adopting a term coined by architect Kimberly Dowdell, the Social Economic Environmental Design (SEED) Network was co-founded by a group of community design leaders, [ 2 ] during a meeting hosted by the Loeb Fellowship at the Harvard Graduate School of Design. The SEED Network established a common set of five principles and criteria for practitioners of public interest design. An evaluation tool called the SEED Evaluator is available to assist designers and practitioners in developing projects that align with SEED Network goals and criteria. [ citation needed ]
In 2006, the Open Architecture Network was launched by Architecture for Humanity in conjunction with co-founder Cameron Sinclair 's TED Wish. [ 28 ] [ non-primary source needed ] Taking on the name Worldchanging in 2011, the network is an open-source community dedicated to improving living conditions through innovative and sustainable design. Designers of all persuasions can share ideas, designs and plans as well as collaborate and manage projects, while protecting their intellectual property rights using the Creative Commons "some rights reserved" licensing system.
In 2007, DESIGN 21: Social Design Network, an online platform built in partnership with UNESCO, was launched.
In 2011, the Design Other 90 Network was launched by the Cooper-Hewitt, National Design Museum, in conjunction with its Design with the Other 90%: CITIES exhibition.
In 2012, IDEO.org, with the support of The Bill & Melinda Gates Foundation , launched HCD Connect, a network for social sector leaders committed to human-centered design. In this context, human-centered design begins with the end-user of a product, place, or system — taking into account their needs, behaviors and desires. The fast-growing professional network of 15,000 builds on "The Human-Centered Design Toolkit," [ 29 ] [ non-primary source needed ] which was designed specifically for people, nonprofits, and social enterprises that work with low-income communities throughout the world. People using the HCD Toolkit or human-centered design in the social sector now have a place to share their experiences, ask questions, and connect with others working in similar areas or on similar challenges.
Books advocating public interest design: | https://en.wikipedia.org/wiki/Public_interest_design |
In telecommunication, a public land mobile network ( PLMN ) is a combination of wireless communication services offered by a specific operator in a specific country. [ 1 ] [ 2 ] A PLMN typically consists of several cellular technologies like GSM / 2G , UMTS / 3G , LTE / 4G , NR/5G, offered by a single operator within a given country, often referred to as a cellular network .
A PLMN is identified by a globally unique PLMN code, which consists of a MCC (Mobile Country Code) and MNC (Mobile Network Code) . Hence, it is a five- to six-digit number identifying a country, and a mobile network operator in that country, usually represented in the form 001-01 or 001-001.
A PLMN is part of a:
Note that an MNC can take a two-digit form or a three-digit form with leading zeros. It is administered by the respective national numbering plan administrator. [ 3 ] From PLMN assignments, it is apparent that dualities of two-digit and three-digit MNCs with the same number value are avoided (see the list of mobile country codes and mobile network codes ). An example of actual three-digit and two-digit MNCs with leading zeros is Bermuda (MCC 350), which has 350-007 alongside 350-00 and 350-01.
The IMSI , which identifies a SIM or USIM for one subscriber, typically starts with the PLMN code. For example, an IMSI belonging to the PLMN 262-33 would look like 262330000000001. Mobile phones use this to detect roaming , so that a mobile phone subscribed on a network with a PLMN code that mismatches the start of the USIM's IMSI will typically display an "R" on the icon that indicates connection strength.
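As a rough illustration of the structure just described, the sketch below splits an IMSI into its MCC, MNC, and subscriber (MSIN) parts. The MNC length (two or three digits) cannot be inferred from the IMSI alone, so real implementations consult a table of PLMN assignments; the single table entry here reuses the 262-33 example above and is otherwise hypothetical. Keying the table by MCC alone is itself a simplification, since, as noted above, some countries mix two- and three-digit MNCs under one MCC.

```python
# Split an IMSI into PLMN code (MCC + MNC) and subscriber number (MSIN).
MNC_DIGITS = {"262": 2}  # normally built from published PLMN assignment lists

def split_imsi(imsi: str):
    mcc = imsi[:3]
    n = MNC_DIGITS[mcc]          # look up the MNC length for this country code
    return mcc, imsi[3:3 + n], imsi[3 + n:]

print(split_imsi("262330000000001"))  # ('262', '33', '0000000001')
```

A phone performs essentially this comparison when deciding whether the serving network's PLMN code matches the start of its own IMSI, and indicates roaming when it does not.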
A PLMN typically offers the following services to a mobile subscriber:
The availability, quality and bandwidth of these services strongly depends on the particular technology used to implement a PLMN. | https://en.wikipedia.org/wiki/Public_land_mobile_network |
The public switched telephone network ( PSTN ) is the aggregate of the world's telephone networks that are operated by national, regional, or local telephony operators. It provides infrastructure and services for public telephony . The PSTN consists of telephone lines , fiber-optic cables , microwave transmission links, cellular networks , communications satellites , and undersea telephone cables interconnected by switching centers , such as central offices , network tandems , and international gateways, which allow telephone users to communicate with each other.
Originally a network of fixed-line analog telephone systems, the PSTN is now predominantly digital in its core network and includes terrestrial cellular , satellite , and landline systems. These interconnected networks enable global communication, allowing calls to be made to and from nearly any telephone worldwide. [ 1 ] Many of these networks are progressively transitioning to Internet Protocol to carry their telephony traffic.
The technical operation of the PSTN adheres to the standards internationally promulgated by the ITU-T . These standards have their origins in the development of local telephone networks, primarily in the Bell System in the United States and in the networks of European ITU members. The E.164 standard provides a single global address space in the form of telephone numbers . The combination of the interconnected networks and a global telephone numbering plan allows telephones around the world to connect with each other. [ 2 ]
Commercialization of the telephone began shortly after its invention, with instruments operated in pairs for private use between two locations. Users who wanted to communicate with persons at multiple locations had as many telephones as necessary for the purpose. Alerting another user of the desire to establish a telephone call was accomplished by whistling loudly into the transmitter until the other party heard the alert. Bells were soon added to stations for signaling .
Later telephone systems took advantage of the exchange principle already employed in telegraph networks. Each telephone was wired to a telephone exchange established for a town or area. For communication outside this exchange area, trunks were installed between exchanges. Networks were designed in a hierarchical manner until they spanned cities, states, and international distances.
Automation introduced pulse dialing between the telephone and the exchange so that each subscriber could directly dial another subscriber connected to the same exchange, but long-distance calling across multiple exchanges required manual switching by operators. Later, more sophisticated address signaling, including multi-frequency signaling methods, enabled direct-dialed long-distance calls by subscribers, culminating in the Signalling System 7 (SS7) network that controlled calls between most exchanges by the end of the 20th century.
The growth of the PSTN was enabled by teletraffic engineering techniques to deliver quality of service (QoS) in the network. The work of A. K. Erlang established the mathematical foundations of methods required to determine the capacity requirements and configuration of equipment and the number of personnel required to deliver a specific level of service.
In the 1970s, the telecommunications industry began implementing packet-switched network data services using the X.25 protocol, transported over much of the same end-to-end equipment as was already in use in the PSTN. These became known as public data networks , or public switched data networks.
In the 1980s, the industry began planning for digital services assuming they would follow much the same pattern as voice services and conceived end-to-end circuit-switched services, known as the Broadband Integrated Services Digital Network (B-ISDN). The B-ISDN vision was overtaken by the disruptive technology of the Internet .
At the turn of the 21st century, the oldest parts of the telephone network still used analog baseband technology to deliver audio-frequency connectivity over the last mile to the end-user. However, digital technologies such as DSL , ISDN , FTTx , and cable modems were progressively deployed in this portion of the network, primarily to provide high-speed Internet access.
As of 2023 [update] , operators worldwide are in the process of retiring support for both last-mile analog telephony and ISDN, and transitioning voice service to Voice over IP via Internet access delivered either via DSL , cable modems or fiber-to-the-premises , eliminating the expense and complexity of running two separate technology infrastructures for PSTN and Internet access.
Several large private telephone networks are not linked to the PSTN, usually for military purposes. There are also private networks run by large companies that are linked to the PSTN only through limited gateways , such as a large private branch exchange (PBX).
The task of building the networks and selling services to customers fell to the network operators . The first company to be incorporated to provide PSTN services was the Bell Telephone Company in the United States.
In some countries, however, the job of providing telephone networks fell to government as the investment required was very large and the provision of telephone service was increasingly becoming an essential public utility . For example, the General Post Office in the United Kingdom brought together a number of private companies to form a single nationalized company . In more recent decades, these state monopolies were broken up or sold off through privatization . [ 3 ] [ 4 ] [ 5 ]
The architecture of the PSTN evolved over time to support an increasing number of subscribers, call volume, destinations, features, and technologies. The principles developed in North America and in Europe were adopted by other nations, with adaptations for local markets.
A key concept was that the telephone exchanges are arranged into hierarchies, so that if a call cannot be handled in a local cluster, it is passed to one higher up for onward routing. This reduced the number of connecting trunks required between operators over long distances, and also kept local traffic separate. Modern technologies have simplified this hierarchy.
Most automated telephone exchanges use digital switching rather than mechanical or analog switching. The trunks connecting the exchanges are also digital, called circuits or channels. However analog two-wire circuits are still used to connect the last mile from the exchange to the telephone in the home (also called the local loop ). To carry a typical phone call from a calling party to a called party , the analog audio signal is digitized at an 8 kHz sample rate with 8-bit resolution using a special type of nonlinear pulse-code modulation known as G.711 . The call is then transmitted from one end to another via telephone exchanges. The call is switched using a call set up protocol (usually ISUP ) between the telephone exchanges under an overall routing strategy .
The call is carried over the PSTN using a 64 kbit/s channel, originally designed by Bell Labs . The name given to this channel is Digital Signal 0 (DS0). The DS0 circuit is the basic granularity of circuit switching in a telephone exchange. A DS0 is also known as a timeslot because DS0s are aggregated in time-division multiplexing (TDM) equipment to form higher capacity communication links.
A Digital Signal 1 (DS1) circuit carries 24 DS0s on a North American or Japanese T-carrier (T1) line, or 32 DS0s (30 for calls plus two for framing and signaling) on an E-carrier (E1) line used in most other countries. In modern networks, the multiplexing function is moved as close to the end user as possible, usually into cabinets at the roadside in residential areas, or into large business premises.
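The rates above follow directly from the sampling arithmetic. A short worked example, using only the figures given in the text plus the standard 8 kbit/s T1 framing overhead (one framing bit per 193-bit frame, 8,000 frames per second):

```python
# Channel rates built up from the DS0.
DS0 = 8000 * 8          # 8 kHz sampling x 8 bits (G.711) = 64,000 bit/s

T1 = 24 * DS0 + 8000    # 24 DS0s + framing = 1,544,000 bit/s
E1 = 32 * DS0           # 32 timeslots (30 voice + 2 overhead) = 2,048,000 bit/s

print(f"DS0:    {DS0:>9,} bit/s")
print(f"DS1/T1: {T1:>9,} bit/s")
print(f"E1:     {E1:>9,} bit/s")
```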
These aggregated circuits are conveyed from the initial multiplexer to the exchange over a set of equipment collectively known as the access network . The access network and inter-exchange transport use synchronous optical transmission, for example, SONET and Synchronous Digital Hierarchy (SDH) technologies, although some parts still use the older PDH technology.
The access network defines a number of reference points. Most of these are of interest mainly to ISDN but one, the V reference point , is of more general interest. This is the reference point between a primary multiplexer and an exchange. The protocols at this reference point were standardized in ETSI areas as the V5 interface .
Voice quality in PSTN networks was used as a benchmark for the development of the Telecommunications Industry Association 's TIA-TSB-116 standard on voice-quality recommendations for IP telephony, to determine acceptable levels of audio latency and echo. [ 6 ]
In most countries, the government has a regulatory agency dedicated to the provisioning of PSTN services. The agency regulates technical standards and legal requirements, and sets service obligations, for example to ensure that end customers are not over-charged for services where monopolies may exist. These regulatory agencies may also regulate the prices charged between the operators to carry each other's traffic .
In the United Kingdom, the copper POTS and ISDN-based PSTN is being retired in favour of SIP telephony , with an original completion date of December 2025, although this has now been put back to January 2027. See United Kingdom PSTN switch-off . Voice telephony will continue to follow the E.163 and E.164 standards, as with current mobile telephony, with the interface to end-users remaining the same.
Several other European countries, including Estonia, Germany, Iceland, the Netherlands, Spain and Portugal, have also retired, or are planning to retire, their PSTN networks. [ 7 ] [ 8 ] [ 9 ]
Countries in other continents are also performing similar transitions. [ 8 ] | https://en.wikipedia.org/wiki/Public_switched_telephone_network |
Public works are a broad category of infrastructure projects, financed and procured by a government body for recreational, employment, and health and safety uses in the greater community . They include public buildings ( municipal buildings , schools , and hospitals ), transport infrastructure ( roads , railroads , bridges , pipelines , canals , ports , and airports ), public spaces ( public squares , parks , and beaches ), public services ( water supply and treatment , sewage treatment , electrical grid , and dams ), environmental protection ( drinking water protection , soil erosion reduction, wildlife habitat preservation ,
preservation and restoration of forests and wetlands) and other, usually long-term, physical assets and facilities . Though often interchangeable with public infrastructure and public capital , public works does not necessarily carry an economic component, thereby being a broader term. Construction may be undertaken either by directly employed labour or by a private operator.
Public works has been encouraged since antiquity. The Roman emperor Nero encouraged the construction of various infrastructure projects during widespread deflation . [ 1 ]
Public works is a multi-dimensional concept in economics and politics , touching on multiple arenas including: recreation (parks, beaches, trails), aesthetics (trees, green space), economy (goods and people movement, energy), law (police and courts), and neighborhood (community centers, social services buildings). It represents any constructed object that augments a nation's physical infrastructure.
Municipal infrastructure, urban infrastructure , and rural development usually represent the same concept but imply either large cities or developing nations ' concerns respectively. The terms public infrastructure or critical infrastructure are at times used interchangeably. However, critical infrastructure includes public works (dams, waste water systems, bridges, etc.) as well as facilities like hospitals, banks, and telecommunications systems and views them from a national security viewpoint and the impact on the community that the loss of such facilities would entail.
Furthermore, the term public works has recently been expanded to include digital public infrastructure projects. For example, in the United States , the first nationwide digital public works project is an effort to create an open source software platform for e-voting (created and managed by the Open Source Digital Voting Foundation). [ 2 ]
Reflecting increased concern with sustainability , urban ecology and quality of life , efforts to move towards sustainable municipal infrastructure are common in developed nations , especially in the European Union and Canada (where the FCM InfraGuide provides an officially mandated best practice exchange to move municipalities in that direction).
A public employment programme or public works programme is the provision of employment by the creation of predominantly public goods at a prescribed wage for those unable to find alternative employment. This functions as a form of social safety net . Public works programmes are activities which entail the payment of a wage (in cash or in kind) by the state, or by an Agent (or cash-for work/CFW). One particular form of public works, that of offering a short-term period of employment, has come to dominate practice, particularly in regions such as Sub-Saharan Africa . Applied in the short term, this is appropriate as a response to transient shocks and acute labour market crises. [ 3 ]
Investing in public works projects in order to stimulate the general economy has been a popular policy measure since the economic crisis of the 1930s. Spearheaded by U.S. Secretary of Labor Frances Perkins , the first female Cabinet member in the United States, the New Deal resulted in the creation of programs such as the Civilian Conservation Corps , Public Works Administration , and the Works Progress Administration , among others, all of which created public goods through labor and infrastructure investments. [ 4 ]
More recent examples are the 2008–2009 Chinese economic stimulus program , India's National Infrastructure Pipeline of 2020, the 2008 European Union stimulus plan , and the American Recovery and Reinvestment Act of 2009 .
While it is argued that capital investment in public works can be used to reduce unemployment, opponents of internal improvement programs argue that such projects should be undertaken by the private sector , not the public sector , because public works projects are often inefficient and costly to taxpayers. Further, some argue that public works, when used excessively by a government, are characteristic of socialism and other public or collectivist forms of government because of their 'tax and spend' policies to achieve long-term economic improvement. However, in the private sector, entrepreneurs bear their own losses [ citation needed ] and so private-sector firms are generally unwilling to undertake projects that could result in losses or would not develop a revenue stream. Governments will invest in public works because of the overall benefit to society when there is a lack of private-sector benefit (that is, when the project would not generate revenue) or when the risk is too great for a private company to accept on its own.
According to research conducted at Aalborg University , 86% of public works projects end up with cost overruns. Some findings of the research were the following:
Generally, contracts awarded by public tenders include a provision for unexpected expenses (cost overruns), which typically amount to 10% of the value of the contract. This money is spent during the course of the project only if the construction managers judge that it is necessary, and the expenditure must typically be justified in writing.
| https://en.wikipedia.org/wiki/Public_works |
PubMeth is a database that contains information about DNA hypermethylation in cancer. It can be queried either by searching a list of genes, or cancer (sub)types.
It was created at the lab for bioinformatics and computational genomics in the Department of Molecular Biotechnology, Faculty of Bioscience Engineering at Ghent University , Belgium .
It was published in Nucleic Acids Research . [ 1 ]
This article related to health informatics is a stub . You can help Wikipedia by expanding it .
This database -related article is a stub . You can help Wikipedia by expanding it .
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pubmeth |
Puddling is the process of converting pig iron to bar (wrought) iron in a coal fired reverberatory furnace . It was developed in England during the 1780s. The molten pig iron was stirred in a reverberatory furnace, in an oxidizing environment to burn the carbon, resulting in wrought iron . [ 1 ] It was one of the most important processes for making the first appreciable volumes of valuable and useful bar iron (malleable wrought iron) without the use of charcoal . Eventually, the furnace would be used to make small quantities of specialty steels .
Though it was not the first process to produce bar iron without charcoal, puddling was by far the most successful, and replaced the earlier potting and stamping processes, as well as the much older charcoal finery and bloomery processes. This enabled a great expansion of iron production to take place in Great Britain, and shortly afterwards, in North America. That expansion constitutes the beginnings of the Industrial Revolution so far as the iron industry is concerned. Most 19th century applications of wrought iron , including the Eiffel Tower , bridges, and the original framework of the Statue of Liberty , used puddled iron.
Modern puddling was one of several processes developed in the second half of the 18th century in Great Britain for producing bar iron from pig iron without the use of charcoal. It gradually replaced the earlier charcoal-fueled process, conducted in a finery forge .
Pig iron contains much free carbon and is brittle . Before it can be used, and before it can be worked by a blacksmith , it must be converted to a more malleable form as bar iron, the early stage of wrought iron .
Abraham Darby 's successful use of coke for his blast furnace at Coalbrookdale in 1709 [ 2 ] reduced the price of iron, but this coke-fuelled pig iron was not initially accepted as it could not be converted to bar iron by the existing methods. [ 3 ] Sulfur impurities from the coke made it ' red short ', or brittle when heated, and so the finery process was unworkable for it. It was not until around 1750, when steam powered blowing increased furnace temperatures enough to allow sufficient lime to be added to remove the sulfur , that coke pig iron began to be adopted. [ 4 ] Also, better processes were developed to refine it. [ 3 ]
Abraham Darby II , son of the blast furnace innovator, managed to convert pig iron to bar iron in 1749, but no details are known of his process. [ 5 ] The Cranage brothers , also working alongside the River Severn , allegedly achieved this experimentally by using a coal-fired reverberatory furnace , in which the iron and the sulphurous coal could be kept separate, but it was never used commercially. [ 5 ] They were the first to hypothesise that iron could be converted from pig iron to bar iron by the action of heat alone. Although they were unaware of the necessary effects of the oxygen supplied by the air , they had at least abandoned the previous misapprehension that mixture with materials from the fuel was needed. Their experiments were successful and they were granted patent Nº851 in 1766, but no commercial adoption seems to have been made of their process.
In 1783, Peter Onions at Dowlais constructed a larger reverberatory furnace. [ 5 ] He began successful commercial puddling with this and was granted patent Nº1370. The furnace was improved by Henry Cort at Fontley in Hampshire in 1783–84 and patented in 1784. Cort added dampers to the chimney, avoiding some of the risk of overheating and 'burning' the iron. [ 5 ] Cort's process consisted of stirring molten pig iron in a reverberatory furnace in an oxidising atmosphere, thus decarburising it. When the iron "came to nature", that is, to a pasty consistency, it was gathered into a puddled ball, shingled , and rolled (as described below). His application of grooved rollers to the rolling mill , to roll narrow bars, adapted rolling mills already in use on the Continent. [ 6 ] Cort's efforts to license this process were unsuccessful as it only worked with charcoal-smelted pig iron. Modifications were made by Richard Crawshay at his ironworks at Cyfarthfa in Merthyr Tydfil, incorporating an initial refining process developed by his neighbours at Dowlais.
Ninety years after Cort's invention, an American labor newspaper recalled the advantages of his system:
"When iron is simply melted and run into any mold, its texture is granular, and it is so brittle as to be quite unreliable for any use requiring much tensile strength . The process of puddling consisted in stirring the molten iron run out in a puddle, and had the effect of so changing its anotomic arrangement as to render the process of rolling more efficacious." [ 7 ]
Cort's process (as patented) only worked for white cast iron , not grey cast iron , which was the usual feedstock for forges of the period. This problem was resolved, probably at Merthyr Tydfil, by combining puddling with one element of a slightly earlier process. This involved another kind of hearth known as a 'refinery' or 'running out fire'. [ 8 ] The pig iron was melted in this and run out into a trough. The slag separated, floated on the molten iron, and was removed by lowering a dam at the end of the trough. The effect of this process was to desiliconise the metal, leaving a white brittle metal, known as 'finers metal'. This was the ideal material to charge to the puddling furnace. This version of the process was known as 'dry puddling' and continued in use in some places as late as 1890.
An additional development in refining gray iron was known as 'wet puddling', also known as 'boiling' or 'pig boiling'. This was invented by a puddler named Joseph Hall at Tipton . He began adding scrap iron to the charge. Later, he tried adding iron scale (in effect, iron oxides such as FeO , Fe 2 O 3 , or Fe 3 O 4 ). The result was spectacular in that the furnace boiled violently, producing carbon monoxide bubbles. This was due to a chemical reaction between the iron oxides in the scale and the carbon dissolved in the pig iron: C + Fe 2 O 3 → CO + 2 FeO . To his surprise, the resultant puddle ball produced good iron.
One big problem with puddling was that up to 15% of the iron was drawn off with the slag because sand was used for the bed. Hall substituted roasted tap cinder for the bed, which cut this waste to 8%, declining to 5% by the end of the century. [ 9 ]
Hall subsequently became a partner in establishing the Bloomfield Iron Works at Tipton in 1830, the firm becoming Bradley, Barrows and Hall from 1834. Wet puddling was the version of the process most commonly used in the mid-to-late 19th century, and it was much more efficient than dry puddling (or any earlier process). The best yield achievable from dry puddling was one ton of iron from 1.3 tons of pig iron (a yield of 77%), but the yield from wet puddling was nearly 100%.
The production of mild steel in the puddling furnace was achieved circa 1850 [ citation needed ] in Westphalia , Germany and was patented in Great Britain on behalf of Lohage, Bremme and Lehrkind. It worked only with pig iron made from certain kinds of ore. The cast iron had to be melted quickly and the slag to be rich in manganese . When the metal came to nature, it had to be removed quickly and shingled before further decarburization occurred. The process was taken up at the Low Moor Ironworks at Bradford in Yorkshire ( England ) in 1851 and in the Loire valley in France in 1855. It was widely used.
The puddling process began to be displaced with the introduction of the Bessemer process , which produced steel. This could be converted into wrought iron using the Aston process for a fraction of the cost and time. For comparison, an average size charge for a puddling furnace was 800–900 lb (360–410 kg) [ 10 ] while a Bessemer converter charge was 15 short tons (13,600 kg). The puddling process could not be scaled up, being limited by the amount that the puddler could handle. It could only be expanded by building more furnaces.
The process begins by preparing the puddling furnace. This involves bringing the furnace to a low temperature and then fettling it. Fettling is the process of painting the grate and walls around it with iron oxides, typically hematite ; [ 11 ] this acts as a protective coating keeping the melted metal from burning through the furnace. Sometimes finely pounded cinder was used instead of hematite. In this case the furnace must be heated for 4–5 hours to melt the cinder and then cooled before charging.
Either white cast iron or refined iron is then placed in the hearth of the furnace, a process known as charging . For wet puddling, scrap iron and/or iron oxide is also charged. The mixture is then heated until the top melts, allowing the oxides to begin mixing; this usually takes 30 minutes. The mixture is subjected to a strong current of air and stirred with long bars with hooks on one end, called puddling bars or rabbles , [ 10 ] [ 12 ] through doors in the furnace. [ 13 ] This helps the iron(III) in the oxides (the Fe 3+ species, acting as an oxidiser ) react with impurities in the pig iron: notably silicon and manganese (which form slag) and, to some degree, sulfur and phosphorus , which form gases that escape with the exhaust of the furnace.
More fuel is then added and the temperature is raised. The iron melts completely and the carbon starts to burn off. When wet puddling, the formation of carbon monoxide (CO) and carbon dioxide (CO 2 ) from reactions with the added iron oxide causes bubbles to form that make the mass appear to boil. This process causes the slag to puff up on top, giving the rabbler a visual indication of the progress of the combustion. As the carbon burns off, the melting temperature of the mixture rises from 1,150 to 1,540 °C (2,100 to 2,800 °F), [ 14 ] [ 15 ] so the furnace has to be continually fed during this process. The melting point rises because dissolved carbon acts as a solute that depresses the melting point of iron (much as road salt does to ice); as the carbon is removed, that depression disappears.
Working as a two-man crew, a puddler and helper could produce about 1,500 kg of iron in a 12-hour shift. [ 16 ] The strenuous labour, heat and fumes gave puddlers a very short life expectancy, with most dying in their 30s. [ 17 ] Puddling could never be automated because the puddler had to sense when the balls had "come to nature".
In the late 1840s, the German chemist Franz Anton Lohage [ de ] developed a modification of the puddling process to produce not iron but steel at the Haspe Iron Works in Hagen ; it was subsequently commercialized in Germany, France and the UK in the 1850s, and puddled steel remained the main raw material for Krupp cast steel even in the 1870s. [ 18 ] Before the development of the basic refractory lining (with magnesium oxide , MgO) and the wide-scale adoption of the Gilchrist–Thomas process ca. 1880, it complemented acidic Bessemer converters (with a refractory material made of SiO 2 ) and open hearths because, unlike them, the puddling furnace could utilize the phosphorus-rich ores abundant in Continental Europe. [ 19 ]
The puddling furnace is a metalmaking technology used to create wrought iron or steel from the pig iron produced in a blast furnace . The furnace is constructed to pull the hot air over the iron without the fuel coming into direct contact with the iron, a system generally known as a reverberatory furnace or open hearth furnace . The major advantage of this system is keeping the impurities of the fuel separated from the charge.
The hearth is where the iron is charged, melted and puddled. The hearth's shape is usually elliptical; 1.5–1.8 m (4.9–5.9 ft) in length and 1–1.2 m (3.3–3.9 ft) wide. If the furnace is designed to puddle white iron then the hearth depth is never more than 50 cm (20 in). If the furnace is designed to boil gray iron then the average hearth depth is 50–75 cm (20–30 in).
The fireplace, where the fuel is burned, uses a cast iron grate which varies in size depending on the fuel used. If bituminous coal is used then an average grate size is 60 cm × 90 cm (2.0 ft × 3.0 ft) and it is loaded with 25–30 cm (9.8–11.8 in) of coal. If anthracite coal is used then the grate is 1.5 m × 1.2 m (4.9 ft × 3.9 ft) and is loaded with 50–75 cm (20–30 in) of coal. Due to the great heat required to melt the charge, the grate had to be cooled, lest it melt along with the charge. This was done by running a constant flow of cool air over it, or by throwing water on the bottom of the grate.
A double puddling furnace is similar to a single puddling furnace, the major difference being that there are two work doors, allowing two puddlers to work the furnace at the same time. The biggest advantage of this setup is that it produces twice as much wrought iron; it is also more economical and fuel-efficient than a single furnace. | https://en.wikipedia.org/wiki/Puddling_(metallurgy) |
In organophosphorus chemistry , the Pudovik reaction is a method for preparing α-aminophosphonates . Under basic conditions, the phosphorus–hydrogen bond of a dialkyl phosphite , (RO) 2 P(O)H, adds across the carbon–nitrogen double bond of an imine (a hydrophosphonylation reaction). [ 1 ] The reaction is closely related to the three-component Kabachnik–Fields reaction , in which an amine , a phosphite, and an organic carbonyl compound are condensed, [ 2 ] reported independently by Martin Kabachnik [ 3 ] and Ellis Fields [ 4 ] in 1952. In the Pudovik reaction, a generic imine, RCH=NR', reacts with a phosphorus reagent such as diethyl phosphite as follows: [ 5 ]
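The connectivity change can be sketched with RDKit; the particular imine (N-phenyl benzaldimine) and the SMILES strings below are illustrative assumptions, not structures taken from the cited references:

```python
from rdkit import Chem

# Pudovik addition: the P-H of diethyl phosphite adds across the C=N bond,
# giving an alpha-aminophosphonate (H goes to N, P to the former imine carbon).
imine     = Chem.MolFromSmiles("C(=Nc1ccccc1)c1ccccc1")             # PhCH=NPh
phosphite = Chem.MolFromSmiles("CCOP(=O)OCC")                        # (EtO)2P(O)H
product   = Chem.MolFromSmiles("CCOP(=O)(OCC)C(Nc1ccccc1)c1ccccc1")  # PhCH(NHPh)P(O)(OEt)2

for name, mol in [("imine", imine), ("phosphite", phosphite), ("product", product)]:
    print(name, Chem.MolToSmiles(mol))
```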
In addition to the Lewis-acid-catalyzed Pudovik reaction, the reaction may be carried out in the presence of chiral amine bases; catalytic amounts of quinine , for instance, promote the enantioselective Pudovik reaction of aryl aldehydes. [ 6 ] Other catalytic, enantioselective variants of the Pudovik reaction have also been developed. [ 7 ] | https://en.wikipedia.org/wiki/Pudovik_reaction |
In mathematics , Pugh's closing lemma is a result that links periodic orbit solutions of differential equations to chaotic behaviour . It can be formally stated as follows:
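Let f : M → M be a C 1 diffeomorphism of a compact smooth manifold M . Given a non-wandering point x of f , there exists a diffeomorphism g arbitrarily close to f in the C 1 topology of Diff 1 ( M ) such that x is a periodic point of g .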
Pugh's closing lemma means, for example, that any chaotic set in a bounded continuous dynamical system corresponds to a periodic orbit in a different but closely related dynamical system. As such, an open set of conditions on a bounded continuous dynamical system that rules out periodic behaviour also implies that the system cannot behave chaotically; this is the basis of some autonomous convergence theorems .
This article incorporates material from Pugh's closing lemma on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Pugh's_closing_lemma |
A pugmill , pug mill , or commonly just pug , is a machine in which clay or other materials are extruded in a plastic state or a similar machine for the trituration of ore. [ 1 ] Industrial applications are found in pottery , bricks , cement and some parts of the concrete and asphalt mixing processes. A pugmill may be a fast continuous mixer. A continuous pugmill can achieve a thoroughly mixed, homogeneous mixture in a few seconds, and the right machines can be matched to the right application by taking into account the factors of agitation, drive assembly, inlet, discharge, cost and maintenance. [ 2 ] Mixing materials at optimum moisture content requires the forced mixing action of the pugmill paddles, while soupy materials might be mixed in a drum mixer. A typical pugmill consists of a horizontal boxlike chamber with a top inlet and a bottom discharge at the other end, 2 shafts with opposing paddles, and a drive assembly. Some of the factors affecting mixing and residence time are the number and the size of the paddles, paddle swing arc, overlap of left and right swing arc, size of mixing chamber, length of pugmill floor, and material being mixed.
Road Base – Dense, well-graded aggregate, uniformly mixed, wetted, and densely compacted for building the foundation under a pavement.
Lime Addition to asphalt – Lime may be added to the cold feed of an asphalt plant to strengthen the binding properties of the asphalt.
Flyash Conditioning – Wetting fly ash in a pugmill to stabilize the ash so that it won’t create dust. Some flyashes have cementitious properties when wetted and can be used to stabilize other materials.
Waste stabilization – Various waste streams are remediated with pugmills, which force the mixing of the wastes with remediation agents.
Roller-compacted concrete – (RCC) or rolled concrete is a special blend of concrete that has the same ingredients as conventional concrete but in different ratios. It has cement, water, and aggregates, but RCC is much drier and essentially has no slump. RCC is placed in a manner similar to paving, often by dump trucks or conveyors, spread by bulldozers or special modified asphalt pavers. After placement it is compacted by vibratory rollers.
The “stiff” nature of RCC may require a paddle type pugmill to force the materials to mix completely and discharge easily.
Ceramics pug mills, or commonly just "pugs", are not used to grind or mix; rather, they extrude clay bodies prior to shaping processes. Some can be fitted with a vacuum system that ensures the extruded clay bodies have no entrapped air. According to the 1913 edition of Webster's Dictionary , a clay pug mill typically consists of an upright shaft armed with projecting knives, which is caused to revolve in a hollow cylinder, tub, or vat, in which the clay body is placed.
Pugmills that run intermittently are used in the kaolin mining industry to mix certain grades of kaolin clay with water. | https://en.wikipedia.org/wiki/Pugmill |
The Pugwash Conferences on Science and World Affairs is an international organization that brings together scholars and public figures to work toward reducing the danger of armed conflict and to seek solutions to global security threats. It was founded in 1957 by Joseph Rotblat and Bertrand Russell in Pugwash, Nova Scotia , Canada , following the release of the Russell–Einstein Manifesto in 1955.
Rotblat and the Pugwash Conference jointly won the Nobel Peace Prize in 1995 for their efforts on nuclear disarmament . [ 1 ] [ note 1 ] International Student/Young Pugwash groups have existed since founder Cyrus Eaton 's death in 1979.
The Russell–Einstein Manifesto , released July 9, 1955, [ 2 ] [ 3 ] [ 4 ] called for a conference for scientists to assess the dangers of weapons of mass destruction (then only considered to be nuclear weapons ). Cyrus Eaton , an industrialist and philanthropist, offered on July 13 to finance and host the conference in the town of his birth, Pugwash, Nova Scotia . This was not taken up at the time because a meeting was planned for India, at the invitation of Prime Minister Jawaharlal Nehru . With the outbreak of the Suez Crisis the Indian conference was postponed. Aristotle Onassis offered to finance a meeting in Monaco instead, but this was rejected. Eaton's former invitation was taken up.
The first conference was held at what became known as Thinkers' Lodge in July 1957 in Pugwash, Nova Scotia . [ 5 ] Twenty-two scientists attended the first conference: [ citation needed ]
Cyrus Eaton, Eric Burhop , Ruth Adams, Anne Kinder Jones, and Vladimir Pavlichenko also were present. Many others were unable to attend, including co-founder Bertrand Russell , for health reasons. [ citation needed ] From the Soviet Union, Mikhail Ilyich Bruk ( Russian : Михаил Ильич Брук ; 1923 Moscow – 2009 Jurmala ) attended as an English-Russian technical translator. Later, Armand Hammer stated, "Mike's KGB." [ 6 ]
Pugwash's "main objective is the elimination of all weapons of mass destruction (nuclear, chemical and biological) and of war as a social institution to settle international disputes. To that extent, peaceful resolution of conflicts through dialogue and mutual understanding is an essential part of Pugwash activities, that is particularly relevant when and where nuclear weapons and other weapons of mass destruction are deployed or could be used." [ 7 ]
"The various Pugwash activities (general conferences, workshops, study groups, consultations and special projects) provide a channel of communication between scientists, scholars, and individuals experienced in government, diplomacy, and the military for in-depth discussion and analysis of the problems and opportunities at the intersection of science and world affairs. To ensure a free and frank exchange of views, conducive to the emergence of original ideas and an effective communication between different or antagonistic governments, countries and groups, Pugwash meetings as a rule are held in private. This is the main modus operandi of Pugwash. In addition to influencing governments by the transmission of the results of these discussions and meetings, Pugwash also may seek to make an impact on the scientific community and on public opinion through the holding of special types of meetings and through its publications." [ 7 ]
Officers include the president and secretary-general. Formal governance is provided by the Pugwash Council, which serves for five years. There is also an executive committee that assists the secretary-general. Jayantha Dhanapala is the current president. Paolo Cotta-Ramusino is the current Secretary General.
The four Pugwash offices, in Rome (international secretariat), London , Geneva , and Washington D.C. , provide support for Pugwash activities and serve as liaisons to the United Nations and other international organizations.
There are approximately fifty national Pugwash groups, organized as independent entities and often supported or administered by national academies of science.
The International Student/Young Pugwash groups works with, but are independent from, the international Pugwash group.
Pugwash's first fifteen years coincided with the Berlin Crisis , the Cuban Missile Crisis , the Warsaw Pact invasion of Czechoslovakia , and the Vietnam War . Pugwash played a useful role in opening communication channels during a time of otherwise-strained official and unofficial relations. In 1965, addressing a meeting at UNESCO House in Paris, Robert Oppenheimer gave his tribute to Albert Einstein for the Pugwash movement. Oppenheimer confidently asserted: "I know it to be true that it [averting the disaster of the arms race] had an essential part to play in the Treaty of Moscow, the limited test-ban treaty, which is a tentative, but to me very precious, declaration that reason might still prevail." [ 8 ]
It provided background work to the Partial Test Ban Treaty (1963), the Non-Proliferation Treaty (1968), the Anti-Ballistic Missile Treaty (1972), the Biological Weapons Convention (1972), and the Chemical Weapons Convention (1993). Former US Secretary of Defense Robert McNamara has credited a backchannel Pugwash initiative (code named PENNSYLVANIA) with laying the groundwork for the negotiations that ended the Vietnam War. [ 9 ] Mikhail Gorbachev admitted the influence of the organisation on him when he was leader of the Soviet Union . [ 10 ] In addition, Pugwash has been credited with being a groundbreaking and innovative "transnational" organization [ 11 ] and a leading example of the effectiveness of Track II diplomacy .
During the Cold War , it was claimed that the Pugwash Conference became a front for the Soviet Union, whose agents often managed to weaken Pugwash's critique of the USSR and instead concentrate blame on the United States and the West. [ 12 ] In 1980, the House Permanent Select Committee on Intelligence received a report that the Pugwash Conference was used by Soviet delegates to promote Soviet propaganda. Joseph Rotblat said in his 1998 Bertrand Russell Peace Lecture that there were a few participants in the conferences from the Soviet Union "who were obviously sent to push the party line, but the majority were genuine scientists and behaved as such". [ 13 ]
Following the end of the Cold War, the traditional Pugwash focus on decreasing the salience of nuclear weapons and promoting a world free of nuclear weapons and other weapons of mass destruction addresses the following issue areas: [ 14 ]
The Pugwash movement has also been concerned with environmental issues; as a result of its 1988 meeting in Dagomys, it issued the Dagomys Declaration on Environmental Degradation. [ 15 ]
In 1995, fifty years after the bombing of Nagasaki and Hiroshima , and forty years after the signing of the Russell–Einstein Manifesto, the Pugwash Conferences and Joseph Rotblat were awarded the Nobel Peace Prize jointly "for their efforts to diminish the part played by nuclear arms in international politics and, in the longer run, to eliminate such arms." The Norwegian Nobel committee hoped that awarding the prize to Rotblat and Pugwash would "encourage world leaders to intensify their efforts to rid the world of nuclear weapons." In his acceptance speech, Rotblat quoted a key phrase from the Manifesto: "Remember your humanity."
From the 1965 Pugwash conference came a recommendation to establish the International Foundation for Science "in order to address the stultifying conditions under which younger faculty members in the universities of developing countries were attempting to do research". [ 16 ] The organization gives grants to early-career scientists in low-income countries for work on local water resources and biology. [ 16 ]
As of 2024, 14 individuals have served as Presidents of the Pugwash Conferences.
The Pugwash Conference itself does not have formal membership (although national organisations do [ 21 ] [ 22 ] ). All participants take part in their individual capacities and not as representatives of any organization, institution or government. Anyone who has attended a meeting is considered a "Pugwashite". There are more than 3,500 "Pugwashites" worldwide.
As the birthplace of the Pugwash movement, the Thinkers' Lodge was designated a National Historic Site of Canada in 2008. [ 23 ]
The Jubilee 62nd Pugwash Conference devoted to nuclear disarmament was held in Astana , the capital of Kazakhstan , in 2017. [ 24 ] The conference celebrated the 60th anniversary of the first Pugwash Conference, held in Pugwash, Nova Scotia in 1957. [ 25 ] The theme of the conference was "Confronting New Nuclear Dangers." [ 26 ] The conference agenda focused on strengthening the nuclear test ban and combating terrorism.
The Astana conference working groups included: [ 26 ] | https://en.wikipedia.org/wiki/Pugwash_Conferences_on_Science_and_World_Affairs |
The Pujiang line ( simplified Chinese : 浦江线 ; traditional Chinese : 浦江線 ; pinyin : Pǔjiāng Xiàn ) is an automated, driverless, rubber-tired Shanghai Metro line in the town of Pujiang in the Shanghainese district of Minhang . It was originally conceived as phase 3 of Shanghai Metro line 8 , but afterwards was constructed as a separate line, connecting with line 8 at its southern terminus, Shendu Highway . [ 4 ] The line opened for passenger trial operations on March 31, 2018. [ 5 ] [ 6 ] [ 7 ] It is the first automated, driverless people mover line in the Shanghai Metro , and has 6 stations with a total length of 6.689 kilometres (4.156 mi). [ 4 ] [ 6 ] The people mover was expected to carry 73,000 passengers a day. [ 8 ] The line is colored gray on system maps.
The line is operated by Shanghai Keolis Public Transport Operation & Management Co. Ltd. ( Chinese : 上海申凯公共交通运营管理有限公司 ), a joint venture owned by Keolis and Shanghai Shentong Metro Group for at least five years after opening. [ 9 ] [ 10 ]
There are no plans to extend the line.
The entire operation of the new line is remotely controlled from a central dispatch room. Trains operate using Cityflo 650 communications-based train control (CBTC) from CRRC Puzhen Bombardier Transportation Systems Limited, a joint venture between Bombardier and CRRC Nanjing Puzhen Co., Ltd. Initially, six staff members worked at each APM station, but the operator hopes to reduce that to one or two. [ 14 ]
The Pujiang line uses rubber-tyred Bombardier Innovia APM 300 trains. The trains have 4 cars each, totaling 51 metres (167 ft) in length, with capacity for 566 passengers per train. [ 2 ] There are large windows at each end of the train, allowing passengers to look out the front and rear. The small trains with rubber tires running on concrete tracks can negotiate turning radii as tight as 22 m (72 ft), compared to over 300 m (984 ft) for a typical metro on steel rails. [ 14 ] On 13 January 2017, Bombardier delivered the first of 44 autonomous people movers to Shanghai. [ 1 ] | https://en.wikipedia.org/wiki/Pujiang_line |
In mathematics , a pullback is either of two different, but related processes: precomposition and fiber-product. Its dual is a pushforward .
Precomposition with a function probably provides the most elementary notion of pullback: in simple terms, a function f of a variable y , where y itself is a function of another variable x , may be written as a function of x : f(y(x)) ≡ g(x). This is the pullback of f by the function y .
It is such a fundamental process that it is often passed over without mention.
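A minimal sketch in code, with arbitrary example functions for f and y, shows that this pullback is nothing more than composition in the opposite direction:

```python
def pullback(f, y):
    """Return the pullback of f by y, i.e. the function x -> f(y(x))."""
    return lambda x: f(y(x))

f = lambda y: y ** 2        # f as a function of y
y = lambda x: 3 * x + 1     # y as a function of x

g = pullback(f, y)          # g(x) = f(y(x)) = (3x + 1)^2
print(g(2))                 # 49
```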
However, it is not just functions that can be "pulled back" in this sense. Pullbacks can be applied to many other objects, such as differential forms and their cohomology classes .
The pullback bundle is an example that bridges the notion of a pullback as precomposition, and the notion of a pullback as a Cartesian square . In that example, the base space of a fiber bundle is pulled back, in the sense of precomposition, above. The fibers then travel along with the points in the base space at which they are anchored: the resulting new pullback bundle looks locally like a Cartesian product of the new base space, and the (unchanged) fiber. The pullback bundle then has two projections: one to the base space, the other to the fiber; the product of the two becomes coherent when treated as a fiber product .
The notion of pullback as a fiber-product ultimately leads to the very general idea of a categorical pullback, but it has important special cases: inverse image (and pullback) sheaves in algebraic geometry , and pullback bundles in algebraic topology and differential geometry.
When the pullback is studied as an operator acting on function spaces , it becomes a linear operator , and is known as the transpose or composition operator . Its adjoint is the push-forward, or, in the context of functional analysis , the transfer operator .
The relation between the two notions of pullback can perhaps best be illustrated by sections of fiber bundles: if s is a section of a fiber bundle E over N , and f : M → N , then the pullback (precomposition) f ∗ s = s ∘ f of s with f is a section of the pullback (fiber-product) bundle f ∗ E over M . | https://en.wikipedia.org/wiki/Pullback |
Let ϕ : M → N be a smooth map between smooth manifolds M and N . Then there is an associated linear map from the space of 1-forms on N (the linear space of sections of the cotangent bundle ) to the space of 1-forms on M . This linear map is known as the pullback (by ϕ ), and is frequently denoted by ϕ ∗ . More generally, any covariant tensor field – in particular any differential form – on N may be pulled back to M using ϕ .
When the map ϕ is a diffeomorphism , then the pullback, together with the pushforward , can be used to transform any tensor field from N to M or vice versa. In particular, if ϕ is a diffeomorphism between open subsets of R n and R n , viewed as a change of coordinates (perhaps between different charts on a manifold M ), then the pullback and pushforward describe the transformation properties of covariant and contravariant tensors used in more traditional (coordinate-dependent) approaches to the subject.
The idea behind the pullback is essentially the notion of precomposition of one function with another. However, by combining this idea in several different contexts, quite elaborate pullback operations can be constructed. This article begins with the simplest operations, then uses them to construct more sophisticated ones. Roughly speaking, the pullback mechanism (using precomposition) turns several constructions in differential geometry into contravariant functors .
Let ϕ : M → N be a smooth map between (smooth) manifolds M and N , and suppose f : N → R is a smooth function on N . Then the pullback of f by ϕ is the smooth function ϕ ∗ f on M defined by ( ϕ ∗ f ) ( x ) = f ( ϕ ( x ) ) . Similarly, if f is a smooth function on an open set U in N , then the same formula defines a smooth function on the open set ϕ −1 ( U ) . (In the language of sheaves , pullback defines a morphism from the sheaf of smooth functions on N to the direct image by ϕ of the sheaf of smooth functions on M .)
More generally, if f : N → A is a smooth map from N to any other manifold A , then ( ϕ ∗ f ) ( x ) = f ( ϕ ( x ) ) is a smooth map from M to A .
If E is a vector bundle (or indeed any fiber bundle ) over N and ϕ : M → N is a smooth map, then the pullback bundle ϕ ∗ E is a vector bundle (or fiber bundle ) over M whose fiber over x in M is given by (ϕ ∗ E)_x = E_{ϕ(x)} .
In this situation, precomposition defines a pullback operation on sections of E : if s is a section of E over N , then the pullback section ϕ ∗ s = s ∘ ϕ is a section of ϕ ∗ E over M .
Let Φ: V → W be a linear map between vector spaces V and W (i.e., Φ is an element of L ( V , W ) , also denoted Hom( V , W ) ), and let
F : W × W × ⋯ × W → R
be a multilinear form on W (also known as a tensor – not to be confused with a tensor field – of rank (0, s ) , where s is the number of factors of W in the product). Then the pullback Φ ∗ F of F by Φ is a multilinear form on V defined by precomposing F with Φ. More precisely, given vectors v 1 , v 2 , ..., v s in V , Φ ∗ F is defined by the formula
( Φ ∗ F ) ( v 1 , v 2 , … , v s ) = F ( Φ ( v 1 ) , Φ ( v 2 ) , … , Φ ( v s ) ) ,
which is a multilinear form on V . Hence Φ ∗ is a (linear) operator from multilinear forms on W to multilinear forms on V . As a special case, note that if F is a linear form (or (0,1)-tensor) on W , so that F is an element of W ∗ , the dual space of W , then Φ ∗ F is an element of V ∗ , and so pullback by Φ defines a linear map between dual spaces which acts in the opposite direction to the linear map Φ itself:
Φ : V → W , Φ ∗ : W ∗ → V ∗ .
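A finite-dimensional sketch may make this concrete: if Φ is represented by a matrix A and a bilinear form F on W by a matrix, then the pullback Φ ∗ F has matrix AᵀFA. The matrices below are arbitrary illustrative choices:

```python
import numpy as np

# Pullback of a bilinear form F on W by a linear map Phi: V -> W,
# defined by (Phi*F)(v1, v2) = F(Phi v1, Phi v2).
Phi = np.array([[1.0, 2.0],
                [0.0, 1.0],
                [3.0, 0.0]])      # a linear map R^2 -> R^3
F = np.eye(3)                     # bilinear form on W = R^3 (the dot product)

pulled_back_F = Phi.T @ F @ Phi   # matrix of Phi*F, a bilinear form on V = R^2

v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
# The defining identity holds:
assert np.isclose(v1 @ pulled_back_F @ v2, (Phi @ v1) @ F @ (Phi @ v2))
print(pulled_back_F)
```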
From a tensorial point of view, it is natural to try to extend the notion of pullback to tensors of arbitrary rank, i.e., to multilinear maps on W taking values in a tensor product of r copies of W , i.e., W ⊗ W ⊗ ⋅⋅⋅ ⊗ W . However, elements of such a tensor product do not pull back naturally: instead there is a pushforward operation from V ⊗ V ⊗ ⋅⋅⋅ ⊗ V to W ⊗ W ⊗ ⋅⋅⋅ ⊗ W given by
Φ ∗ ( v 1 ⊗ v 2 ⊗ ⋯ ⊗ v r ) = Φ ( v 1 ) ⊗ Φ ( v 2 ) ⊗ ⋯ ⊗ Φ ( v r ) .
Nevertheless, it follows from this that if Φ is invertible, pullback can be defined using pushforward by the inverse function Φ −1 . Combining these two constructions yields a pushforward operation, along an invertible linear map, for tensors of any rank ( r , s ) .
Let ϕ : M → N be a smooth map between smooth manifolds . Then the differential of ϕ , written ϕ ∗ , dϕ , or Dϕ , is a vector bundle morphism (over M ) from the tangent bundle TM of M to the pullback bundle ϕ ∗ TN . The transpose of ϕ ∗ is therefore a bundle map from ϕ ∗ T ∗ N to T ∗ M , the cotangent bundle of M .
Now suppose that α is a section of T ∗ N (a 1-form on N ). Precomposing α with ϕ gives a pullback section of ϕ ∗ T ∗ N ; applying the above bundle map (pointwise) to this section yields the pullback of α by ϕ , which is the 1-form ϕ ∗ α on M defined by (ϕ ∗ α)_x(X) = α_{ϕ(x)}(dϕ_x(X)) for x in M and X in T_xM .
The construction of the previous section generalizes immediately to tensor bundles of rank (0, s) for any natural number s : a (0, s) tensor field on a manifold N is a section of the tensor bundle on N whose fiber at y in N is the space of multilinear s -forms F : T_yN × ⋯ × T_yN → R . By taking Φ equal to the (pointwise) differential of a smooth map ϕ from M to N , the pullback of multilinear forms can be combined with the pullback of sections to yield a pullback (0, s) tensor field on M . More precisely, if S is a (0, s) -tensor field on N , then the pullback of S by ϕ is the (0, s) -tensor field ϕ ∗ S on M defined by (ϕ ∗ S)_x(X_1, …, X_s) = S_{ϕ(x)}(dϕ_x(X_1), …, dϕ_x(X_s)) for x in M and X_j in T_xM .
A particularly important case of the pullback of covariant tensor fields is the pullback of differential forms . If α is a differential k -form, i.e., a section of the exterior bundle Λ^k(T ∗ N) of (fiberwise) alternating k -forms on TN , then the pullback of α is the differential k -form on M defined by the same formula as in the previous section: (ϕ ∗ α)_x(X_1, …, X_k) = α_{ϕ(x)}(dϕ_x(X_1), …, dϕ_x(X_k)) for x in M and X_j in T_xM .
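The defining formula can be checked numerically in coordinates. The map ϕ (a polar-style change of coordinates) and the 1-form α = −y₂ dy₁ + y₁ dy₂ below are illustrative choices, for which the pullback works out to r² dt:

```python
import numpy as np

# Pullback of a 1-form alpha by phi, computed from the defining formula
# (phi*alpha)_x(X) = alpha_{phi(x)}(dphi_x(X)).

def phi(x):                        # phi(r, t) = (r cos t, r sin t)
    r, t = x
    return np.array([r * np.cos(t), r * np.sin(t)])

def dphi(x):                       # Jacobian of phi at x
    r, t = x
    return np.array([[np.cos(t), -r * np.sin(t)],
                     [np.sin(t),  r * np.cos(t)]])

def alpha(y):                      # components of -y2 dy1 + y1 dy2 as a covector
    return np.array([-y[1], y[0]])

def pullback_alpha(x):             # row covector times Jacobian
    return alpha(phi(x)) @ dphi(x)

x = np.array([2.0, np.pi / 6])
print(pullback_alpha(x))           # [0., 4.] -- i.e. phi*alpha = r^2 dt, here r = 2
```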
The pullback of differential forms has two properties which make it extremely useful.
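First, the pullback is compatible with the wedge product: ϕ ∗ (α ∧ β) = (ϕ ∗ α) ∧ (ϕ ∗ β). Second, it commutes with the exterior derivative: ϕ ∗ (dα) = d(ϕ ∗ α). In particular, the pullback therefore descends to a well-defined map on de Rham cohomology .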
When the map ϕ between manifolds is a diffeomorphism , that is, when it has a smooth inverse, pullback can be defined for vector fields as well as for 1-forms, and thus, by extension, for an arbitrary mixed tensor field on the manifold. The linear map Φ = dϕ_x ∈ GL(T_xM, T_{ϕ(x)}N) can be inverted to give Φ −1 = (dϕ_x) −1 ∈ GL(T_{ϕ(x)}N, T_xM).
A general mixed tensor field will then transform using Φ and Φ −1 according to the tensor product decomposition of the tensor bundle into copies of TN and T ∗ N . When M = N , the pullback and the pushforward describe the transformation properties of a tensor on the manifold M . In traditional terms, the pullback describes the transformation properties of the covariant indices of a tensor ; by contrast, the transformation of the contravariant indices is given by a pushforward .
The construction of the previous section has a representation-theoretic interpretation when ϕ is a diffeomorphism from a manifold M to itself. In this case the derivative dϕ is a section of GL(TM, ϕ ∗ TM) . This induces a pullback action on sections of any bundle associated to the frame bundle GL(M) of M by a representation of the general linear group GL(m) (where m = dim M ).
See Lie derivative . By applying the preceding ideas to the local 1-parameter group of diffeomorphisms defined by a vector field on M , and differentiating with respect to the parameter, a notion of Lie derivative on any associated bundle is obtained.
If ∇ is a connection (or covariant derivative ) on a vector bundle E over N and ϕ is a smooth map from M to N , then there is a pullback connection ϕ ∗ ∇ on ϕ ∗ E over M , determined uniquely by the condition that (ϕ ∗ ∇)_X(ϕ ∗ s) = ϕ ∗ (∇_{dϕ(X)} s). | https://en.wikipedia.org/wiki/Pullback_(differential_geometry) |
A pulley is a wheel on an axle or shaft enabling a taut cable or belt passing over the wheel to move and change direction, or transfer power between itself and a shaft. A sheave or pulley wheel is a pulley using an axle supported by a frame or shell ( block ) to guide a cable or exert force.
A pulley may have a groove or grooves between flanges around its circumference to locate the cable or belt. The drive element of a pulley system can be a rope , cable , belt, or chain .
The earliest evidence of pulleys dates back to Ancient Egypt in the Twelfth Dynasty (1991–1802 BCE) [ 1 ] and Mesopotamia in the early 2nd millennium BCE. [ 2 ] In Roman Egypt , Hero of Alexandria (c. 10–70 CE) identified the pulley as one of six simple machines used to lift weights. [ 3 ] Pulleys are assembled to form a block and tackle in order to provide mechanical advantage to apply large forces. Pulleys are also assembled as part of belt and chain drives in order to transmit power from one rotating shaft to another. [ 4 ] [ 5 ] Plutarch 's Parallel Lives recounts a scene where Archimedes proved the effectiveness of compound pulleys and the block-and-tackle system by using one to pull a fully laden ship towards him as if it was gliding through water. [ 6 ]
A block is a set of pulleys (wheels) assembled so that each pulley rotates independently from every other pulley. Two blocks with a rope attached to one of the blocks and threaded through the two sets of pulleys form a block and tackle . [ 8 ] [ 9 ]
A block and tackle is assembled so one block is attached to the fixed mounting point and the other is attached to the moving load. The ideal mechanical advantage of the block and tackle is equal to the number of sections of the rope that support the moving block.
In the diagram on the right, the ideal mechanical advantage of each of the block and tackle assemblies [ 7 ] shown is as follows:
A rope and pulley system—that is, a block and tackle —is characterised by the use of a single continuous rope to transmit a tension force around one or more pulleys to lift or move a load—the rope may be a light line or a strong cable. This system is included in the list of simple machines identified by Renaissance scientists. [ 10 ] [ 11 ]
If the rope and pulley system does not dissipate or store energy, then its mechanical advantage is the number of parts of the rope that act on the load. This can be shown as follows.
Consider the set of pulleys that form the moving block and the parts of the rope that support this block. If there are p of these parts of the rope supporting the load W, then a force balance on the moving block shows that the tension in each of the parts of the rope must be W/p. This means the input force on the rope is T = W/p. Thus, the block and tackle reduces the input force by the factor p.
The simplest theory of operation for a pulley system assumes that the pulleys and lines are weightless and that there is no energy loss due to friction. It is also assumed that the lines do not stretch.
In equilibrium, the forces on the moving block must sum to zero. In addition the tension in the rope must be the same for each of its parts. This means that the two parts of the rope supporting the moving block must each support half the load.
These are different types of pulley systems:
The mechanical advantage of the gun tackle can be increased by interchanging the fixed and moving blocks so the rope is attached to the moving block and the rope is pulled in the direction of the lifted load. In this case the block and tackle is said to be "rove to advantage." [ 12 ] Diagram 3 shows that now three rope parts support the load W which means the tension in the rope is W/3 . Thus, the mechanical advantage is three.
By adding a pulley to the fixed block of a gun tackle, the direction of the pulling force is reversed, though the mechanical advantage remains the same (Diagram 3a). This is an example of the Luff tackle.
The mechanical advantage of a pulley system can be analysed using free body diagrams which balance the tension force in the rope with the force of gravity on the load. In an ideal system, the massless and frictionless pulleys do not dissipate energy and allow for a change of direction of a rope that does not stretch or wear. In this case, a force balance on a free body that includes the load, W , and n supporting sections of a rope with tension T , yields nT = W .
The ratio of the load to the input tension force is the mechanical advantage MA of the pulley system: [ 13 ] MA = W / T = n .
Thus, the mechanical advantage of the system is equal to the number of sections of rope supporting the load.
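A few lines of code capture the ideal-case bookkeeping (frictionless and massless pulleys and rope; the load value is arbitrary):

```python
# Ideal block and tackle: n rope sections support the moving block,
# so each carries W/n and the required input force is W/n.
def input_force(load, n_sections):
    return load / n_sections

def mechanical_advantage(load, n_sections):
    return load / input_force(load, n_sections)   # always equals n_sections

W = 100.0  # newtons
for n in (1, 2, 3, 4):
    print(n, round(input_force(W, n), 1), mechanical_advantage(W, n))
```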
A belt and pulley system is characterized by two or more pulleys in common to a belt . This allows for mechanical power , torque , and speed to be transmitted across axles. If the pulleys are of differing diameters, a mechanical advantage is realized.
A belt drive is analogous to that of a chain drive ; however, a belt sheave may be smooth (devoid of discrete interlocking members as would be found on a chain sprocket, spur gear , or timing belt) so that the mechanical advantage is approximately given by the ratio of the pitch diameter of the sheaves only, not fixed exactly by the ratio of teeth as with gears and sprockets.
In the case of a drum-style pulley, without a groove or flanges, the pulley often is slightly convex to keep the flat belt centered. It is sometimes referred to as a crowned pulley. Though once widely used on factory line shafts , this type of pulley is still found driving the rotating brush in upright vacuum cleaners , in belt sanders and bandsaws . [ 14 ] Agricultural tractors built up to the early 1950s generally had a belt pulley for a flat belt (which is what Belt Pulley magazine was named after). It has been replaced by other mechanisms with more flexibility in methods of use, such as power take-off and hydraulics .
Just as the diameters of gears (and, correspondingly, their number of teeth) determine a gear ratio and thus the speed increases or reductions and the mechanical advantage that they can deliver, the diameters of pulleys determine those same factors. Cone pulleys and step pulleys (which operate on the same principle, although the names tend to be applied to flat belt versions and V-belt versions, respectively) are a way to provide multiple drive ratios in a belt-and-pulley system that can be shifted as needed, just as a transmission provides this function with a gear train that can be shifted. V-belt step pulleys are the most common way that drill presses deliver a range of spindle speeds.
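The resulting spindle speeds follow directly from the diameter pairs; a short sketch with made-up pulley diameters shows the calculation (belt slip ignored):

```python
# Approximate belt-drive ratio from pitch diameters:
# driven_rpm = driver_rpm * d_driver / d_driven.
def driven_rpm(driver_rpm, d_driver, d_driven):
    return driver_rpm * d_driver / d_driven

motor_rpm = 1725.0
steps = [(50.0, 150.0), (75.0, 125.0), (100.0, 100.0), (125.0, 75.0)]  # mm pairs
for d_driver, d_driven in steps:
    print(f"{d_driver:.0f} -> {d_driven:.0f} mm: "
          f"{driven_rpm(motor_rpm, d_driver, d_driven):7.1f} rpm")
```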
With belts and pulleys, friction is one of the most important forces. Some uses for belts and pulleys involve awkward angles (leading to poor belt tracking and possibly slipping the belt off the pulley) or low belt tension, causing unnecessary slippage of the belt and hence extra wear. To address this, pulleys are sometimes lagged. Lagging is a coating, cover, or wearing surface, often with a textured pattern, applied to a pulley shell, either to extend the life of the shell by providing a replaceable wearing surface or to improve the friction between the belt and the pulley. Notably, drive pulleys are often rubber-lagged (coated with a rubber friction layer) for exactly this reason. [ 15 ] Applying powdered rosin to the belt may increase the friction temporarily, but may shorten the life of the belt. [ 16 ] | https://en.wikipedia.org/wiki/Pulley |
Pullulan is a polysaccharide consisting of maltotriose units, also known as an α-1,4-/α-1,6- glucan . Three glucose units in maltotriose are connected by α-1,4 glycosidic bonds , whereas consecutive maltotriose units are connected to each other by an α-1,6 glycosidic bond . Pullulan is produced from starch by the fungus Aureobasidium pullulans . Pullulan is mainly used by the cell to resist desiccation and predation . The presence of this polysaccharide also facilitates diffusion of molecules both into and out of the cell. [ 1 ]
As an edible, mostly tasteless polymer, the chief commercial use of pullulan is in the manufacture of edible films used in various breath freshener or oral hygiene products such as Listerine Cool Mint of Johnson and Johnson (USA) and Meltz Super Thin Mints of Avery Bio-Tech Private Ltd. (India). Pullulan and HPMC can also be used as vegetarian substitutes for gelatine in drug capsules. As a food additive , it is known by the E number E1204.
Pullulan has also been explored as a natural polymeric biomaterial used to fabricate injectable scaffolding for bone tissue engineering , [ 2 ] cartilage tissue engineering, [ 3 ] and intervertebral disc regeneration. [ 4 ]
This biotechnology article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pullulan |
Pullulan bioconjugates are systems that use pullulan as a scaffold to which biological materials, such as drugs, are attached. Such systems can enhance the delivery of drugs to specific environments or alter the mechanism of delivery: they can release drugs in response to stimuli, provide a more controlled and sustained release, and target the delivery of certain drugs more precisely.
Pullulan is generated by the fungus A. pullulans mainly through the processing of glucose, but it can also be produced from maltose , fructose , galactose , sucrose , and mannose . [ 1 ] In a commercial setting, pullulan is obtained from a strain of A. pullulans that is non-toxic, non-pathogenic, and not genetically modified, which is given a liquid form of starch in a set environment. [ 1 ] The pullulan produced can be modified by conditions such as the nutrients provided, temperature, pH, oxygen content, and other supplements. The microbe needs to be provided with sources of carbon and nitrogen in order to produce pullulan, and the ratio of carbon to nitrogen needs to be precise in order to maximize pullulan production. Higher levels of nitrogen than carbon are required, as excess carbon can decrease the efficiency of the enzymes, while excess nitrogen increases the production of biomass but does not affect pullulan production. [ 1 ] Oxygen is also important for the proliferation of the A. pullulans cells and the production of pullulan. [ 1 ] Further supplements, such as olive oil and Tween 80, can be used to increase the level of pullulan production. [ 1 ]
While the manufacturing conditions of pullulan can be altered in order to increase yield, chemical modifications can also be used to alter the properties of the pullulan. The unmodified structure of pullulan contains nine hydroxyl groups attached to the backbone of the molecule, and these hydroxyl groups can be replaced with other functional groups. [ 1 ] Processes that can modify the functional groups of pullulan include sulfation , esterification , oxidation , etherification , copolymerization , amidification, and others. [ 1 ] Pullulan can be given a negative charge through creating an ester linkage that attaches a carboxylate group to the hydroxyl, which yields a carboxymethyl pullulan. [ 1 ] Pullulan is hydrophilic and can be modified to have hydrophobic functionality by adding a cholesterol group. [ 1 ] The main benefit of the added hydrophobic functionality is that it allows the pullulan to form self-assembling micelles. [ 1 ] Another notable modification is the acetylation of pullulan to create pullulan acetate (PA), which also has hydrophobic functionality. [ 2 ] PA has the benefit of forming self-assembled nanoparticles , which can simplify the manufacturing of certain pullulan bioconjugates. [ 2 ] Pullulan and pullulan derivatives can also be folated in order to improve cancer cell targeting, as the nanoparticle can be endocytosed into cancer cells through folate-mediated endocytosis . [ 3 ]
Pullulan bioconjugate systems can be designed to respond to many different stimuli to enhance the release of the drug at the target tissue. These stimuli include pH, temperature, light (photothermal), electrical, ultrasonic, magnetic, and enzymatic triggers. [ 1 ] pH is often used to target tumor tissues, as the extracellular pH of tumors is more acidic than that of normal cells. [ 3 ]
A pullulan and polydopamine hydrogel loaded with crystal violet demonstrated pH-responsive behavior due to the protonation of the polydopamine , which increased the release of the crystal violet in the acidic environment. [ 4 ] The study showed that at a pH of 7.4, matching a normal cell's extracellular environment, about 60% of the crystal violet was released, compared to 87% at a pH of 5.0. [ 4 ] The use of pH-responsive systems for the treatment of cancer may help overcome drug resistance as well as prevent excess damage to healthy tissue. [ 5 ]
Another pH-responsive pullulan system was formed from pullulan and doxorubicin , where the doxorubicin is attached to the pullulan by hydrazone bonds. [ 6 ] The drug release of the doxorubicin was tested at two pHs, 7.4 and 5, where the hydrazone is stable at 7.4 and cleaves in acidic environments. [ 6 ] The results from this study supported the results from the pullulan and polydopamine study, as doxorubicin was released faster in the acidic environment than at the pH that reflects a normal cell's extracellular environment. [ 6 ]
Temperature can also be used as a trigger to control drug release from pullulan systems. Thermally responsive pullulan systems can be used in conjunction with heat-generating cancer treatments in order to improve the treatment. [ 1 ] Nanoparticles composed of periodate-oxidized carboxymethyl pullulan crosslinked with two Jeffamines were synthesized and demonstrated that the nanoparticle size could be decreased with increased temperature. [ 7 ] The nanoparticles decrease in size with increasing temperature because the increased temperature promotes the hydrophobic interactions of the structure. [ 7 ] Altering the temperature can induce heating or cooling dynamics that are reversible, which allows for unique properties in terms of drug release. [ 3 ] Pullulan can be altered with photosensitizers in order to provide a controlled thermal reaction in a target area. [ 1 ] Spiropyran can be added to pullulan to act as a photosensitizer. [ 3 ]
Electrical stimuli can be used to alter the delivery of drugs through pullulan constructs. A polyacrylamide-graft-pullulan copolymer was synthesized and used for transdermal delivery of rivastigmine tartrate. [ 8 ] In this study, the electric stimulus increased the diffusion rate and effectively acted as a controllable switch for the diffusion rate. [ 8 ] Pullulan systems can be used to enhance ultrasound imaging, as pullulan-graft-poly(carboxybetaine methacrylate) demonstrated the ability to generate carbon dioxide in response to ultrasound, which enhanced the contrast. [ 1 ] Superparamagnetic iron oxide nanoparticles (SPIONs) with magnetic properties have been generated, which showed improved uptake and decreased cytotoxicity. [ 1 ] Enzymes can also be used to trigger drug-release mechanisms; for example, esterase has been used to cleave photosensitizers from pullulan in order to increase the photodynamic reaction. [ 1 ]
As demonstrated in the last example, these stimulus-response mechanisms do not have to be independent; they can be used in combination in order to improve the efficacy of the drug delivery.
When pullulan is modified with a hydrophobic functionality, such as cholesterol, the pullulan derivative forms self-assembled vesicles that can encapsulate a hydrophobic drug. [ 1 ] With the hydrophobic functional group, the pullulan derivative is an amphiphilic molecule, which forms a micelle when in an aqueous environment. This micelle has a hydrophilic exterior formed by the pullulan backbone and a hydrophobic core due to the functional group added to the pullulan. [ 3 ] The nanoparticles formed are spherical, have an average size of 20–30 nanometers according to dynamic light scattering measurements, and can be maintained in physiological conditions. [ 9 ] Cholesteryl pullulan (CHP) is an example of a pullulan derivative that is capable of self-assembly and has been used to deliver anticancer drugs. [ 5 ] The size of the self-assembled nanoparticle can be adjusted by changing the amount of cholesterol attached to the pullulan: the more cholesterol substitutions, the smaller the nanoparticle. [ 3 ] PA and folated PA (FPA) have been created and form self-assembled nanoparticles, which have been used to deliver epirubicin . [ 2 ] Pullulan derivatives have been combined with gold to form self-assembled nanoparticles capable of loading doxorubicin. [ 10 ] Pullulan–dexamethasone bioconjugates have been created which also form self-assembling nanoparticles approximately 400 nanometers in size and have been shown to extend the release of the dexamethasone . [ 11 ]
Pullulan is used as a bioconjugate platform to enhance the delivery of chemotherapeutics , and pullulan derivatives can be created to specifically target cancer cells. For cancer therapeutics, pullulan can encapsulate hydrophobic drugs in self-assembled micelles, can be linked to drugs in the form of a bioconjugate, and can be utilized for its pH-responsive nature. [ 12 ] Cancer drugs that have been used with pullulan include doxorubicin, paclitaxel , epirubicin, mitoxantrone , and 10-hydroxycamptothecin. [ 12 ]
Pullulan derivatives can be folated in order to take advantage of the higher density of folate receptors on cancer cells. [ 5 ] Doxorubicin has been loaded into pullulan micelles and folated micelles for targeted delivery to cancer cells through folate-mediated endocytosis. [ 5 ] Folated pullulan nanoparticles show lower toxicity and higher drug accumulation within the cancer cells. [ 5 ] The pH sensitivity of pullulan also makes it a good candidate for chemotherapeutic delivery, as the pullulan can be altered by the acidic environment of the tumor to provide targeted release. [ 5 ]
Pullulan nanoparticles have also been used to deliver paclitaxel and proved to be stable under different environmental conditions. [ 5 ] Curcumin–pullulan derivatives are effective in targeting hepatocarcinoma cells, as the pullulan increases the solubility of curcumin and therefore allows the cells to properly take up the curcumin. [ 5 ] Pullulan micelles can also be used to deliver genes, such as p53 , in order to suppress tumor development. [ 5 ] The pullulan protects the RNA or DNA from degradation by enzymes within the body, which enables gene therapy for the treatment of cancer. [ 5 ] The addition of ascorbic acid to pullulan bioconjugates has demonstrated antimetastatic properties, which can improve cation-modified pullulan derivatives. [ 5 ] Many factors make pullulan a suitable drug delivery platform for cancer therapeutics, including the available chemical modifications, the pH responsiveness, and the ability of the pullulan to form self-assembled micelles that can shield the therapeutics from the immune system. [ 5 ]
In vitro research has been conducted in which pullulan acetate nanoparticles were modified with folate and then loaded with epirubicin. [ 2 ] This study showed that folate modification of pullulan increased the cytotoxicity of the drug and released the drug at a faster rate than unfolated pullulan acetate. [ 2 ] Another folated pullulan system has been researched, in which pullulan gold nanoparticles were folated and encapsulated doxorubicin. [ 10 ] The pullulan gold nanoparticle provided pH-controlled release of the doxorubicin and demonstrated lower toxicity to non-cancer cells than doxorubicin without a carrier platform. [ 10 ] CHP systems have been developed to deliver protein vaccines and have shown success in generating different degrees of immune responses, mostly with CD4 T cells. [ 3 ] Biotinylated pullulan acetate (BPA) has been created for its vitamin H (biotin) functionality, which helps increase the level of interaction with cancer cells. [ 13 ] The drawback is that increasing the vitamin H content increases the interaction of the nanoparticles with cancer cells but also lowers the concentration of the drug in the nanoparticle due to the altered hydrophobicity. [ 13 ] Modifications to pullulan can be made to enhance the controlled release of drugs, such as pullulan-g-poly(L-lactide), owing to the water-insoluble nature of the polymeric component. [ 13 ] Doxorubicin has been conjugated to pullulan through hydrazone bonds, but was shown to have lower cytotoxic activity than doxorubicin without a delivery platform. [ 6 ]
The ocular space is a difficult area to deliver drugs into, and therefore special drug delivery considerations need to be taken into account. Intravitreal injections are a common method of delivering drugs to the eye. Pullulan systems can be utilized in intravitreal injections to develop long-lasting drugs that require less frequent injections. [ 11 ] One study examined different chemical linkers to pullulan to test their efficacy in extending the release of rhodamine B (RhB). [ 9 ] This study used ether (Pull-Et-RhB), hydrazone (Pull-Hy-RhB), and ester (Pull-Es-RhB) linkers to generate pullulan bioconjugates. [ 9 ] Ex vivo modeling of the drug release indicated that the drug diffuses more slowly in the vitreous humor than in water. [ 9 ] The ether bond was stable at differing pH, while the hydrazone and ester bonds released the drug faster at more acidic pH values reflecting the pH of endosomes. [ 9 ] The Pull-Hy-RhB conjugate demonstrated that this drug delivery system was capable of delivering the drug to the retina, as shown by testing of the blood in the vessels of the retina. [ 9 ]
Further studies have investigated the creation and efficacy of pullulan-dexamethasone bioconjugates for intravitreal injections. [ 14 ] One study synthesized self-assembling pullulan nanoparticles with dexamethasone attached through hydrazone bonds. [ 11 ] This study reiterated that drug release was fast at acidic pH mimicking the pH of lysosomes. [ 11 ] At the pH of the vitreous humor, half of the drug was released in two weeks, whereas at lysosomal pH this took only two days. [ 11 ] Pharmacokinetic analysis of this bioconjugate system revealed that dexamethasone was released in the vitreous humor, where it remained for sixteen days, and that a substantial amount of the bioconjugate left the vitreous humor intact. [ 14 ] Overall, the studies on pullulan bioconjugates for intravitreal injections demonstrate that pullulan can provide sustained release as well as allow the drug to reach the retina.
Pullulan has many other applications. Pullulan can be used as a scaffold material for stem cells, such as mesenchymal stem cells . [ 1 ] Pullulan can be conjugated with photosensitive molecules for use in photodynamic therapy. [ 1 ] Pullulan can be modified to serve as a contrast agent for MRI in multiple ways, such as oxidation, iron-oxide conjugates, and cation conjugates. [ 1 ] Pullulan has been thiolated to generate mucoadhesive properties. [ 15 ] This mucoadhesive system has been further modified by polyaminating pullulan to provide sustained drug release. [ 15 ] A study developed a transdermal pullulan system capable of delivering rivastigmine tartrate in response to external electrical stimuli. [ 8 ] Pullulan systems can be loaded with a plethora of different drugs, including anti-inflammatory, antilipidemic, and antiglycemic drugs. [ 12 ] Pullulan systems can be used to treat heart conditions through the delivery of beta blockers and inhibitors of angiotensin-converting enzyme . [ 12 ] Pullulan can also be utilized for bone disease, as it can deliver bisphosphonates and can help to image bone regeneration through MRI. [ 12 ] | https://en.wikipedia.org/wiki/Pullulan_bioconjugate |
X-ray pulsar-based navigation and timing (XNAV) or simply pulsar navigation is a navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. Similar to GPS , this comparison would allow the vehicle to calculate its position accurately (±5 km). The advantage of using X-ray signals over radio waves is that X-ray telescopes can be made smaller and lighter. [ 1 ] [ 2 ] [ 3 ] Experimental demonstrations were reported in 2018. [ 4 ]
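As a rough illustration of the comparison step: each pulsar's measured-minus-predicted pulse arrival time constrains the vehicle's position along that pulsar's line of sight, and with three or more pulsars the offset can be solved by least squares. The sketch below is only a hedged illustration of this geometry, not any mission's actual algorithm; all values are assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def position_fix(directions, toa_residuals_s):
    """Least-squares position offset (metres) from pulse time-of-arrival residuals.

    directions      : (K, 3) unit vectors toward K catalogued pulsars
    toa_residuals_s : (K,) measured-minus-predicted arrival times, seconds
    Each residual constrains the offset dx along one line of sight: n_k . dx = c * dt_k.
    """
    A = np.asarray(directions, dtype=float)
    b = C * np.asarray(toa_residuals_s, dtype=float)
    dx, *_ = np.linalg.lstsq(A, b, rcond=None)  # needs at least 3 pulsars
    return dx

# Three illustrative pulsars and microsecond-level residuals give km-scale
# offsets, in line with the roughly +/-5 km accuracy quoted above.
dirs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(position_fix(dirs, [1.7e-5, -0.4e-5, 0.9e-5]))
```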
In 2003, the Advanced Concepts Team of ESA studied the feasibility of X-ray pulsar navigation [ 5 ] in collaboration with the Universitat Politecnica de Catalunya in Spain. After the study, interest in XNAV technology within the European Space Agency was consolidated, leading in 2012 to two more detailed studies performed by GMV AEROSPACE AND DEFENCE (ES) and the National Physical Laboratory (UK). [ 6 ]
In 2014, a feasibility study of using pulsars in place of GPS for navigation was carried out by the National Aerospace Laboratory of Amsterdam . The advantages of pulsar navigation would be more available signals than from satnav constellations , the broad range of frequencies available, the impossibility of jamming the signals, and the security of the signal sources from destruction by anti-satellite weapons . [ 13 ]
Among pulsars, millisecond pulsars are good candidates to be space-time references. [ 14 ] In particular, extraterrestrial intelligence might encode rich information using millisecond pulsar signals, and the metadata about XNAV is likely to be encoded by reference to millisecond pulsars. [ 15 ] Finally, it has been suggested that advanced extraterrestrial intelligence might have tweaked or engineered millisecond pulsars for the goals of timing, navigation and communication. [ 16 ] | https://en.wikipedia.org/wiki/Pulsar-based_navigation |
A pulsar clock is a clock which depends on counting radio pulses emitted by pulsars .
The first pulsar clock in the world was installed in St. Catherine's Church, Gdańsk , Poland, in 2011. [ 1 ] It was the first clock to count the time using a signal source outside the Solar System , and represents the second type of clock to measure time using a signal source outside the Earth, after sundials . The pulsar clock consists of a radiotelescope with 16 antennas, which receive signals from six designated pulsars. Digital processing of the pulsar signals is done by an FPGA device. [ 2 ]
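Pulsar periodicities are typically recovered from noisy time-tagged samples by epoch folding, a standard pulsar-timing technique. The sketch below illustrates the general method only; it is not the Gdańsk clock's actual FPGA pipeline, and the bin count is an assumption.

```python
import numpy as np

def fold(sample_times, intensities, period, n_bins=64):
    """Mean intensity versus pulse phase for a trial period (epoch folding)."""
    phases = (np.asarray(sample_times) % period) / period   # phase in [0, 1)
    bins = (phases * n_bins).astype(int)
    totals = np.bincount(bins, weights=intensities, minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    return totals / np.maximum(counts, 1)

# A sharp peak in the folded profile marks the correct trial period; tracking
# the peak's slow phase drift against a local oscillator is what lets a
# receiver discipline its timekeeping to the pulsar signal.
```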
On October 5, 2011, a display showing the exact time of the pulsar clock, as a repeater of Gdańsk's pulsar clock, was installed in the European Parliament in Brussels , Belgium . [ 3 ]
This astronomy -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pulsar_clock |
In fluid dynamics , a flow with periodic variations is known as pulsatile flow , or as Womersley flow . The flow profile was first derived by John R. Womersley (1907–1958) in his work on blood flow in arteries . [ 1 ] The cardiovascular system of chordate animals is a prominent example of pulsatile flow, but pulsatile flow is also observed in engines and hydraulic systems , as a result of rotating mechanisms pumping the fluid.
The pulsatile flow profile is given in a straight pipe by
$$u(r,t) = -\frac{P'_0}{4\mu}\left(R^2 - r^2\right) + \operatorname{Re}\left\{\sum_{n=1}^{N} \frac{i P'_n}{\rho n \omega}\left[1 - \frac{J_0\!\left(\alpha n^{1/2} i^{3/2}\, r/R\right)}{J_0\!\left(\alpha n^{1/2} i^{3/2}\right)}\right] e^{in\omega t}\right\},$$
where: $u(r,t)$ is the axial velocity, $r$ the radial coordinate, $t$ time, $R$ the pipe radius, $\mu$ the dynamic viscosity, $\rho$ the fluid density, $\omega$ the angular frequency of the first harmonic, $P'_n$ the pressure-gradient amplitude of harmonic $n$, $J_0$ the Bessel function of the first kind and order zero, $i$ the imaginary unit, and $N$ the number of harmonics retained.
The pulsatile flow profile changes its shape depending on the Womersley number $\alpha = R\sqrt{\omega\rho/\mu}$.
For $\alpha \lesssim 2$, viscous forces dominate the flow, and the pulse is considered quasi-static with a parabolic profile.
For $\alpha \gtrsim 2$, the inertial forces are dominant in the central core, whereas viscous forces dominate near the boundary layer. Thus, the velocity profile gets flattened, and the phase between the pressure and velocity waves gets shifted towards the core. [ citation needed ]
The Bessel function at its lower limit ($z \to 0$) becomes [ 2 ]
$$J_0(z) \to 1 - \frac{z^2}{4},$$
which converges to the Hagen-Poiseuille flow profile for steady flow for $n = 0$,
or to a quasi-static pulse with parabolic profile when $n \geq 1$ and $\alpha$ is small.
In this case, the function is real, because the pressure and velocity waves are in phase.
The Bessel function at its upper limit ($|z| \to \infty$) becomes [ 2 ]
$$J_0(z) \to \sqrt{\frac{2}{\pi z}}\,\cos\!\left(z - \frac{\pi}{4}\right),$$
which converges, for large $\alpha$, to a plug-like oscillating core bounded by a thin boundary layer at the wall.
This is highly reminiscent of the Stokes layer on an oscillating flat plate, or the skin-depth penetration of an alternating magnetic field into an electrical conductor.
On the surface $u(r{=}R, t) = 0$, but once $\alpha(1 - r/R)$ becomes large the exponential term becomes negligible, and the velocity profile becomes almost constant and independent of the viscosity. Thus, the flow simply oscillates as a plug profile in time according to the pressure gradient,
$$u \approx \operatorname{Re}\left\{\sum_{n=1}^{N} \frac{i P'_n}{\rho n \omega}\, e^{in\omega t}\right\}.$$
However, close to the walls, in a layer of thickness $\mathcal{O}(\alpha^{-1})$, the velocity adjusts rapidly to zero. Furthermore, the phase of the time oscillation varies quickly with position across the layer. The exponential decay of the higher frequencies is faster.
For deriving the analytical solution of this non-stationary flow velocity profile, the following assumptions are taken: [ 3 ] [ 4 ] the fluid is incompressible and Newtonian; the flow is laminar, axisymmetric and fully developed in a rigid, straight circular pipe; and body forces are negligible, so the velocity has only an axial component $u(r,t)$.
Thus, the Navier-Stokes equation and the continuity equation are simplified as
$$\rho \frac{\partial u}{\partial t} = -\frac{\partial p}{\partial x} + \mu \left(\frac{\partial^2 u}{\partial r^2} + \frac{1}{r} \frac{\partial u}{\partial r}\right)$$
and
$$\frac{\partial u}{\partial x} = 0,$$
respectively. The pressure gradient driving the pulsatile flow is decomposed in Fourier series ,
$$\frac{\partial p}{\partial x}(t) = \sum_{n=0}^{N} P'_n\, e^{in\omega t},$$
where $i$ is the imaginary unit , $\omega$ is the angular frequency of the first harmonic (i.e., $n = 1$), and $P'_n$ are the amplitudes of each harmonic $n$. Note that $P'_0$ (standing for $n = 0$) is the steady-state pressure gradient, whose sign is opposed to that of the steady-state velocity (i.e., a negative pressure gradient yields positive flow). Similarly, the velocity profile is also decomposed in a Fourier series in phase with the pressure gradient, because the fluid is incompressible,
$$u(r,t) = \sum_{n=0}^{N} U_n(r)\, e^{in\omega t},$$
where $U_n$ are the amplitudes of each harmonic of the periodic function, and the steady component ($n = 0$) is simply Poiseuille flow :
$$U_0(r) = -\frac{P'_0}{4\mu}\left(R^2 - r^2\right).$$
Thus, the Navier-Stokes equation for each harmonic reads as
$$i \rho n \omega\, U_n = -P'_n + \mu \left(\frac{\mathrm{d}^2 U_n}{\mathrm{d}r^2} + \frac{1}{r} \frac{\mathrm{d}U_n}{\mathrm{d}r}\right).$$
With the boundary conditions satisfied, the general solution of this ordinary differential equation for the oscillatory part ($n \geq 1$) is
$$U_n(r) = A_n J_0\!\left(\Lambda_n \frac{r}{R}\right) + B_n Y_0\!\left(\Lambda_n \frac{r}{R}\right) + \frac{i P'_n}{\rho n \omega},$$
where $J_0(\cdot)$ is the Bessel function of the first kind and order zero, $Y_0(\cdot)$ is the Bessel function of the second kind and order zero, $A_n$ and $B_n$ are arbitrary constants, $\alpha = R\sqrt{\omega\rho/\mu}$ is the dimensionless Womersley number , and $\Lambda_n = \alpha\, n^{1/2}\, i^{3/2}$ is used for simplification. The axisymmetric boundary condition ($\partial U_n/\partial r\,|_{r=0} = 0$) requires $B_n = 0$, because $Y_0$ and its derivative diverge at the axis. Next, the wall no-slip boundary condition ($U_n(R) = 0$) yields $A_n = -\frac{i P'_n}{\rho n \omega} \frac{1}{J_0(\Lambda_n)}$. Hence, the amplitude of the velocity profile of harmonic $n$ becomes
$$U_n(r) = \frac{i P'_n}{\rho n \omega}\left[1 - \frac{J_0\!\left(\Lambda_n\, r/R\right)}{J_0(\Lambda_n)}\right].$$
The velocity profile itself is obtained by taking the real part of the complex function resulting from the summation of all harmonics of the pulse,
$$u(r,t) = -\frac{P'_0}{4\mu}\left(R^2 - r^2\right) + \operatorname{Re}\left\{\sum_{n=1}^{N} U_n(r)\, e^{in\omega t}\right\}.$$
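Since SciPy's Bessel function of the first kind accepts complex arguments, the profile above can be evaluated directly. The following is a minimal, illustrative sketch; the parameter values are assumptions, not taken from the cited references.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_v; accepts complex z

def womersley_velocity(r, t, R, rho, mu, omega, P):
    """Axial velocity u(r, t) in a rigid straight pipe, summing the harmonics above.

    P lists the complex pressure-gradient amplitudes P'_n; P[0] is the steady
    component (sign convention: a negative P[0] yields positive flow).
    """
    alpha = R * np.sqrt(omega * rho / mu)                  # Womersley number
    u = -P[0] / (4.0 * mu) * (R**2 - r**2)                 # steady Poiseuille term
    for n in range(1, len(P)):
        lam = alpha * np.sqrt(n) * 1j**1.5                 # Lambda_n = alpha n^(1/2) i^(3/2)
        U_n = (1j * P[n] / (rho * n * omega)) * (1.0 - jv(0, lam * r / R) / jv(0, lam))
        u = u + np.real(U_n * np.exp(1j * n * omega * t))  # real part of harmonic n
    return u

# Illustrative blood-like values: 5 mm radius, 1 Hz first harmonic (alpha ~ 6.9).
R, rho, mu, omega = 5e-3, 1050.0, 3.5e-3, 2.0 * np.pi
P = [-1000.0, 500.0 + 0.0j]                                # P'_0, P'_1 in Pa/m
print(womersley_velocity(np.linspace(0.0, R, 6), 0.0, R, rho, mu, omega, P))
```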
Flow rate is obtained by integrating the velocity field over the cross-section. Since
$$\frac{\mathrm{d}}{\mathrm{d}z}\left[z\, J_1(z)\right] = z\, J_0(z),$$
then
$$Q(t) = \int_0^R u\, 2\pi r\, \mathrm{d}r = -\frac{\pi R^4 P'_0}{8\mu} + \operatorname{Re}\left\{\sum_{n=1}^{N} \frac{i \pi R^2 P'_n}{\rho n \omega}\left[1 - \frac{2}{\Lambda_n} \frac{J_1(\Lambda_n)}{J_0(\Lambda_n)}\right] e^{in\omega t}\right\}.$$
To compare the shape of the velocity profile, it can be assumed that the velocity factors into the instantaneous cross-sectional mean velocity multiplied by a radial shape function. [ 5 ] Note that this formulation ignores inertial effects. The velocity profile approximates a parabolic profile for low Womersley numbers, or a plug profile for high Womersley numbers.
For straight pipes, wall shear stress is
$$\tau_w = \mu \left.\frac{\partial u}{\partial r}\right|_{r=R}.$$
The derivative of a Bessel function is
$$J_0'(z) = -J_1(z).$$
Hence, using the velocity amplitudes derived above,
$$\tau_w(t) = \frac{P'_0 R}{2} + \operatorname{Re}\left\{\sum_{n=1}^{N} P'_n\, \frac{R}{\Lambda_n}\, \frac{J_1(\Lambda_n)}{J_0(\Lambda_n)}\, e^{in\omega t}\right\}.$$
If the pressure gradient $P'_n$ is not measured, it can still be obtained by measuring the velocity at the centre line. The measured velocity has only the real part of the full expression, in the form of
$$\tilde{u}(t) = \sum_{n} |\tilde{u}_n| \cos(n\omega t + \phi_n).$$
Noting that $J_0(0) = 1$, the full physical expression becomes
$$u(0,t) = \operatorname{Re}\left\{\sum_{n} \frac{i P'_n}{\rho n \omega}\left[1 - \frac{1}{J_0(\Lambda_n)}\right] e^{in\omega t}\right\}$$
at the centre line. The measured velocity is compared with the full expression by applying some properties of complex numbers. For any product of complex numbers ($C = AB$), the amplitude and phase have the relations $|C| = |A||B|$ and $\phi_C = \phi_A + \phi_B$, respectively. Hence,
$$|P'_n| = \rho n \omega\, \frac{|\tilde{u}_n|}{\left|1 - 1/J_0(\Lambda_n)\right|}$$
and
$$\phi_{P'_n} = \phi_n - \arg\!\left(i\left[1 - \frac{1}{J_0(\Lambda_n)}\right]\right),$$
which finally yield the amplitude and phase of each harmonic of the pressure gradient. | https://en.wikipedia.org/wiki/Pulsatile_flow |
Pulsatile secretion is a biochemical phenomenon observed in a wide variety of cell and tissue types, in which chemical products are secreted in a regular temporal pattern. The most common cellular products observed to be released in this manner are intercellular signaling molecules such as hormones or neurotransmitters . Examples of hormones that are secreted in a pulsatile manner include insulin , thyrotropin , TRH , gonadotropin-releasing hormone (GnRH) and growth hormone (GH). In the nervous system, pulsatility is observed in oscillatory activity from central pattern generators . In the heart, pacemakers are able to work and secrete in a pulsatile manner. A pulsatile secretion pattern is critical to the function of many hormones in order to maintain the delicate homeostatic balance necessary for essential life processes, such as development and reproduction . Variation of the concentration at a certain frequency can be critical to hormone function, as evidenced by the case of GnRH agonists , which cause functional inhibition of the receptor for GnRH due to profound downregulation in response to constant (tonic) stimulation. Pulsatility may function to sensitize target tissues to the hormone of interest and upregulate receptors, leading to improved responses. This heightened response may have served to improve the animal's fitness in its environment and promote its evolutionary retention.
Pulsatile secretion in its various forms is observed in:
Nervous system control over hormone release is based in the hypothalamus , from which the neurons that populate the paraventricular and arcuate nuclei originate. [ 1 ] These neurons project to the median eminence, where they secrete releasing hormones into the hypophysial portal system connecting the hypothalamus with the pituitary gland . There, they dictate endocrine function via the four HPG axes. [ 1 ] Recent studies have begun to offer evidence that many pituitary hormones observed to be released episodically are preceded by similarly pulsatile secretion of their associated releasing hormone from the hypothalamus. Novel research into the cellular mechanisms associated with pituitary hormone pulsatility, such as that observed for luteinizing hormone (LH) and follicle-stimulating hormone (FSH), has indicated similar pulses of gonadotropin-releasing hormone (GnRH) into the hypophyseal vessels. [ 2 ] [ 3 ]
LH is released from the pituitary gland along with FSH in response to GnRH release into the hypophyseal portal system. [ 4 ] Pulsatile GnRH release causes pulsatile LH and FSH release to occur, which modulates and maintains appropriate levels of bioavailable gonadal hormone— testosterone in males and estradiol in females—subject to the requirements of a superior feedback loop. [ 3 ] In females the level of LH is typically 1–20 IU/L during the reproductive period, and it is estimated to be 1.8–8.6 IU/L in males over 18 years of age. [ 5 ] [ 6 ] [ 7 ]
Regular pulses of glucocorticoids, mainly cortisol in the case of humans, are released from the adrenal cortex following a circadian pattern, in addition to their release as part of the stress response . [ 8 ] [ 9 ] Cortisol release follows a high frequency of pulses forming an ultradian rhythm , with amplitude being the primary variation in its release, so that the signal is amplitude modulated . [ 8 ] Glucocorticoid pulsatility has been observed to follow a circadian rhythm, with the highest levels observed before waking and before anticipated mealtimes. [ 8 ] [ 9 ] This pattern in amplitude of release is observed to be consistent across vertebrates. [ 9 ] Studies done in humans, rats, and sheep have also observed a similar circadian pattern of release of adrenocorticotropin (ACTH) shortly preceding the pulse in the resulting corticosteroid. [ 8 ] It is currently hypothesized that the observed pulsatility of ACTH and glucocorticoids is driven via pulsatility of corticotropin-releasing hormone (CRH); however, few data exist to support this due to the difficulty of measuring CRH. [ 8 ]
The secretion pattern of thyrotropin (TSH) is shaped by infradian , circadian and ultradian rhythms. Infradian rhythms are mainly represented by circannual variation mirroring the seasonality of thyroid function. [ 10 ] Circadian rhythms lead to peak ( acrophase ) secretion around midnight and nadir concentrations around noon and in the early afternoon. [ 11 ] [ 12 ] A similar pattern is observed for triiodothyronine , however with a phase shift. [ 12 ] Pulsatile release contributes to the ultradian rhythm of TSH concentration, with about 10 pulses per 24 hours. [ 13 ] [ 14 ] [ 15 ] The amplitude of the circadian and ultradian rhythms is reduced in severe non-thyroidal illness syndrome (TACITUS). [ 16 ] [ 17 ]
Contemporary theories assume that autocrine and paracrine (ultrashort) feedback mechanisms controlling TSH secretion within the anterior pituitary gland are a major factor contributing to the evolution of its pulsatility. [ 18 ] [ 19 ] [ 20 ]
Pulsatile insulin secretion from individual beta cells is driven by oscillation of the calcium concentration in the cells. In beta cells lacking contact (i.e. outside islets of Langerhans ), the periodicity of these oscillations is rather variable (2–10 min). However, within an islet of Langerhans, the oscillations become synchronized by electrical coupling between closely located beta cells that are connected by gap junctions , and the periodicity is more uniform (3–6 min). [ 21 ] In addition to gap junctions, pulse coordination is managed by ATP signaling. Alpha and beta cells in the pancreas also secrete factors in a similar pulsatile manner. [ 22 ] | https://en.wikipedia.org/wiki/Pulsatile_secretion |
Pulsation reactor technology is a thermal procedure for manufacturing fine powders with precisely defined properties.
Pulsation reactor technology is a thermal procedure whose special functional principle gives rise to distinctive reaction parameters and a distinctive reaction medium, ultimately leading to different property parameters in terms of surface, reactivity , homogeneity and particle size of the powder material.
The technology has proven particularly effective in the manufacture of ceramic and submicroscale powders, as well as in the production of highly active catalysts . Also, simple oxides such as zirconium oxide with doping elements or mixed oxides like spinel can be produced in the pulsation reactor.
A British scientist called B. Higgins discovered the phenomenon of the pulsating flame in 1777. The phenomenon was described in specialist literature as the “ singing flame ”. However, no suitable application was found until 1930. Paul Schmidt was the first to employ the pulsating flame with the invention of the ARGUS-Schmidt pipe. Pulsating combustion was also used to generate hot gas for heating purposes and to fire boilers.
The principle was tested in the eighties at the SKET Institute in Weimar to determine the suitability of pulsating combustion as a unit for performing thermal, material-modifying processes. The unit was already being referred to as a pulsation reactor by the Institute at that time. As well as the process of cement clinker firing, the manufacture of polishing agents from iron oxalate for the optical industry and the manufacture of surface-active catalyst substrates from gibbsite were also investigated.
Pulsation reactor technology came to the fore from the nineties through its use in environmental technology, particularly in sludge drying and the regeneration of resin-bonded foundry sands. From 2000 the pulsation reactor was used to produce catalytic powders on an industrial scale.
The principle of pulsating combustion was developed over the years by the company IBU-tec advanced materials AG (which emerged from the SKET Institute and still exists today), which finally tested and commissioned another test facility in 2008. Thanks to the continuous optimisation of the reactors , it was now possible to use an oxidising , inert or reducing hot gas atmosphere to treat materials as required. It also emerged that the improved plant was particularly suitable for manufacturing fine particles and catalytic powders.
Today pulsation reactor technology has become established in chemical process engineering for manufacturing active particles with microstructural properties.
Fundamentally, a pulsation reactor can be described as a periodically transient tube-type reactor that can be used to thermally treat gas-borne matter. The pulsating flow of hot gas is generated within a hot gas generator in the reactor by burning natural gas or hydrogen with ambient air. The hot gas flows through the so-called “resonance tube” into which reactants in powder, liquid or gas form can be added. The reactant is treated by hot gas flowing through the resonance tube and this process ends through suitable cooling. The finished product is separated in a cleanable filter. The product can be removed throughout the ongoing process using a sluice system and collected in barrels or big bags . The risk of the product contaminating the environment can be completely excluded through the vacuum present in the reactor, including the filter.
An almost plug-like flow with an almost constant temperature across the pipe diameter is generated in the resonance tube (the treatment area for the reactant ) by the pulsating flow of hot gas. This plug-shaped flow results in a narrow residence time distribution . Furthermore, the pulsating hot gas flow results in an increased convective heat and mass transfer to and/or from the particles.
Hot gas can be generated in two different ways. Either the hot gas generator works with a high level of excess air (λ ≥ 2) or the hot gas atmosphere can be generated with little oxygen or none at all. The hot gas temperatures in the pulsation reactor range from 250 °C to 1,350 °C (expansion to higher temperatures is in progress). However, the actual treatment temperature may differ significantly from these values after the reactant has been added. The necessary treatment temperature can be determined through systematic experiments with temperature variation.
In addition to the treatment temperature and the type of hot gas atmosphere, pulsation reactors also provide the option of adjusting the frequency and amplitude of the pulsation (i.e. the spatially oscillating flow of hot gas) according to the material to be treated, without changing the geometry of the plant.
The pulsating flow of hot gas in the pulsation reactor enables very high heating rates and a significantly increased transfer of heat from the hot gas to the particle in the thermal process. This is beneficial for determining a specific particle size, surface condition and phase composition.
The use of combustible reactants is not essential with the pulsation reactor. Both combustible and non-combustible reactants can be used in it.
The even temperature distribution in the reactor and the narrow residence time distribution prevent the formation of hard aggregates whilst allowing the homogeneous treatment of the material.
The temperature range covered by the pulsation reactor is considerably higher than in spray dryers , for example, so that gentle drying is only possible to a certain extent but a combination of drying and calcination is feasible. | https://en.wikipedia.org/wiki/Pulsation_reactor |
A pulse-Doppler radar is a radar system that determines the range to a target using pulse-timing techniques, and uses the Doppler effect of the returned signal to determine the target object's velocity. It combines the features of pulse radars and continuous-wave radars , which were formerly separate due to the complexity of the electronics .
The first operational pulse-Doppler radar was in the CIM-10 Bomarc , an American long range supersonic missile powered by ramjet engines, and which was armed with a W40 nuclear weapon to destroy entire formations of attacking enemy aircraft. [ 1 ] Pulse-Doppler systems were first widely used on fighter aircraft starting in the 1960s. Earlier radars had used pulse-timing in order to determine range and the angle of the antenna (or similar means) to determine the bearing. However, this only worked when the radar antenna was not pointed down; in that case the reflection off the ground overwhelmed any returns from other objects. As the ground moves at the same speed but opposite direction of the aircraft, Doppler techniques allow the ground return to be filtered out, revealing aircraft and vehicles. This gives pulse-Doppler radars " look-down/shoot-down " capability. A secondary advantage in military radar is to reduce the transmitted power while achieving acceptable performance for improved safety of stealthy radar. [ 2 ]
Pulse-Doppler techniques also find widespread use in meteorological radars , allowing the radar to determine wind speed from the velocity of any precipitation in the air. Pulse-Doppler radar is also the basis of synthetic aperture radar used in radar astronomy , remote sensing and mapping. In air traffic control , they are used for discriminating aircraft from clutter. Besides the above conventional surveillance applications, pulse-Doppler radar has been successfully applied in healthcare, such as fall risk assessment and fall detection, for nursing or clinical purposes. [ 3 ]
The earliest radar systems failed to operate as expected. The reason was traced to Doppler effects that degrade performance of systems not designed to account for moving objects. Fast-moving objects cause a phase-shift on the transmit pulse that can produce signal cancellation. Doppler has maximum detrimental effect on moving target indicator systems, which must use reverse phase shift for Doppler compensation in the detector.
Doppler weather effects (precipitation) were also found to degrade conventional radar and moving target indicator radar, which can mask aircraft reflections. This phenomenon was adapted for use with weather radar in the 1950s after declassification of some World War II systems.
Pulse-Doppler radar was developed during World War II to overcome limitations by increasing pulse repetition frequency . This required the development of the klystron , the traveling wave tube , and solid state devices. Early pulse-Doppler systems were incompatible with other high-power microwave amplification devices that are not coherent , but more sophisticated techniques were developed that record the phase of each transmitted pulse for comparison to returned echoes.
Early examples of military systems include the AN/SPG-51 B, developed during the 1950s specifically for the purpose of operating in hurricane conditions with no performance degradation.
The Hughes AN/ASG-18 Fire Control System was a prototype airborne radar/combination system for the planned North American XF-108 Rapier interceptor aircraft for the United States Air Force, and later for the Lockheed YF-12 . The US's first pulse-Doppler radar, [ 4 ] the system had look-down/shoot-down capability and could track one target at a time.
It became possible to use pulse-Doppler radar on aircraft after digital computers were incorporated in the design. Pulse-Doppler provided look-down/shoot-down capability to support air-to-air missile systems in most modern military aircraft by the mid 1970s.
Pulse-Doppler systems measure the range to objects by measuring the elapsed time between sending a pulse of radio energy and receiving a reflection of the object. Radio waves travel at the speed of light , so the distance to the object is the elapsed time multiplied by the speed of light, divided by two – there and back.
Pulse-Doppler radar is based on the Doppler effect , where movement in range produces a frequency shift on the signal reflected from the target:
$$\text{Doppler frequency} = \frac{2 \times \text{transmit frequency} \times \text{radial velocity}}{C}.$$
Radial velocity is essential for pulse-Doppler radar operation. As the reflector moves between each transmit pulse, the returned signal has a phase difference, or phase shift , from pulse to pulse. This causes the reflector to produce Doppler modulation on the reflected signal.
Pulse-Doppler radars exploit this phenomenon to improve performance.
The amplitude of the successively returning pulse from the same scanned volume is
$$I = I_0 \sin\!\left(\frac{4\pi (x_0 + v \Delta t)}{\lambda}\right) = I_0 \sin(\Theta_0 + \Delta\Theta),$$
where $x_0$ is the initial range to the reflector, $v$ its radial velocity, $\lambda$ the transmitted wavelength, and $\Delta t$ the time between pulses.
So
$$\Delta\Theta = \frac{4\pi v \Delta t}{\lambda}.$$
This allows the radar to separate the reflections from multiple objects located in the same volume of space, using a spread spectrum to segregate different signals:
$$v = \text{target speed} = \frac{\lambda\, \Delta\Theta}{4\pi\, \Delta t},$$
where $\Delta\Theta$ is the phase shift induced by range motion.
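A quick numeric check of these relations, with illustrative values (a hypothetical 10 GHz transmitter and a 10 kHz PRF):

```python
import math

C = 3.0e8                                      # speed of light, m/s

def target_range(delay_s):
    return C * delay_s / 2                     # round trip, hence the factor of two

def doppler_frequency(f_tx, v_radial):
    return 2 * f_tx * v_radial / C             # Hz

def speed_from_phase(wavelength, d_theta, dt):
    return wavelength * d_theta / (4 * math.pi * dt)

f_tx = 10e9                                    # 10 GHz transmit frequency
lam = C / f_tx                                 # 3 cm wavelength
prt = 1e-4                                     # pulse repetition time (10 kHz PRF)
print(target_range(100e-6))                    # 100 us delay -> 15000.0 m
print(doppler_frequency(f_tx, 75.0))           # 75 m/s -> 5000.0 Hz
d_theta = 4 * math.pi * 75.0 * prt / lam       # pulse-to-pulse phase shift for 75 m/s
print(speed_from_phase(lam, d_theta, prt))     # recovers 75.0 m/s
# A phase advance beyond 2*pi aliases, which is why the ambiguity resolution
# described later is needed.
```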
Rejection speed is selectable on pulse-Doppler aircraft-detection systems so nothing below that speed will be detected. A one degree antenna beam illuminates millions of square feet of terrain at 10 miles (16 km) range, and this produces thousands of detections at or below the horizon if Doppler is not used.
Pulse-Doppler radar uses the following signal processing criterion to exclude unwanted signals from slow-moving objects, also known as clutter rejection. [ 5 ] Rejection velocity is usually set just above the prevailing wind speed (10 to 100 mph or 20 to 160 km/h). The velocity threshold is much lower for weather radar . [ 6 ]
$$\left|\frac{\text{Doppler frequency} \times C}{2 \times \text{transmit frequency}}\right| > \text{velocity threshold}.$$
In airborne pulse-Doppler radar, the velocity threshold is offset by the speed of the aircraft relative to the ground:
$$\left|\frac{\text{Doppler frequency} \times C}{2 \times \text{transmit frequency}} - \text{ground speed} \times \cos\Theta\right| > \text{velocity threshold},$$
where $\Theta$ is the angle offset between the antenna position and the aircraft flight trajectory.
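A sketch of this rejection test, with illustrative numbers:

```python
import math

C = 3.0e8

def keep_detection(f_doppler, f_tx, v_threshold, ground_speed=0.0, theta_rad=0.0):
    """True when an echo's ground-referenced radial speed exceeds the threshold.

    ground_speed and theta_rad implement the airborne offset above; theta is
    the angle between the antenna direction and the flight trajectory.
    """
    v_radial = f_doppler * C / (2 * f_tx)
    return abs(v_radial - ground_speed * math.cos(theta_rad)) > v_threshold

# Ground radar, 15 m/s threshold (just above prevailing wind): a 75 m/s echo passes.
print(keep_detection(5000.0, 10e9, 15.0))                              # True
# Airborne radar at 200 m/s looking 30 degrees off the flight path: main-beam
# ground clutter appears at ~173 m/s of apparent radial speed and is rejected.
print(keep_detection(11547.0, 10e9, 15.0, 200.0, math.radians(30.0)))  # False
```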
Surface reflections appear in almost all radar. Ground clutter generally appears in a circular region within a radius of about 25 miles (40 km) near ground-based radar. This distance extends much further in airborne and space radar. Clutter results from radio energy being reflected from the earth surface, buildings, and vegetation. Clutter includes weather in radar intended to detect and report aircraft and spacecraft. [ 7 ]
Clutter creates a vulnerability region in pulse-amplitude time-domain radar . Non-Doppler radar systems cannot be pointed directly at the ground due to excessive false alarms, which overwhelm computers and operators. Sensitivity must be reduced near clutter to avoid overload. This vulnerability begins in the low-elevation region several beam widths above the horizon, and extends downward. This also exists throughout the volume of moving air associated with weather phenomenon.
Pulse-Doppler radar corrects this as follows.
Clutter rejection capability of about 60 dB is needed for look-down/shoot-down capability, and pulse-Doppler is the only strategy that can satisfy this requirement. This eliminates vulnerabilities associated with the low-elevation and below-horizon environment.
Pulse compression and moving target indicator (MTI) provide up to 25 dB sub-clutter visibility. An MTI antenna beam is aimed above the horizon to avoid an excessive false alarm rate, which renders systems vulnerable. Aircraft and some missiles exploit this weakness using a technique called flying below the radar to avoid detection ( nap-of-the-earth ). This flying technique is ineffective against pulse-Doppler radar.
Pulse-Doppler provides an advantage when attempting to detect missiles and low observability aircraft flying near terrain, sea surface, and weather.
Audible Doppler and target size support passive vehicle type classification when identification friend or foe is not available from a transponder signal . Medium pulse repetition frequency (PRF) reflected microwave signals fall between 1,500 and 15,000 cycles per second, which is audible. This means a helicopter sounds like a helicopter, a jet sounds like a jet, and propeller aircraft sound like propellers. Aircraft with no moving parts produce a tone. The actual size of the target can be calculated using the audible signal. [ citation needed ]
Ambiguity processing is required when the target range exceeds the maximum unambiguous range set by the PRF, which increases scan time.
Scan time is a critical factor for some systems because vehicles moving at or above the speed of sound can travel one mile (1.6 km) every few seconds, like the Exocet , Harpoon , Kitchen , and air-to-air missiles . The maximum time to scan the entire volume of the sky must be on the order of a dozen seconds or less for systems operating in that environment.
Pulse-Doppler radar by itself can be too slow to cover the entire volume of space above the horizon unless fan beam is used. This approach is used with the AN/SPS 49(V)5 Very Long Range Air Surveillance Radar, which sacrifices elevation measurement to gain speed. [ 9 ]
Pulse-Doppler antenna motion must be slow enough so that all the return signals from at least 3 different PRFs can be processed out to the maximum anticipated detection range. This is known as dwell time . [ 10 ] Antenna motion for pulse-Doppler must be as slow as radar using MTI .
Search radar that include pulse-Doppler are usually dual mode because best overall performance is achieved when pulse-Doppler is used for areas with high false alarm rates (horizon or below and weather), while conventional radar will scan faster in free-space where false alarm rate is low (above horizon with clear skies).
The antenna type is an important consideration for multi-mode radar because undesirable phase shift introduced by the radar antenna can degrade performance measurements for sub-clutter visibility.
The signal processing enhancement of pulse-Doppler allows small high-speed objects to be detected in close proximity to large slow moving reflectors. To achieve this, the transmitter must be coherent and should produce low phase noise during the detection interval, and the receiver must have large instantaneous dynamic range .
Pulse-Doppler signal processing also includes ambiguity resolution to identify true range and velocity.
The received signals from multiple PRF are compared to determine true range using the range ambiguity resolution process.
The received signals are also compared using the frequency ambiguity resolution process.
The range resolution is the minimal range separation between two objects traveling at the same speed before the radar can detect two discrete reflections:
$$\text{range resolution} = \frac{C}{\text{PRF} \times (\text{number of samples between transmit pulses})}.$$
In addition to this sampling limit, the duration of the transmitted pulse could mean that returns from two targets will be received simultaneously from different parts of the pulse.
The velocity resolution is the minimal radial velocity difference between two objects traveling at the same range before the radar can detect two discrete reflections:
$$\text{velocity resolution} = \frac{C \times \text{PRF}}{2 \times \text{transmit frequency} \times \text{filter size in transmit pulses}}.$$
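Both formulas in code, with illustrative numbers (a hypothetical 10 kHz PRF and 10 GHz carrier):

```python
C = 3.0e8

def range_resolution(prf, samples_between_pulses):
    return C / (prf * samples_between_pulses)

def velocity_resolution(prf, f_tx, filter_size):
    return C * prf / (2 * f_tx * filter_size)

prf, f_tx = 10e3, 10e9                         # 10 kHz medium PRF, 10 GHz carrier
print(range_resolution(prf, 1000))             # 1000 range samples -> 30.0 m
print(velocity_resolution(prf, f_tx, 1024))    # 1024-point filter -> ~0.146 m/s
```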
Pulse-Doppler radar has special requirements that must be satisfied to achieve acceptable performance.
Pulse-Doppler typically uses medium pulse repetition frequency (PRF) from about 3 kHz to 30 kHz. The range between transmit pulses is 5 km to 50 km.
Range and velocity cannot be measured directly using medium PRF, and ambiguity resolution is required to identify true range and speed. Doppler signals are generally above 1 kHz, which is audible, so audio signals from medium-PRF systems can be used for passive target classification.
Radar systems require angular measurement. Transponders are not normally associated with pulse-Doppler radar, so sidelobe suppression is required for practical operation. [ 11 ] [ 12 ]
Tracking radar systems use angle error to improve accuracy by producing measurements perpendicular to the radar antenna beam. Angular measurements are averaged over a span of time and combined with radial movement to develop information suitable to predict target position for a short time into the future.
The two angle error techniques used with tracking radar are monopulse and conical scan .
Pulse-Doppler radar requires a coherent oscillator with very little noise. Phase noise reduces sub-clutter visibility performance by producing apparent motion on stationary objects.
Cavity magnetron and crossed-field amplifier are not appropriate because noise introduced by these devices interfere with detection performance. The only amplification devices suitable for pulse-Doppler are klystron , traveling wave tube , and solid state devices.
Pulse-Doppler signal processing introduces a phenomenon called scalloping. The name is associated with a series of holes that are scooped-out of the detection performance.
Scalloping for pulse-Doppler radar involves blind velocities created by the clutter rejection filter. Every volume of space must be scanned using 3 or more different PRF. A two PRF detection scheme will have detection gaps with a pattern of discrete ranges, each of which has a blind velocity.
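The blind velocities arise because a Doppler shift that aliases onto a multiple of the PRF is removed along with the clutter, so v_blind = k × wavelength × PRF / 2. The sketch below uses illustrative numbers to show why staggered PRFs are needed:

```python
C = 3.0e8

def blind_speeds(f_tx, prf, k_max=5):
    """Radial speeds whose Doppler shift aliases onto a multiple of the PRF."""
    wavelength = C / f_tx
    return [k * wavelength * prf / 2 for k in range(1, k_max + 1)]

f_tx = 10e9                        # 10 GHz -> 3 cm wavelength
print(blind_speeds(f_tx, 8e3))     # [120.0, 240.0, 360.0, 480.0, 600.0] m/s
print(blind_speeds(f_tx, 10e3))    # [150.0, 300.0, 450.0, 600.0, 750.0] m/s
# A 240 m/s target is blind at the 8 kHz PRF but visible at 10 kHz, while
# 600 m/s is blind at both PRFs -- hence the need for a third PRF.
```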
Ringing artifacts pose a problem with search, detection, and ambiguity resolution in pulse-Doppler radar.
Ringing is reduced in two ways.
First, the shape of the transmit pulse is adjusted to smooth the leading edge and trailing edge so that RF power is increased and decreased without an abrupt change. This creates a transmit pulse with smooth ends instead of a square wave, which reduces ringing phenomenon that is otherwise associated with target reflection.
Second, the shape of the receive pulse is adjusted using a window function that minimizes ringing that occurs any time pulses are applied to a filter. In a digital system, this adjusts the phase and/or amplitude of each sample before it is applied to the fast Fourier transform . The Dolph-Chebyshev window is the most effective because it produces a flat processing floor with no ringing that would otherwise cause false alarms. [ 13 ]
Pulse-Doppler radar is generally limited to mechanically aimed antennas and active phased arrays.
Mechanical RF components, such as wave-guide, can produce Doppler modulation due to phase shift induced by vibration. This introduces a requirement to perform full spectrum operational tests using shake tables that can produce high power mechanical vibration across all anticipated audio frequencies.
Doppler is incompatible with most electronically steered phased-array antennas. This is because the phase-shifter elements in the antenna are non-reciprocal, and the phase shift must be adjusted before and after each transmit pulse. Spurious phase shift is produced by the sudden impulse of the phase shift, and settling during the receive period between transmit pulses places Doppler modulation onto stationary clutter. That receive modulation corrupts the measured sub-clutter visibility. Phase shifter settling times on the order of 50 ns are required. The start of receiver sampling needs to be postponed by at least one phase-shifter settling time constant (or more) for each 20 dB of sub-clutter visibility.
Most antenna phase shifters operating at PRF above 1 kHz introduce spurious phase shift unless special provisions are made, such as reducing phase shifter settling time to a few dozen nanoseconds. [ 14 ]
The following gives the maximum permissible settling time for antenna phase shift modules :
$$T = \frac{1}{e^{\text{SCV}/20} \times S \times \text{PRF}},$$
where $T$ is the settling time constant, SCV is the required sub-clutter visibility in decibels, $S$ is the number of settling time constants that elapse before receiver sampling begins, and PRF is the maximum design pulse repetition frequency.
The antenna type and scan performance is a practical consideration for multi-mode radar systems.
Choppy surfaces, like waves and trees, form a diffraction grating suitable for bending microwave signals. Pulse-Doppler can be so sensitive that diffraction from mountains, buildings or wave tops can be used to detect fast moving objects otherwise blocked by solid obstruction along the line of sight. This is a very lossy phenomenon that only becomes possible when radar has significant excess sub-clutter visibility.
Refraction and ducting use transmit frequency at L-band or lower to extend the horizon, which is very different from diffraction. Refraction for over-the-horizon radar uses variable density in the air column above the surface of the earth to bend RF signals. An inversion layer can produce a transient troposphere duct that traps RF signals in a thin layer of air like a wave-guide.
Subclutter visibility involves the maximum ratio of clutter power to target power, which is proportional to dynamic range. This determines performance in heavy weather and near the earth surface:
$$\text{dynamic range} = \min\begin{cases} \dfrac{\text{carrier power}}{\text{noise power}} & \text{transmit noise, where the bandwidth is } \dfrac{\text{PRF}}{\text{filter size}} \\[1ex] 2^{\,\text{sample bits}+\text{filter size}} & \text{receiver dynamic range.} \end{cases}$$
Subclutter visibility is the ratio of the smallest signal that can be detected in the presence of a larger signal:
$$\text{subclutter visibility} = \frac{\text{dynamic range}}{\text{CFAR detection threshold}}.$$
A small fast-moving target reflection can be detected in the presence of larger slow-moving clutter reflections when the following is true:
$$\text{target power} > \frac{\text{clutter power}}{\text{subclutter visibility}}.$$
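The inequality can be checked in decibels, where the division becomes a subtraction. In the sketch below the receiver term of the dynamic-range expression is read, as an assumption since conventions vary, as roughly 6 dB per sample bit plus 10·log10(N) dB of coherent gain for an N-point filter; all numbers are illustrative.

```python
import math

def dynamic_range_db(sample_bits, filter_points):
    # Assumed reading: ~6 dB per sample bit plus 10*log10(N) dB of filter gain.
    return 20 * math.log10(2 ** sample_bits) + 10 * math.log10(filter_points)

def subclutter_visibility_db(dyn_range_db, cfar_threshold_db):
    return dyn_range_db - cfar_threshold_db

def detectable(target_db, clutter_db, scv_db):
    # In decibels the division above becomes a subtraction.
    return target_db > clutter_db - scv_db

dr = dynamic_range_db(12, 1024)           # ~102.3 dB
scv = subclutter_visibility_db(dr, 13.0)  # ~89.3 dB with a 13 dB CFAR threshold
print(detectable(-90.0, -35.0, scv))      # target 55 dB below clutter -> True
```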
The pulse-Doppler radar equation can be used to understand trade-offs between different design constraints, like power consumption, detection range, and microwave safety hazards. This is a very simple form of modeling that allows performance to be evaluated in a sterile environment.
The theoretical range performance is as follows, using the standard radar range equation augmented by the Doppler improvement factor $D$ described below:
$$R_{\max} = \left[\frac{P_t\, G_t\, A_r\, \sigma\, D}{(4\pi)^2\, k\, T_s\, B\, L\, (S/N)_{\min}}\right]^{1/4},$$
where $P_t$ is the transmit power, $G_t$ the transmit antenna gain, $A_r$ the effective receive aperture, $\sigma$ the target radar cross-section, $D$ the Doppler filter improvement factor, $k$ Boltzmann's constant, $T_s$ the system noise temperature, $B$ the noise bandwidth, $L$ the combined system losses, and $(S/N)_{\min}$ the minimum detectable signal-to-noise ratio.
This equation is derived by combining the radar equation with the noise equation and accounting for in-band noise distribution across multiple detection filters. The value D is added to the standard radar range equation to account for both pulse-Doppler signal processing and transmitter FM noise reduction .
Detection range is increased proportional to the fourth root of the number of filters for a given power consumption. Alternatively, power consumption is reduced by the number of filters for a given detection range.
Pulse-Doppler signal processing integrates all of the energy from all of the individual reflected pulses that enter the filter. This means a pulse-Doppler signal processing system with 1024 elements provides 30.103 dB of improvement due to the type of signal processing that must be used with pulse-Doppler radar. The energy of all of the individual pulses from the object are added together by the filtering process.
Signal processing for a 1024-point filter improves performance by 30.103 dB, assuming a compatible transmitter and antenna. Because detection range scales with the fourth root of this gain, that corresponds to a factor of about 5.66 in maximal distance (roughly a 466% increase).
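The arithmetic behind these figures:

```python
import math

N = 1024
gain_db = 10 * math.log10(N)       # 30.103 dB of coherent integration gain
range_factor = N ** 0.25           # range scales with the fourth root: 5.657x
print(round(gain_db, 3), round(range_factor, 3))
```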
These improvements are the reason pulse-Doppler is essential for military and astronomy.
Pulse-Doppler radar for aircraft detection has two modes.
Scan mode involves frequency filtering, amplitude thresholding, and ambiguity resolution. Once a reflection has been detected and resolved , the pulse-Doppler radar automatically transitions to tracking mode for the volume of space surrounding the track.
Track mode works like a phase-locked loop , where Doppler velocity is compared with the range movement on successive scans. Lock indicates the difference between the two measurements is below a threshold, which can only occur with an object that satisfies Newtonian mechanics . Other types of electronic signals cannot produce a lock. Lock exists in no other type of radar.
The lock criterion needs to be satisfied during normal operation. [ 15 ]
Lock eliminates the need for human intervention with the exception of helicopters and electronic jamming .
Weather phenomena obey adiabatic processes associated with air mass, and not Newtonian mechanics , so the lock criterion is not normally used for weather radar.
Pulse-Doppler signal processing selectively excludes low-velocity reflections so that no detection occurs below a threshold velocity. This eliminates terrain, weather, biologicals, and mechanical jamming, with the exception of decoy aircraft.
The target Doppler signal from the detection is converted from frequency domain back into time domain sound for the operator in track mode on some radar systems. The operator uses this sound for passive target classification, such as recognizing helicopters and electronic jamming.
Special consideration is required for aircraft with large moving parts because pulse-Doppler radar operates like a phase-locked loop . Blade tips moving near the speed of sound produce the only signal that can be detected when a helicopter is moving slow near terrain and weather.
A helicopter appears like a rapidly pulsing noise emitter except in a clear environment free from clutter. An audible signal is produced for passive identification of the type of airborne object. Microwave Doppler frequency shift produced by reflector motion falls into the audible sound range for human beings ( 20–20 000 Hz ), which is used for target classification in addition to the kinds of conventional radar display used for that purpose, like A-scope, B-scope, C-scope, and RHI indicator. The human ear may be able to tell the difference better than electronic equipment.
A special mode is required because the Doppler velocity feedback information must be unlinked from radial movement so that the system can transition from scan to track with no lock.
Similar techniques are required to develop track information for jamming signals and interference that cannot satisfy the lock criterion.
Pulse-Doppler radar must be multi-mode to handle aircraft turning and crossing trajectory.
Once in track mode, pulse-Doppler radar must include a way to modify Doppler filtering for the volume of space surrounding a track when radial velocity falls below the minimum detection velocity. Doppler filter adjustment must be linked with a radar track function to automatically adjust Doppler rejection speed within the volume of space surrounding the track.
Tracking will cease without this feature, because when radial velocity approaches zero there is no change in frequency and the target signal would otherwise be rejected by the Doppler filter.
Multi-mode operation may also include continuous wave illumination for semi-active radar homing . | https://en.wikipedia.org/wiki/Pulse-Doppler_radar |
Pulse-chase analysis (PCA) is used to study the life cycles of proteins. Pulse-chase analysis experiments use radioactive and cytotoxic labels to "tag" proteins. Commonly used methods include treating cells with cycloheximide (CHX) to stop protein synthesis, or labelling with radioisotopic amino acids or with proteins such as green fluorescent protein (GFP ). These labels are used to follow proteins through their life cycles. [ 1 ]
While pulse-chase analysis is mainly used to study proteins, it can also be used to study different molecular structures that interact with proteins. Proteins can interact with different structures either because they are incorporated into the structure, such as in cells, or because they are part of a larger structure, such as in macromolecules.
In biochemistry and molecular biology, a pulse-chase analysis is a method for examining a cellular process occurring over time by successively exposing the cells to a labeled compound (pulse) and then to the same compound in an unlabeled form (chase). [ 2 ]
Pulse-chase experiments are divided into two parts: a "pulse" and a "chase." In the "pulse" part of the experiment, the proteome of the cells is labelled with radioactive amino acids . In the "chase" part of the experiment, the cells are stopped from taking up the labelled amino acids.
To start a pulse-chase experiment, cells are grown in the presence of radioactive amino acids so that they incorporate the labelled amino acids into their proteomes. When researchers want to study a stage in the life of the protein (e.g. folding, transport, degradation), they start the "chase" part of the experiment. To initiate the chase, cells are placed in the presence of a nonradioactive isotope to stop the uptake of the radioactive isotope. Researchers then follow the labelled protein for the duration of the chase while it passes through the process of interest. [ 3 ]
A selected cell or a group of cells is first exposed to a labeled compound (the pulse) that is to be incorporated into a molecule or system that is studied (also see pulse labeling ). The compound then goes through the metabolic pathways and is used in the synthesis of the product studied. For example, a radioactively labeled form of leucine ( 3 H-leucine) can be supplied to a group of pancreatic beta cells , which then uses this amino acid in insulin synthesis.
Various other experimental techniques can be used to supplement pulse-chase analysis. These include cell staining , immunoprecipitation , and SDS-PAGE .
Shortly after introduction of the labeled compound (usually about 5 minutes, but the actual time needed is dependent on the object studied), excess of the same, but unlabeled, substance (the chase) is introduced into the environment. Following the previous example, the production of insulin would continue, but it would no longer contain the radioactive leucine introduced in the pulse phase and would not be visible using radioactive detection methods. However, the movement of the labeled insulin produced during the pulse period could still be tracked within the cell. [ 4 ]
PCA uses radioactive materials to label proteins in the "pulse" part of the experiment and cytotoxic materials in the "chase" to stop protein synthesis. This is hazardous to the cells used during experiments, so researchers have developed various methods using materials that are neither toxic nor radioactive. Examples include using L-azidohomoalanine (AHA) and 4sU. In the AHA approach, AHA is used to label proteins; AHA then reacts with an alkyne group to isolate AHA-labelled proteins. In this method, mammalian cells are washed with 0.5%-SDS RIPA buffer and PBS, and half-lives are calculated with exponential decay formulas. Protein misfolding and heat shock were induced in cells, and the cells were then subjected to pulse-chase analysis, SDS-PAGE, and immunoblotting to determine protein behavior. AHA is shown to be a suitable alternative to radioactive and cytotoxic materials. It has comparable results to radioactive pulse-chase analysis; the only difference detected was when using mammalian cells, as AHA was shown to possibly alter the heat shock response. Cells can be studied further by studying the proteins used, post-translational modifications , and heat shock to determine cell viability. [ 1 ] [ 5 ]
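A sketch of the half-life arithmetic mentioned above, using made-up band intensities rather than data from the cited studies:

```python
import math
import numpy as np

chase_h = np.array([0.0, 1.0, 2.0, 4.0, 8.0])       # chase time, hours
signal = np.array([100.0, 79.0, 63.0, 40.0, 16.0])  # labelled-protein band intensity

# First-order decay N(t) = N0 * exp(-k t): fit log(signal) vs time, slope = -k.
k = -np.polyfit(chase_h, np.log(signal), 1)[0]
print(math.log(2) / k)                              # half-life, ~3 h for these numbers
```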
Pulse-chase analysis is also used with 4sU. miRNAs act in post-transcriptional gene regulation and play a large role in the cell cycle. Although miRNA is a large part of post-transcriptional regulation, not much is known about how it degrades. When comparing the initial amount of miRNA in a PCA with the amount at the end of the experiment, there is a significant decrease in the amount of miRNA. Although miRNA is structurally stable, indications of degradation and decay are found by determining the half-life of miRNA. In this PCA, miRNAs were tagged with 4sU and were separated based on their "age", that is, whether they were pri-miRNAs or mature miRNAs . With 4sU as the label, mature miRNAs were separated from pri-miRNAs after processing based on the amount of degradation of the labels. The efficiency of miRNAs can also be determined through pulse-chase analysis by comparing transcription rates between pri-miRNA and mature miRNA. [ 1 ] [ 5 ]
In PCA experiments, protein kinetics are interpreted by studying the length of a chase. While protein degradation often follows exponential models of decay, problems in predicting decay curves occur when degradation does not follow an exponential model. Proteins can degrade over time without external factors, but the cytotoxic and radioactive materials used in pulse-chase experiments increase the rate of degradation. Decay patterns are determined from the amount of degraded protein at the end of a chase; for this reason, accurate measurement of degradation is important for determining the decay rate. To account for non-exponential decay patterns, pulse length and the probabilities of molecular decay are taken into consideration. After experimental data are collected, decay rates are modeled using Markov chains . Markov chains are statistical methods for determining the probability of an event. In PCA, Markov chains are used to predict the lifetime of a molecule, the age-dependent decay rate, and an accurate pulse length. [ 6 ]
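A minimal sketch of the age-dependent survival idea; the hazard values are illustrative assumptions, not data from the cited study:

```python
import numpy as np

def survival_curve(p_decay_by_age, n_steps):
    """Fraction of labelled molecules surviving after each chase interval.

    The molecule's state is its age in chase intervals; at each step it decays
    with an age-dependent probability instead of one fixed exponential rate.
    """
    surviving, curve = 1.0, []
    for step in range(n_steps):
        p = p_decay_by_age[min(step, len(p_decay_by_age) - 1)]
        surviving *= 1.0 - p                 # Markov transition: intact -> decayed
        curve.append(surviving)
    return np.array(curve)

# Age-dependent hazards give a sigmoidal rather than exponential decay curve.
print(survival_curve([0.02, 0.05, 0.10, 0.20, 0.30], 8))
```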
Cell division orientations can be determined using PCA and cell staining. Cell staining is used in conjunction with PCA to support and give visual data on hypothesized cell processes. Different parts of cells, such as the spindle, are stained in order to image them at different stages of the cell cycle. This information is used with data from the corresponding "pulse" and "chase" periods. In this particular experiment, 5-ethynyl-2'-deoxyuridine (EdU) is used to label Arabidopsis thaliana leaf cells. Cell nuclei were stained, and the staining and pulse-chase signals were compared. The division angle of the cells was used to determine the cell stage and the direction of cell division. [ 7 ]
The morphology of cell organelles can be studied when using PCA to determine the stage of the cell's life cycle. Cells can be stained with various fluorescent cell stains such as DAPI , Propidium Iodide , or Hoechst in order to image dead cells or cell nuclei. In this experiment, immunolabeling and fluorescent staining are used in conjunction with one another. Cells are labelled with EdU. The experiment was done on plant cells, specifically root cells of Arabidopsis thaliana . Cells were isolated from plants, immunolabelled with EdU, and stained with fluorescent dyes. The comparison of cells imaged with DAPI and the PCA results showed that the amount of Golgi apparatus increased during the late G1 phase of the cell cycle . [ 8 ]
Immunoprecipitation can be used to determine the proteins present in a solution. This is done by breaking down the different components of the macromolecule the protein is part of. In PCA, it is used to recover a previously labelled protein and determine where that protein is in the process of protein processing. The protein macromolecules are treated with endoglycosidases to break them down and are run through SDS-PAGE to determine the identity of the individual proteins. This information about the unique proteins present is used to determine the mechanisms of a protein at a certain time. [ 3 ] Immunoprecipitation is not always used, however, due to concerns about misidentifying labelled and unlabelled proteins. [ 9 ]
When conducting a PCA experiment, cells are washed of extracellular debris and materials that could interfere or create noise when isolating the protein of interest. However, washing the materials in an experiment can take upwards of 15 minutes. When trying to examine specific stages of biological processes, such as cell cycles or protein degradation , the added time spent washing cells could interfere with the results of an experiment. Pulse-chase experiments can be modified so that washing is not required. Researchers have developed a way to use fluorescent proteins and fluorescence resonance energy transfer (FRET) to label proteins. Nucleophilic enzymes were labelled and, instead of washing, noise-interfering compounds were removed through elimination reactions . This approach to PCA can shed light on time-dependent protein reactions such as the speed of protein expression or movement. [ 10 ]
This method is useful for determining the activity of certain cells over a prolonged period of time. The method has been used to study protein kinase C , ubiquitin , and many other proteins. The method was also used to prove the existence and function of Okazaki fragments . George Palade used pulse-chase of radioactive amino acids to elucidate the secretory pathway . [ 11 ] [ 12 ]
PCA is used in several ways in biomedical research: it uses labelled proteins to study the lifespans of proteins and of the structures that interact with them. Examples include stem cells, procollagen, cell turnover, and viral inclusions.
Pulse-chase analysis can be used to determine the rate of division of stem cells. Stem cells are different from regular cells in that they rarely divide. Researchers have used pulse-chase analysis in the kidney cells of mice and rats in order to study the cell cycle and cell life length of stem cells. Different pulse lengths were used to determine whether a label-retaining cell (LRC) was a stem cell or not. Cells that retained their tags for longer periods of time and were "slow-cycling" were hypothesized to be stem cells. Pulses were also used to detect DNA analog labels, and to detect the number of cell divisions they had gone through by using the cells' half-life . [ 13 ]
Cell turnover rates can be monitored through pulse-chase analysis. Researchers have used a combination of pulse-chase analysis and laser ablation electrospray ionization mass spectrometry (LAESI-MS) to follow cell turnover. Amino acids were used to label cells of the photosynthetic green alga Chlamydomonas reinhardtii . Pulse-chase analysis was used to track the isotopic amino acid labels. This was compared to photosynthesis rates in the cells, contributing to knowledge of the cell life cycle. [ 14 ]
Procollagen is studied because it plays a role in diseases related to connective tissues. Procollagen interacts with collagen and other proteins, and it travels through various organelles such as the endoplasmic reticulum and the Golgi apparatus. PCA is used to study the formation of procollagen and disruptions of its formation. The PCA is done with non-radioactive labels so as not to disturb the cells' function; this experiment also uses AHA to avoid damage to DNA. Procollagen was extracted from human cells, and collagen was labelled with fluorescent dyes . In addition to staining and pulse-chase labelling, RNA was isolated from the cells to perform RT-qPCR and determine the amounts of mRNA and collagen at different points in time. [ 15 ] [ 16 ]
Pulse-chase analysis can be used to determine how viruses replicate and enter the cell. Researchers have used fluorescence loss after photoactivation (FLAPh) to study influenza A virus (IAV) and the cell structure during infection. As opposed to fluorescence recovery after photobleaching (FRAP), FLAPh can follow mobile elements. Viral components move constantly during infection and require a method that can image them in motion. FLAPh photobleaches protein components for only short periods of time, making it a suitable method for determining the location of protein and cell components. This was combined with pulse-chase analysis, and researchers used both visual methods to determine protein decay and the location of the virus in the cell through the stages of viral infection. [ 17 ]
PCA can be used to determine the structure of molecules that interact with proteins. This can be done by labelling a protein and analyzing the length of a chase in an experiment, or by analyzing the concentrations of proteins after a pulse-chase experiment.
PCA can be used to determine the structure of bacteriophages . Researchers have used pulse-chase analysis, looking at the delay between the "pulse" and the "chase", in order to determine the structure and assembly of the tail of the bacteriophage T4. In this experiment, amino acids were used to label proteins of the bacteriophage. The phage was made up of 420 proteins, and the positions of the tail proteins were determined by the length of the chase. The "chase" of the experiment observed the length of time it took for a protein to assemble into the bacteriophage. The more "inward" a label has to travel in the structure, the longer the chase will be. The labelled proteins were identified through gel electrophoresis . This procedure was repeated until the order of proteins in the bacteriophage tail was determined. [ 18 ]
Proteins are studied using PCA to determine how they interact with a larger macromolecular structure. This is done by using a labelled amino acid at the start of translation. The labelled nascent protein is attached to mRNA and followed through translation, and the chase is continued after translation has stopped to determine post-translational behavior. The labels can be used to isolate proteins, and the proteins can be washed and analyzed using immunoprecipitation to identify protein macromolecule complexes and other translational materials. [ 9 ] | https://en.wikipedia.org/wiki/Pulse-chase_analysis
In physics , a pulse is a generic term describing a single disturbance that moves through a transmission medium . This medium may be vacuum (in the case of electromagnetic radiation ) or matter , and may be indefinitely large or finite.
Consider a pulse moving through a medium - perhaps through a rope or a slinky . When the pulse reaches the end of that medium, what happens to it depends on whether the medium is fixed in space or free to move at its end. For example, if the pulse is moving through a rope and the end of the rope is held firmly by a person, then it is said that the pulse is approaching a fixed end. On the other hand, if the end of the rope is fixed to a stick such that it is free to move up or down along the stick when the pulse reaches its end, then it is said that the pulse is approaching a free end.
A pulse will reflect off a free end and return with the same direction of displacement that it had before reflection. That is, a pulse with an upward displacement will reflect off the end and return with an upward displacement.
This is illustrated by figures 1 and 2 that were obtained by the numerical integration of the wave equation .
A pulse will reflect off a fixed end and return with the opposite direction of displacement. In this case, the pulse is said to have inverted. That is, a pulse with an upward displacement will reflect off the end and return with a downward displacement.
This is illustrated by figures 3 and 4 that were obtained by the numerical integration of the wave equation . In addition it is illustrated in the animation of figure 5.
When a pulse in a medium reaches a boundary with a lighter or less dense medium, the pulse will reflect as if it were approaching a free end (no inversion). Conversely, when a pulse is traveling through a medium connected to a heavier or denser medium, the pulse will reflect as if it were approaching a fixed end (inversion).
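The reflections described above are easy to reproduce with the kind of numerical integration mentioned for the figures. The following minimal sketch (grid size, pulse width and step count are illustrative assumptions, not the parameters used for the figures) integrates the one-dimensional wave equation with a finite-difference scheme and switches between a fixed end (reflection with inversion) and a free end (reflection without inversion):

```python
# Finite-difference integration of the 1-D wave equation u_tt = c^2 u_xx.
import numpy as np

n, c, dx = 200, 1.0, 1.0
dt = 0.5 * dx / c                           # satisfies the CFL stability limit
x = np.arange(n) * dx
u = np.exp(-((x - 50.0) / 5.0) ** 2)        # initial Gaussian pulse
u_prev = u.copy()                           # zero initial velocity

fixed_end = True                            # set False to simulate a free end
for _ in range(400):
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
    u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap
    if fixed_end:
        u_next[0] = u_next[-1] = 0.0        # fixed ends: displacement pinned at zero
    else:
        u_next[0], u_next[-1] = u_next[1], u_next[-2]  # free ends: zero slope
    u_prev, u = u, u_next

print(f"extreme displacement after reflections: {u[np.abs(u).argmax()]:+.3f}")
```

With `fixed_end = True` the reflected pulse comes back with its displacement inverted; with `fixed_end = False` it returns upright.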
Dark pulses [ 1 ] are characterized by being formed from a localized reduction of intensity compared to a more intense continuous wave background. Scalar dark solitons (linearly polarized dark solitons) can be formed in all normal dispersion fiber lasers mode-locked by the nonlinear polarization rotation method and can be rather stable. Vector dark solitons [ 2 ] [ 3 ] are much less stable due to the cross-interaction between the two polarization components. Therefore, it is interesting to investigate how the polarization state of these two polarization components evolves.
In 2008, the first dark pulse laser was reported in a quantum dot diode laser with a saturable absorber. [ 4 ]
In 2009, the dark pulse fiber laser was successfully achieved in an all-normal dispersion erbium-doped fiber laser with a polarizer in cavity. Experimentation has revealed that apart from the bright pulse emission, under appropriate conditions the fiber laser could also emit single or multiple dark pulses. Based on numerical simulations, the dark pulse formation in the laser is a result of dark soliton shaping. [ 5 ]
In 2022, the first free-space dark pulse laser, using a nonlinear crystal inside a solid-state laser, was demonstrated. [ 6 ] | https://en.wikipedia.org/wiki/Pulse_(physics)
Pulse electrolysis is an alternate electrolysis method that utilises a pulsed direct current to initiate non-spontaneous chemical reactions . [ 1 ] [ 2 ] [ 3 ] Also known as pulsed direct current (PDC) electrolysis, the increased number of variables that it introduces to the electrolysis method can change how the current is applied to the electrodes and the resulting outcome. [ 4 ] [ 5 ] This differs from direct current (DC) electrolysis, which allows the variation of only one value, the voltage applied. By utilising conventional pulse-width modulation (PWM), multiple dependent variables can be altered, including the type of waveform, typically a rectangular pulse wave , the duty cycle , and the frequency.
Currently, there has been a focus on theoretical and experimental research into PDC electrolysis in terms of the electrolysis of water to produce hydrogen . Claims have been made that there is a possibility it can result in a higher electrical efficiency in comparison to DC water electrolysis, but past research has shown this is not the case. [ 5 ] The varying voltage and current added on top of the DC cause additional energy consumption with no effect on the hydrogen production. [ 6 ] Because of the increasing energy consumption, attempts to replicate claimed benefits experimentally have not succeeded, and have found negative effects on the electrolyser longevity instead. [ 7 ]
PDC electrolysis is not only confined to the electrolysis of water. Uses in industry such as electroplating and electrocrystallisation are also undergoing research due to the wider range of properties that can be achieved. [ 8 ]
The various and alterable effects of using intermittent pulses in PDC electrolysis have created an area of interest that could benefit industry. However, as it is still being researched and has produced conflicting results, a consistent and reliable answer to how strongly electrolysis efficiency depends on the properties of an electrical pulse has not been determined; [ 4 ] hence, other forms of electrolysis, such as polymer electrolyte membrane and alkaline water electrolysis , are used in industry.
PDC electrolysis was first considered theoretically in 1952, [ 9 ] and experimental research began as early as 1960; however, it was originally focused on technical applications in industry and the possibility of improving the quality and rate of metal deposition. [ 10 ] It partially succeeded, providing promising results in its ability to create smoother, denser deposits and to reduce the amount of metal required in electroplating. [ 8 ]
The first instance in which it was considered as a way to initiate the electrolysis of water came from the perspective of magnetolysis in 1985, where high-strength magnets, or in this case electromagnets , are used in conjunction with homopolar propellers . [ 11 ] Ghoroghchian and Bockris conducted this experimental research to determine how a pulsed current can affect the rate of hydrogen production and provide economic advantages. A current density ratio of 2.07 was observed, demonstrating for the first time that a pulsed current can double the production of hydrogen in comparison to a steady-state current. [ 12 ]
Since hydrogen gas cannot be collected in its free form, and since it can provide a source of renewable and clean energy through fuel cells , [ 13 ] [ 14 ] discovering the electrolysis method with the greatest efficiency is valued. With early experimental and theoretical success, many patents were developed until as recently as 2002, [ citation needed ] but since 1985 the method has been researched only intermittently, with varying levels of success. [ 15 ]
With the perspective that the current use of non-renewable fuel sources is a main cause of global environmental problems, [ 9 ] hydrogen is being viewed as a possible renewable fuel source replacement. [ 13 ] For this to be feasible, the production of hydrogen, through methods such as electrolysis, must be efficient in terms of the energy, cost and time required. [ 15 ] Whilst multiple methods of pulse electrolysis have been studied, and experimental results are mixed, the underlying theory behind this experimental approach seems to remain consistent. [ 15 ]
When a voltage is applied to an electrolysis cell, an electric double layer (EDL), or diffusion layer , theoretically forms immediately afterwards. This creates a capacitance, causing the electrolyser to act as a capacitor. [ 15 ] When this is present, excess voltage must be supplied by the direct current to compensate for the loss in the 'capacitor', [ 16 ] which raises the required supply voltage to what is called the thermo-neutral voltage . [ 4 ] One of the aims of PDC electrolysis is to overcome this: theoretically, when the PWM switches the current on, charge is stored in this capacitance, and when the duty cycle is over, it is released, continuing the flow of current whilst reducing the EDL that is formed. [ 4 ]
Poláčik and Pospíšil suggest that manipulating the dependent variables, such as the duty cycle, can increase or decrease the effectiveness of pulse electrolysis at reducing this layer. [ 4 ] A theoretical relation, the Sand equation, is used to calculate the time required for the EDL to fall to zero, allowing PDC electrolysis to achieve its highest efficiencies. [ 17 ]
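For illustration, the Sand equation in its chronopotentiometric form can be written as τ^(1/2) = nFc√(πD)/(2j), giving a transition time τ for an applied current density j. The worked example below is a hedged sketch only; every numerical value is an assumption, not a figure from the cited work:

```python
# Transition time from the Sand equation, tau^(1/2) = n*F*c*sqrt(pi*D)/(2*j).
import math

n = 2          # electrons transferred per H2 molecule
F = 96485.0    # Faraday constant (C/mol)
c = 1000.0     # assumed bulk concentration (mol/m^3), i.e. 1 M
D = 1e-9       # assumed diffusion coefficient (m^2/s)
j = 2000.0     # assumed applied current density (A/m^2)

tau = (n * F * c * math.sqrt(math.pi * D) / (2.0 * j)) ** 2
print(f"transition time tau ~ {tau:.2f} s")   # an upper bound on useful pulse length
```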
Electrolysers require high currents at very low voltages. [ 12 ] [ 18 ] A homopolar generator has the ability to provide this, so in Bockris and Ghoroghchian's original experiment in 1985, they followed Faraday's idea. Using a magnetic field of 0.86 T produced by permanent magnets , they placed a stainless-steel disc between the magnets. The disc needed a rotation speed of 2000 rpm to reach the electrical potential required for electrolysis. The difference between Faraday's original model and Bockris and Ghoroghchian's is that their disc would only rotate when in contact with an electrolyte. [ 12 ]
They encountered one large problem: a viscous force created by the electrolyte slowed the motion of the disc. The two ways they could fix this were to rotate the disc and solution together or to increase the magnetic field used. The latter being the more practicable, the required magnetic field was calculated according to the power consumption rate for producing a cubic metre of hydrogen. It was discovered that a magnetic field of 11 T was needed for effective electrolysis, [ 12 ] more than 16 times greater than what was originally used. Since superconducting magnets would be required, and these are too expensive to justify their use, this was ruled out as a possible method.
Their final decision was to use a homopolar generator as an external source of power. This follows Faraday's method more closely.
In this method, a pulse potential was created to take advantage of previous studies that give an effectiveness factor of 2 when either a nickel electrode [ 12 ] or a Teflon-bonded platinum electrode was used. [ 17 ]
The generator was constructed with a magnetic flux density of 0.6 T, a propeller radius of 30 cm and a loop coated with copper strips. [ 12 ] To increase the output potential and reduce the rotation speed required, these were connected in series. Pulses of 2–3 V sustained for 1 ms were achieved. [ 12 ]
This was the first successful application of pulse electrolysis to the production of hydrogen. However, it still has limitations that restrict its possible use in industry.
A comparison between pulsed and non-pulsed DC electrolysers was made in 1993 by Shaaban, which demonstrated that the non-pulsed current used the least electrical power. [ 5 ]
The experimental electrolyser separated the anolyte and catholyte compartments and used a Nafion-324 membrane to allow ion exchange. The distance between the anode, made of platinum-coated titanium, and the cathode, made of stainless steel, was 3 mm, and both were immersed in a 10 weight percent sulfuric acid electrolyte. He conducted tests at several frequencies (0.01 Hz, 0.5 kHz, 1 kHz, 5 kHz, 10 kHz, 25 kHz, and 40 kHz) and with four duty cycles (10, 25, 50, and 80%). [ 5 ]
Initial observations revealed that the off-period resulted in a reversal of polarity, causing the reaction to reverse. This affected the cathode, which displayed a 2 g loss after experimentation. [ 5 ] A diode was inserted into the circuit to rectify the polarity. However, the cell was then prevented from dropping to 0 V during the off-period, maintaining a higher value of 2.3 V. This further impacted the experiment by distorting the square wave produced by the function generator Shaaban used, as the supplied electrical potential needed to overcome the cell voltage of 2.3 V before current could flow. [ 5 ] Bockris et al. reported that current would continue to flow, discharging ions from the EDL, but this was contradicted in this experiment; [ 9 ] it only occurred when the diode was in place, and the diode also prevented a current spike during the duty cycle.
With a 10% duty cycle at a 1 kHz pulse, temperature increases nearly 7 °C greater than in the non-pulsed experimental electrolysis were found. [ 5 ] Such temperature increases represent additional losses and can adversely affect the operation of the circuit.
Calculating the power consumption, it was determined that the non-pulsed current had power demand losses of 3.5%, while the pulsed current resulted in losses of 13–16%. [ 5 ] This also opposes the claim of Bockris et al. that the effectiveness of non-pulsed DC electrolysis increases by a factor of 2 when a pulsed current is applied. [ 12 ]
The possible effect of a pulsed current on the corrodibility of metals was first examined by de la Rive in 1837. [ 19 ] Around 60 years later, Coehn investigated the effect of a current with a rectangular waveform on the plating of zinc deposits, resulting in a successful patent application. [ 20 ] [ 21 ] A full review of the use of PDC electrolysis in electroplating , also known as electrodeposition or 'pulse plating ', was only published in 1954 by Baeyens, this being the first area of research into the use of pulse electrolysis in industry. [ 20 ] [ 22 ]
A pulsed current can be varied in many ways, which increases the range of possible outcomes and can vary the properties of metals deposited during electroplating. [ 4 ] [ 5 ] [ 22 ] Hansel and Roy, in their review of the third European Pulse Plating Seminar, concluded that a unique pulse sequence must be developed for each deposition system in order to optimise the process and gain the desired results, in contrast to traditional plating, which cannot be tailored as freely to a given situation. [ 23 ] The nucleation and crystallisation of the deposited metal are directly affected and can turn out favourably or unfavourably if specific conditions are not met. [ 23 ] It is reported that pulse plating can encourage nucleation, causing grain refinement and reduced grain size, as well as increasing the deposit density, which can improve micro-hardness. [ 23 ] [ 24 ]
These effects were first researched on zinc by Coehn. [ 21 ] It was discovered that a pulsed current at a high frequency can produce deposits of higher quality, with properties ranging from a smoother finish, through the reduction in grain size, [ 22 ] [ 25 ] to a lower corrosion rate. [ 24 ] This is beneficial as zinc is mainly used as a sacrificial anode in industry. [ 25 ]
In the theoretical electrolysis of water, a voltage of only 1.23 V is required to split water into hydrogen and oxygen. The formation of an EDL increases this to the thermo-neutral voltage of 1.45 V. It is claimed that minimising the EDL formed during pulse electrolysis is advantageous, as it can reduce the thermo-neutral voltage and the energy input required, increasing energy efficiency. However, this claim follows from a misconception regarding energy consumption in the system when varying current and voltage waveforms are applied. The hydrogen production rate in the process is determined by the mean of the current waveform, according to Faraday's law of electrolysis , but the mean of the voltage waveform is not sufficient to evaluate the rate of energy consumption. Instead, the mean of the product of instantaneous current and voltage must be assessed, [ 26 ] revealing increased energy consumption due to the alternating current and voltage waveforms in comparison to DC water electrolysis with an equal hydrogen production rate.
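This accounting point can be illustrated numerically. The sketch below is a toy model, not taken from the cited studies: an assumed cell (reversible voltage plus an ohmic overpotential) is driven with a rectangular pulsed current and with a DC current of the same mean, so the hydrogen output is identical by Faraday's law, yet the mean of i·v is larger for the pulsed waveform:

```python
# mean(i*v) vs mean(i)*v(mean(i)) for a rectangular pulsed current.
import numpy as np

duty = 0.25
i_pulse = np.where(np.linspace(0, 1, 1000) < duty, 4.0, 0.0)  # mean current = 1.0 A
i_dc = np.full_like(i_pulse, i_pulse.mean())                  # same mean, hence same H2

def cell_voltage(i, v_rev=1.23, r=0.5):
    # crude assumed cell model: reversible voltage plus an ohmic overpotential
    return np.where(i > 0, v_rev + r * i, 0.0)

p_pulse = np.mean(i_pulse * cell_voltage(i_pulse))
p_dc = np.mean(i_dc * cell_voltage(i_dc))
print(f"pulsed: {p_pulse:.2f} W, DC: {p_dc:.2f} W")   # pulsed consumes more power
```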
Whilst PDC electrolysis was claimed, in the theoretical work of 1952 and by Ghoroghchian and Bockris in 1985, to work extremely well in theory, it has been difficult to replicate with consistently positive results in practical experimentation. As further research on the dynamic operation of water electrolysis has found only negative impacts from alternating the current and voltage supplied to the system, from both an energy [ 6 ] and a longevity [ 7 ] point of view, the claimed benefits of pulsed electrolysis may not have a basis in reality. The energy consumption of a system with only positive resistance (cf. negative resistance ) can only increase as a function of current and voltage amplitude.
According to Shaaban, during the pulse-off period, if the electrolytic cell is not constructed properly, the current polarity can reverse. This can cause the cathode to deteriorate. [ 5 ] In electrolysis, the cathode is where the reduction of hydrogen occurs, forming the desired hydrogen gas. Any loss in cathode mass can reduce the speed and effectiveness of the electrolytic reaction, reducing the overall efficiency of the pulse electrolysis method.
Shaaban also states that, due to expected internal losses, such as heat, the required current density will increase, which increases the required voltage. [ 27 ] As a result, greater overpotentials are needed, which are further converted to heat. [ 5 ] | https://en.wikipedia.org/wiki/Pulse_electrolysis
Pulse labelling is a biochemistry technique of identifying the presence of a target molecule by labeling a sample with a radioactive compound. This is mainly done to identify the stage at which the messenger RNA is being produced in a cell . [ 1 ]
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pulse_labelling |
In Fourier transform NMR spectroscopy and imaging , a pulse sequence describes a series of radio frequency pulses applied to the sample, such that the free induction decay is related to the characteristic frequencies of the desired signals. After applying a Fourier transform , the signal can be represented in the frequency domain as the NMR spectrum . In magnetic resonance imaging , additional gradient pulses are applied by switching magnetic fields that exhibit a space-dependent gradient which can be used to reconstruct spatially resolved images after applying Fourier transforms. [ 2 ]
The outcome of pulse sequences is often analyzed using the product operator formalism .
This nuclear magnetic resonance –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pulse_sequence |
The pulse vaccination strategy is a method used to eradicate an epidemic by repeatedly vaccinating a group at risk, over a defined age range, until the spread of the pathogen has been stopped. It is most commonly used during measles and polio epidemics to quickly stop the spread and contain the outbreak. [ 1 ] [ 2 ]
Every T time units, a constant fraction p of susceptible subjects is vaccinated in a relatively short time. This yields differential equations for the susceptible and vaccinated subjects. [ citation needed ]
Further, by setting I = 0 , one can obtain the dynamics of the susceptible subjects explicitly, [ 3 ]
and from this the eradication condition can be derived. [ 4 ] | https://en.wikipedia.org/wiki/Pulse_vaccination_strategy
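The article's original equations are not reproduced above, so the following minimal sketch instead simulates a standard normalized SIR model with pulse vaccination as an assumed stand-in for the model discussed; all parameter values (β, γ, μ, p, T) are illustrative:

```python
# Forward-Euler SIR dynamics with an instantaneous vaccination pulse that
# removes a fraction p of susceptibles every T time units: S(nT+) = (1-p)S(nT-).
beta, gamma, mu = 1.5, 0.5, 0.02   # transmission, recovery, birth/death rates (assumed)
p, T = 0.3, 2.0                    # pulse fraction and pulse period (assumed)
dt, t_end = 0.001, 100.0

S, I = 0.99, 0.01
next_pulse = T
for step in range(int(t_end / dt)):
    if step * dt >= next_pulse:
        S *= (1.0 - p)             # vaccination pulse applied to susceptibles
        next_pulse += T
    dS = mu - mu * S - beta * S * I
    dI = beta * S * I - (gamma + mu) * I
    S, I = S + dt * dS, I + dt * dI

print(f"after {t_end:.0f} time units: S = {S:.3f}, I = {I:.2e}")
```

Whether I decays toward zero for a given (p, T) pair in such a simulation is the numerical analogue of the eradication condition.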
Pulsed-field gel electrophoresis ( PFGE ) is a technique used for the separation of large DNA molecules by applying an electric field that periodically changes direction to a gel matrix. [ 1 ] [ 2 ] Unlike standard agarose gel electrophoresis , which can separate DNA fragments of up to 50 kb, PFGE resolves fragments up to 10 Mb. [ 1 ] This allows for the direct analysis of genomic DNA. [ 2 ]
In 1984, David C. Schwartz and Charles Cantor published the first successful application of alternating electric fields for the separation of large DNA molecules. [ 3 ] [ 4 ] This technique, which they named PFGE, resulted in the development of several variations, including Orthogonal Field Alternation Gel Electrophoresis (OFAGE), Transverse Alternating Field Electrophoresis (TAFE), Field-Inversion Gel Electrophoresis (FIGE), and Clamped Homogeneous Electric Fields (CHEF), among others. [ 3 ]
The procedure for PFGE is similar to that of standard agarose gel electrophoresis , with the main exception being the application of the electric current. Generally, in PFGE electrophoresis chambers, the voltage periodically switches between three directions: one along the central axis, and two at a 60 degree angle along each side. [ 5 ] The application of the voltage can change depending on the variation of PFGE used. [ 6 ] [ 7 ]
PFGE may be used for genotyping or genetic fingerprinting . It has commonly been considered a gold standard in epidemiological studies of pathogenic organisms for several decades. For instance, subtyping bacterial isolates with this method has made it easier to discriminate among strains of Listeria monocytogenes , Lactococcus garvieae [ 8 ] and some clinical isolates of the Bacillus cereus [ 9 ] group isolated from diseased aquatic organisms, and thus to link environmental or food isolates with clinical infections. It is now in the process of being superseded by next-generation sequencing methods. [ 10 ] | https://en.wikipedia.org/wiki/Pulsed-field_gel_electrophoresis
Pulsed-power water treatment is the process of applying pulsed electromagnetic fields to cooling water to control scaling , biological growth, and corrosion. The process does not require the use of chemicals and helps eliminate the environmental and health issues associated with the use and life-cycle management of chemicals used to treat water. [ 1 ] Pulsed-power systems have the ability to maintain low levels of microbiological activity without using corrosive chemicals. Several reports have shown that pulsed-power systems yield significantly lower counts of bacteria colony-forming units compared to chemically controlled systems. [ 2 ]
Pulsed-power systems are used to control scale, corrosion and biological activity in cooling towers without the use of chemicals, chemical tanks or pumps. [ 1 ] Pulsed-power has been used as the sole source of water treatment in cooling systems for over a decade now with good results. [ 3 ] The pulsed-power imparts electromagnetic fields into the cooling water and the induced fields have a direct effect in preventing mineral scale formation on equipment surfaces and controlling microbial populations to very low levels while also significantly reducing biofilms present in cooling systems. [ 4 ]
Pulsed-power is also an FDA approved method for pasteurizing fluids such as fruit juices. However, the energy needed for pasteurization is 100 times that of a pulsed-power water treatment system. [ 5 ]
Pulsed-power treatment enables chemical-free treatment of cooling tower water while providing lower bacterial contamination as it controls scale and corrosion. [ 3 ] The cost over the lifetime of use is lower than that of chemical treatment and also reduces the health concerns of handling chemicals. Cycles of concentration are typically increased which reduces blowdown water . [ 3 ] The resulting elimination of chemicals provides many benefits including reduced environment, health and safety risks, environmental benefits of reusing blowdown water and elimination of chemical-laden discharge water. Pulse-power treatment is less effective on water that is extremely soft or distilled, as the technology is based on changing the way minerals in the water precipitate. It also still requires energy to use. [ 1 ] | https://en.wikipedia.org/wiki/Pulsed-power_water_treatment |
In astronomy, pulsed accretion is the periodic modulation of the accretion rate of young stellar objects in binary systems , producing a periodic pulse in the observed infrared light curves of T Tauri stars . [ 1 ]
In young stellar objects that are double stars , a protoplanetary disk forms around each star, accreting nearby matter. In such a binary star system, a strongly eccentric orbit produces strong gravitational forces on the circumstellar disks at periastron , and this disturbance can lead to a temporary increase in the accretion rate onto both stars. [ 2 ] Simulations show that the accretion rate is likely to be highly symmetric between stars in nearly equal-mass binary systems but can be asymmetric in systems with a mass disparity. Such asymmetry may be attributable to a highly eccentric circumbinary disk , which can accrete material onto the surface of one star at a rate 10–20 times greater than onto the other, with the star that experiences the higher rate of accretion alternating with its companion over large time scales. [ 2 ]
This increased accretion rate leads to a change in the intensity of the infrared radiation emitted by the stars, with the intensity rising up to tenfold in the protostar LRLL 54361 . Brightness changes in the light curve that have the same period as the orbital period of the binary system are usually assumed to be due to pulsed accretion. [ 3 ] | https://en.wikipedia.org/wiki/Pulsed_accretion
Pulsed columns are a type of liquid-liquid extraction equipment; [ 1 ] an example of this class of extraction equipment is used at the BNFL plant THORP .
Pulsed columns find special use in the nuclear industry for fuel reprocessing, where spent fuel from reactors is subjected to solvent extraction. A pulsation is created in a pulse leg using air. The feed is an aqueous solution containing radioactive solutes, and the solvent used is TBP (tri-butyl phosphate) in a suitable hydrocarbon diluent. In conventional equipment, a mechanical agitator is used to create the turbulence needed to disperse one phase in the other; but because of the radioactivity, and the frequent maintenance that mechanical agitators require, pulsing is used instead in these extraction columns.
This chemistry -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pulsed_columns |
Pulsed electron paramagnetic resonance (EPR) is an electron paramagnetic resonance technique that involves the alignment of the net magnetization vector of the electron spins in a constant magnetic field . This alignment is perturbed by applying a short oscillating field, usually a microwave pulse. One can then measure the emitted microwave signal which is created by the sample magnetization. Fourier transformation of the microwave signal yields an EPR spectrum in the frequency domain. With a vast variety of pulse sequences it is possible to gain extensive knowledge on structural and dynamical properties of paramagnetic compounds. Pulsed EPR techniques such as electron spin echo envelope modulation (ESEEM) or pulsed electron nuclear double resonance (ENDOR) can reveal the interactions of the electron spin with its surrounding nuclear spins .
Electron paramagnetic resonance (EPR) or electron spin resonance (ESR) is a spectroscopic technique widely used in biology, chemistry, medicine and physics to study systems with one or more unpaired electrons. Because of the specific relation between the magnetic parameters, electronic wavefunction and the configuration of the surrounding non-zero spin nuclei, EPR and ENDOR provide information on the structure, dynamics and the spatial distribution of the paramagnetic species. However, these techniques are limited in spectral and time resolution when used with traditional continuous wave methods. This resolution can be improved in pulsed EPR by investigating interactions separately from each other via pulse sequences.
R. J. Blume reported the first electron spin echo in 1958, which came from a solution of sodium in ammonia at its boiling point, −33.8 °C. [ 1 ] A magnetic field of 0.62 mT was used, requiring a frequency of 17.4 MHz. The first microwave electron spin echoes were reported in the same year by Gordon and Bowers using 23 GHz excitation of dopants in silicon . [ 2 ]
Much of the pioneering early pulsed EPR was conducted in the group of W. B. Mims at Bell Labs during the 1960s. In the first decade only a small number of groups worked in the field, because of the expensive instrumentation, the lack of suitable microwave components and the slow digital electronics of the time. The first observation of electron spin echo envelope modulation (ESEEM) was made in 1961 by Mims, Nassau and McGee. [ 3 ] Pulsed electron nuclear double resonance (ENDOR) was invented in 1965 by Mims. [ 4 ] In this experiment, pulsed NMR transitions are detected with pulsed EPR. ESEEM and pulsed ENDOR continue to be important for studying nuclear spins coupled to electron spins.
In the 1980s, the arrival of the first commercial pulsed EPR and ENDOR spectrometers in the X band frequency range led to fast growth of the field. In the 1990s, in parallel with the emerging high-field EPR, pulsed EPR and ENDOR became a new, fast-advancing magnetic resonance spectroscopy tool, and the first commercial pulsed EPR and ENDOR spectrometer at W band frequencies appeared on the market.
The basic principles of pulsed EPR and NMR are similar. Differences can be found in the relative size of the magnetic interactions and in the relaxation rates, which are orders of magnitude larger (faster) in EPR than in NMR. A full description of the theory is given within the quantum mechanical formalism, but since the magnetization is measured as a bulk property, a more intuitive picture can be obtained with a classical description. For a better understanding of the concept of pulsed EPR, consider the effects on the magnetization vector in the laboratory frame as well as in the rotating frame . As the animation below shows, in the laboratory frame the static magnetic field B 0 is assumed to be parallel to the z-axis and the microwave field B 1 parallel to the x-axis. When an electron spin is placed in a magnetic field it experiences a torque which causes its magnetic moment to precess around the magnetic field. The precession frequency is known as the Larmor frequency ω L = γB 0 , [ 5 ]
where γ is the gyromagnetic ratio and B 0 the magnetic field. The electron spins are characterized by two quantum mechanical states, one parallel and one antiparallel to B 0 . Because of the lower energy of the parallel state more electron spins can be found in this state according to the Boltzmann distribution . This imbalanced population results in a net magnetization, which is the vector sum of all magnetic moments in the sample, parallel to the z-axis and the magnetic field. To better comprehend the effects of the microwave field B 1 it is easier to move to the rotating frame.
EPR experiments usually use a microwave resonator designed to create a linearly polarized microwave field B 1 , perpendicular to the much stronger applied magnetic field B 0 . The rotating frame is fixed to the rotating B 1 components. First we assume that we are on resonance with the precessing magnetization vector M 0 .
Therefore, the component of B 1 will appear stationary. In this frame the precessing magnetization components also appear stationary, which leads to the disappearance of B 0 , and we need only consider B 1 and M 0 . The M 0 vector is under the influence of the stationary field B 1 , leading to another precession of M 0 , this time around B 1 at the frequency ω 1 = γB 1 .
This angular frequency ω 1 is also called the Rabi frequency . Assuming B 1 to be parallel to the x-axis, the magnetization vector will rotate around the +x-axis in the zy-plane as long as the microwaves are applied. The angle by which M 0 is rotated is called the tip angle α and is given by α = γB 1 t p = ω 1 t p .
Here t p is the duration for which B 1 is applied, also called the pulse length. The pulses are labeled by the rotation of M 0 which they cause and the direction from which they come, since the microwaves can be phase-shifted from the x-axis onto the y-axis. For example, a +y π/2 pulse means that a B 1 field, which has been phase-shifted 90 degrees out of the +x into the +y direction, has rotated M 0 by a tip angle of π/2, hence the magnetization would end up along the −x-axis. That means the end position of the magnetization vector M 0 depends on the length, the magnitude and the direction of the microwave pulse B 1 . In order to understand how the sample emits microwaves after the intense microwave pulse, we need to go back to the laboratory frame. In the rotating frame and on resonance the magnetization appears stationary along the x- or y-axis after the pulse. In the laboratory frame it becomes a rotating magnetization in the x-y plane at the Larmor frequency. This rotation generates a signal which is maximized if the magnetization vector is exactly in the xy-plane. This microwave signal generated by the rotating magnetization vector is called the free induction decay (FID). [ 6 ]
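As a worked example of the tip-angle relation α = γB 1 t p (the field amplitude below is an assumption, not a value from the text), the length of a π/2 pulse can be estimated as follows:

```python
# pi/2 pulse length from alpha = gamma * B1 * t_p.
import math

gamma_e = 1.760859e11        # electron gyromagnetic ratio (rad s^-1 T^-1)
B1 = 1e-3                    # assumed microwave field amplitude (T)

omega_1 = gamma_e * B1       # Rabi frequency (rad/s)
t_p = (math.pi / 2) / omega_1
print(f"pi/2 pulse length: {t_p * 1e9:.2f} ns")   # of order ten nanoseconds
```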
Another assumption we made was the exact resonance condition, in which the Larmor frequency is equal to the microwave frequency. In reality, EPR spectra have many different frequencies and not all of them can be exactly on resonance; therefore we need to take off-resonance effects into account. The off-resonance effects lead to three main consequences. The first consequence can be better understood in the rotating frame. A π/2 pulse leaves magnetization in the xy-plane, but since the microwave field (and therefore the rotating frame) does not have the same frequency as the precessing magnetization vector, the magnetization vector rotates in the xy-plane, either faster or slower than the microwave magnetic field B 1 . The rotation rate is governed by the frequency difference Δω = ω L − ω mw , where ω mw is the microwave frequency.
If Δω is 0, then the microwave field rotates as fast as the magnetization vector and both appear stationary to each other. If Δω>0, then the magnetization rotates faster than the microwave field component, in a counter-clockwise motion, and if Δω<0, then the magnetization is slower and rotates clockwise. This means that the individual frequency components of the EPR spectrum will appear as magnetization components rotating in the xy-plane with the rotation frequency Δω. The second consequence appears in the laboratory frame. Here B 1 tips the magnetization differently out of the z-axis, since B 0 does not disappear when not on resonance, due to the precession of the magnetization vector at Δω. This means that the magnetization is now tipped by an effective magnetic field B eff , which originates from the vector sum of B 1 and B 0 . The magnetization is then tipped around B eff at a faster effective rate ω eff = √(ω 1 ² + Δω²).
This leads directly to the third consequence that the magnetization can not be efficiently tipped into the xy-plane because B eff does not lie in the xy-plane, as B 1 does. The motion of the magnetization now defines a cone. That means as Δω becomes larger, the magnetization is tipped less effectively into the xy-plane, and the FID signal decreases. In broad EPR spectra where Δω > ω 1 it is not possible to tip all the magnetization into the xy-plane to generate a strong FID signal. This is why it is important to maximize ω 1 or minimize the π/2 pulse length for broad EPR signals.
So far the magnetization was tipped into the xy-plane and it remained there with the same magnitude. However, in reality the electron spins interact with their surroundings and the magnetization in the xy-plane will decay and eventually return to alignment with the z-axis. This relaxation process is described by the spin-lattice relaxation time T 1 , which is a characteristic time needed by the magnetization to return to the z-axis, and by the spin-spin relaxation time T 2 , which describes the vanishing time of the magnetization in the xy-plane. The spin-lattice relaxation results from the urge of the system to return to thermal equilibrium after it has been perturbed by the B 1 pulse. Return of the magnetization parallel to B 0 is achieved through interactions with the surroundings, that is spin-lattice relaxation. The corresponding relaxation time needs to be considered when extracting a signal from noise, where the experiment needs to be repeated several times, as quickly as possible. In order to repeat the experiment, one needs to wait until the magnetization along the z-axis has recovered, because if there is no magnetization in z direction, then there is nothing to tip into the xy-plane to create a significant signal.
The spin-spin relaxation time, also called the transverse relaxation time, is related to homogeneous and inhomogeneous broadening. Inhomogeneous broadening results from the fact that different spins experience local magnetic field inhomogeneities (different surroundings), creating a large number of spin packets characterized by a distribution of Δω. As the net magnetization vector precesses, some spin packets slow down due to lower fields and others speed up due to higher fields, leading to a fanning out of the magnetization vector that results in the decay of the EPR signal. The other packets contribute to the transverse magnetization decay due to homogeneous broadening. In this process all the spins in one spin packet experience the same magnetic field and interact with each other, which can lead to mutual and random spin flip-flops. These fluctuations contribute to a faster fanning out of the magnetization vector.
All the information about the frequency spectrum is encoded in the motion of the transverse magnetization. The frequency spectrum is reconstructed using the time behavior of the transverse magnetization made up of y- and x-axis components. It is convenient that these two can be treated as the real and imaginary components of a complex quantity and use the Fourier theory to transform the measured time domain signal into the frequency domain representation. This is possible because both the absorption (real) and the dispersion (imaginary) signals are detected.
The FID signal decays away, and for very broad EPR spectra this decay is rather fast due to the inhomogeneous broadening. To obtain more information one can recover the disappeared signal with another microwave pulse to produce a Hahn echo . [ 7 ] After applying a π/2 pulse (90°), the magnetization vector is tipped into the xy-plane, producing an FID signal. Different frequencies in the EPR spectrum (inhomogeneous broadening) cause this signal to "fan out", meaning that the slower spin-packets trail behind the faster ones. After a certain time t , a π pulse (180°) is applied to the system, inverting the magnetization, and the fast spin-packets are then behind, catching up with the slow spin-packets. A complete refocusing of the signal then occurs at time 2t . A spin echo produced by the second microwave pulse can remove all inhomogeneous broadening effects. After all of the spin-packets bunch up, they dephase again just like an FID. In other words, a spin echo is a reversed FID followed by a normal FID, which can be Fourier transformed to obtain the EPR spectrum. The longer the time between the pulses, the smaller the echo will be due to spin relaxation. When this relaxation leads to an exponential decay of the echo height, the decay constant is the phase memory time T M , which can have many contributions such as transverse relaxation, spectral diffusion, spin diffusion and instantaneous diffusion. Changing the times between the pulses leads to a direct measurement of T M , as shown in the spin echo decay animation below.
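A minimal numerical sketch of this refocusing (illustrative numbers, not from the cited work): spin packets with a Gaussian spread of offsets Δω dephase after the π/2 pulse, have their accrued phase negated by the π pulse at time τ, and regain full coherence at total time 2τ:

```python
# Hahn echo from an ensemble of spin packets with random frequency offsets.
import numpy as np

rng = np.random.default_rng(0)
d_omega = rng.normal(0.0, 2 * np.pi * 5e6, 10000)  # offsets (rad/s), assumed 5 MHz spread
tau = 200e-9                                       # assumed inter-pulse delay (s)

phase_at_pi = -d_omega * tau      # the pi pulse negates the phase accrued during tau

def coherence(t_after_pi):
    # |transverse magnetization| = magnitude of the mean packet phase factor
    return np.abs(np.mean(np.exp(1j * (phase_at_pi + d_omega * t_after_pi))))

for t in np.linspace(0.0, 2 * tau, 5):
    print(f"pi-pulse + {t * 1e9:6.1f} ns: |M_xy| = {coherence(t):.3f}")
# coherence returns to ~1 at t = tau after the pi pulse, i.e. total time 2*tau
```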
ESEEM [ 3 ] [ 5 ] and pulsed ENDOR [ 4 ] [ 5 ] are widely used echo experiments, in which the interaction of electron spins with the nuclei in their environment can be studied and controlled.
A popular pulsed EPR experiment currently is double electron-electron resonance (DEER), also known as pulsed electron-electron double resonance (PELDOR). [ 5 ] In this experiment, two frequencies control two spins to probe their coupling. The distance between the spins can then be inferred from their coupling strength. This information is used to elucidate the structures of large biomolecules. PELDOR spectroscopy is a versatile tool for structural investigations of proteins, even in a cellular environment. [ 8 ] | https://en.wikipedia.org/wiki/Pulsed_electron_paramagnetic_resonance
A pulsed field gradient is a short, timed pulse with spatial-dependent field intensity. Any gradient is identified by four characteristics: axis, strength, shape and duration.
Pulsed field gradient (PFG) techniques are key to magnetic resonance imaging , spatially selective spectroscopy and studies of diffusion via diffusion ordered nuclear magnetic resonance spectroscopy (DOSY). [ 1 ] [ 2 ] PFG techniques are widely used as an alternative to phase cycling in modern NMR spectroscopy.
The effect of a uniform magnetic field gradient in the z-direction on a spin I is a rotation around the z-axis by an angle γ I G z τ, where G is the gradient magnitude (along the z-direction), γ I is the gyromagnetic ratio of spin I, z is the position, and τ is the duration of the gradient pulse. It introduces a position-dependent phase factor to the magnetization:

Φ(z, τ) = γ I G z τ
The time duration τ is in the order of milliseconds.
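A short sketch of the phase factor above for protons (gradient strength and duration are illustrative assumptions):

```python
# Phase accrued under a z-gradient pulse, Phi(z, tau) = gamma * G * z * tau.
import numpy as np

gamma_h = 2.675e8        # 1H gyromagnetic ratio (rad s^-1 T^-1)
G = 0.1                  # assumed gradient strength along z (T/m)
tau = 1e-3               # assumed gradient duration (s), i.e. of order milliseconds

z = np.linspace(-0.01, 0.01, 5)     # positions across a 2 cm sample (m)
phi = gamma_h * G * z * tau         # position-encoded phase (rad)
for zi, p in zip(z, phi):
    print(f"z = {zi * 1e3:+5.1f} mm -> Phi = {p:+8.1f} rad")
```

The linear dependence of Φ on z is what makes the gradient spatially selective: spins at different positions acquire different phases.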
This nuclear magnetic resonance –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pulsed_field_gradient |
A pulsed field magnet is a strong electromagnet which is powered by a brief pulse of electric current through its windings rather than a continuous current, producing a brief but strong pulse of magnetic field . Pulsed field magnets are used in research fields such as materials science to study the effects of strong magnetic fields, since they can produce stronger fields than continuous magnets. The maximum field strength that continuously powered high-field electromagnets can produce is limited by the enormous waste heat generated in the windings by the large currents required. Therefore, by applying brief pulses of current, with time between the pulses to allow the heat to dissipate, stronger currents can be used and thus stronger magnetic fields can be generated. The magnetic field produced by pulsed field magnets can reach between 50 and 100 T , and lasts several tens of milliseconds.
This nuclear magnetic resonance –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pulsed_field_magnet |
The Pummerer rearrangement is an organic reaction whereby an alkyl sulfoxide rearranges to an α- acyloxy – thioether (mono thioacetal -ester) in the presence of acetic anhydride . [ 1 ] [ 2 ] [ 3 ]
The stoichiometry of the reaction is: RS(O)CH 2 R′ + (CH 3 CO) 2 O → RSCH(OC(O)CH 3 )R′ + CH 3 CO 2 H
Aside from acetic anhydride , trifluoroacetic anhydride and trifluoromethanesulfonic anhydride have been employed as activators. [ 4 ] Common nucleophiles besides acetates are arenes, alkenes, amides, and phenols.
The usage of α-acyl sulfoxides and Lewis acids , such as TiCl 4 and SnCl 4 , allow the reaction to proceed at lower temperatures (0 °C). [ 5 ]
Thionyl chloride can be used in place of acetic anhydride to trigger the elimination for forming the electrophilic intermediate and supplying chloride as the nucleophile to give an α-chloro-thioether: [ 6 ]
Other anhydrides and acyl halides can give similar products. Inorganic acids can also give this reaction. This product can be converted to aldehyde or ketone by hydrolysis . [ 7 ]
The mechanism of the Pummerer rearrangement begins with the acylation of the sulfoxide ( resonance structures 1 and 2 ) by acetic anhydride to give 3 , with acetate as byproduct. The acetate then acts as a catalyst to induce an elimination reaction to produce the cationic- thial structure 4 , with acetic acid as byproduct. Finally, acetate attacks the thial to give the final product 5 .
The activated thial electrophile can be trapped by various intramolecular and intermolecular nucleophiles to form carbon –carbon bonds and carbon–heteroatom bonds.
The intermediate is so electrophilic that even neutral nucleophiles can be used, including aromatic rings with electron donating groups such as 1,3-benzodioxole : [ 8 ]
It is possible to perform the rearrangement using selenium in the place of sulfur. [ 9 ]
When a substituent on the α position can form a stable carbocation , this group rather than the α-hydrogen atom will eliminate in the intermediate step. This variation is called a Pummerer fragmentation . [ 10 ] This reaction type is demonstrated below with a set of sulfoxides and trifluoroacetic anhydride (TFAA):
The organic group "R2" shown in the diagram above on the bottom right is the methyl violet carbocation, whose pK R+ of 9.4 is not sufficient to out-compete loss of H + and therefore a classical Pummerer rearrangement occurs. The reaction on the left is a fragmentation because the leaving group with pK R+ = 23.7 is particularly stable.
The reaction was discovered by Rudolf Pummerer [ de ] , who reported it in 1909. [ 11 ] [ 12 ] | https://en.wikipedia.org/wiki/Pummerer_rearrangement |
The Pump-areometer was an early hydrometer (a variant of the syphon-hydrometer ), [ 1 ] credited to Floris Nollet . [ 2 ]
The principle is an inverted glass tube with one leg in each of two liquids, the upper end being connected to a pump. Once sufficient air is removed from the pipe, the liquids rise in both legs, in inverse proportion to their density. If the density of one liquid is known, that of the other can be simply calculated. A reasonably wide tube is used to minimise the effects of capillary attraction.
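A worked example of this principle (heights are illustrative assumptions): since the same pressure reduction supports both columns, h 1 ρ 1 = h 2 ρ 2 , and the unknown density follows from the ratio of the rises.

```python
# Density from column heights, using h1 * rho1 = h2 * rho2.
h_water, h_sample = 200.0, 250.0   # assumed column rises (mm)
rho_water = 1.000                  # reference density (g/cm^3)

rho_sample = rho_water * h_water / h_sample
print(f"sample density ~ {rho_sample:.3f} g/cm^3")   # 0.800 g/cm^3
```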
More sophisticated "four leg" variants could eliminate the capillary effect on the calculation. [ 3 ]
This standards - or measurement -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pump-areometer |
The Pumpherston retort (also known as the Bryson retort ) was a type of oil-shale retort used in Britain at the end of the 19th and beginning of the 20th century. It marked the separation of the oil-shale industry from the coal industry, as it was designed specifically for oil-shale retorting. [ 1 ] The retort is named after Pumpherston , West Lothian , which was one of the major oil shale areas in Great Britain. The retort was commercialized by the Pumpherston Oil Company .
The Pumpherston retort was invented and patented in 1894 by William Fraser, James Bryson, and James Jones of Pumpherston Oil Company. [ 2 ] By 1910, 1,528 Pumpherston retorts were used in Scotland. [ 1 ] In addition, the retort was used in Spain and Australia . [ 3 ]
The Pumpherston design was used at Newnes and Torbane , in Australia, by the Commonwealth Oil Corporation . The retorts at Newnes were later modified by John Fell , who added more off-takes to make the design better suited to oil-rich shale. The resulting variant was patented by Fell and was referred to as a 'modified Pumpherston' or 'Fell' retort. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] That modified design was also used at Glen Davis . [ 4 ] [ 9 ]
The Pumpherston retort was a 35 feet (11 m) high cylindrical vessel containing two main sections. [ 10 ] The upper section was made of iron and the lower section of fire bricks. The raw oil shale was fed in at the top of the retort. Shale oil and oil shale gas were distilled in the upper section at a temperature of 750 to 900 °F (399 to 482 °C). In the lower section, the temperature rose to 1,300 °F (704 °C) and steam was added to produce ammonia . The process required approximately 1,000 imperial barrels (160,000 L; 36,000 imp gal; 43,000 US gal) of water equivalent of steam per ton of oil shale. [ 1 ] [ 10 ] [ 11 ]
The retort had a 15-ton capacity, and the residence time was 24 hours. It was started up by combustion of coal, but once the process was running it was switched to the oil shale gas produced. [ 1 ] | https://en.wikipedia.org/wiki/Pumpherston_retort
In sound recording and reproduction , and music, pumping or gain pumping is a creative misuse of compression , the "audible unnatural level changes associated primarily with the release of a compressor." [ 1 ] There is no correct way to produce pumping, and according to Alex Case, the effect may result from selecting "too slow or too fast...or too, um, medium" attack and release settings. [ 2 ]
The technique is common in rock and electronic dance music . Examples include Phil Selway 's ( Radiohead ) drum track on " Exit Music (For a Film) ", electro percussion loop in Radiohead 's " Idioteque ", Benny Benassi 's " Finger Food ", and the ride cymbals on Portishead 's " Pedestal ". [ 3 ] : 82–83
Side-chain pumping is a more advanced technique using a compressor's side-chain feature which, "uses the amplitude envelope (dynamics profile) of one track as a trigger for a compressor used in another track." When the amplitude of a note of the side-chained instrument surpasses the threshold setting of the compressor it attenuates the compressed instrument, producing volume swells offset from the side-chained note by a selected release time. [ 3 ] : 83 Found in house, techno, IDM, hip hop, dubstep, and drum 'n' bass, Eric Prydz 's " Call On Me " is credited with popularizing the technique, though Daft Punk 's " One More Time " contributed, while clear examples include Madonna 's " Get Together " and Benny Benassi's " My Body (feat. Mia J) ". [ 3 ] : 84 | https://en.wikipedia.org/wiki/Pumping_(audio) |
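A minimal sketch of side-chain pumping in code (an illustrative model, not any particular plug-in's algorithm; envelope times, threshold and ratio are assumptions): the envelope of the side-chain track drives downward gain reduction on the compressed track, producing the characteristic ducking.

```python
# Envelope follower plus a downward compressor keyed by a side-chain input.
import numpy as np

def envelope(x, sr, attack=0.001, release=0.1):
    # one-pole follower over the rectified side-chain signal
    a_att = np.exp(-1.0 / (attack * sr))
    a_rel = np.exp(-1.0 / (release * sr))
    env = np.empty_like(x)
    level = 0.0
    for i, v in enumerate(np.abs(x)):
        coef = a_att if v > level else a_rel
        level = coef * level + (1.0 - coef) * v
        env[i] = level
    return env

def sidechain_compress(track, sidechain, sr, threshold=0.2, ratio=4.0):
    env = np.maximum(envelope(sidechain, sr), 1e-9)
    out_level = threshold + np.maximum(env - threshold, 0.0) / ratio
    gain = np.where(env > threshold, out_level / env, 1.0)
    return track * gain            # the track "ducks" on each side-chain hit

sr = 44100
t = np.arange(sr) / sr
kick = np.sin(2 * np.pi * 60 * t) * (t % 0.5 < 0.05)   # kick burst every half second
pad = 0.5 * np.sin(2 * np.pi * 220 * t)                # sustained pad to be pumped
pumped = sidechain_compress(pad, kick, sr)
```

The release time sets how quickly the pad swells back after each kick, which is what gives the pumping its rhythmic feel.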
In the theory of formal languages , the pumping lemma for regular languages is a lemma that describes an essential property of all regular languages . Informally, it says that all sufficiently long strings in a regular language may be pumped —that is, have a middle section of the string repeated an arbitrary number of times—to produce a new string that is also part of the language. The pumping lemma is useful for proving that a specific language is not a regular language, by showing that the language does not have the property.
Specifically, the pumping lemma says that for any regular language L {\displaystyle L} , there exists a constant p {\displaystyle p} such that any string w {\displaystyle w} in L {\displaystyle L} with length at least p {\displaystyle p} can be split into three substrings x {\displaystyle x} , y {\displaystyle y} and z {\displaystyle z} ( w = x y z {\displaystyle w=xyz} , with y {\displaystyle y} being non-empty), such that the strings x z , x y z , x y y z , x y y y z , . . . {\displaystyle xz,xyz,xyyz,xyyyz,...} are also in L {\displaystyle L} . The process of repeating y {\displaystyle y} zero or more times is known as "pumping". Moreover, the pumping lemma guarantees that the length of x y {\displaystyle xy} will be at most p {\displaystyle p} , thus giving a "small" substring x y {\displaystyle xy} that has the desired property.
Languages with a finite number of strings satisfy the pumping lemma vacuously, by taking $p$ equal to the maximum string length in $L$ plus one. Then no string in $L$ has length at least $p$, so the condition is never tested.
The pumping lemma was first proven by Michael Rabin and Dana Scott in 1959, [ 1 ] and rediscovered shortly after by Yehoshua Bar-Hillel , Micha A. Perles , and Eli Shamir in 1961, as a simplification of their pumping lemma for context-free languages . [ 2 ] [ 3 ]
Let $L$ be a regular language. Then there exists an integer $p \geq 1$, depending only on $L$, such that every string $w$ in $L$ of length at least $p$ ($p$ is called the "pumping length") [ 4 ] can be written as $w = xyz$ (i.e., $w$ can be divided into three substrings) satisfying the following conditions:

1. $|y| \geq 1$;
2. $|xy| \leq p$; and
3. $xy^{n}z \in L$ for all $n \geq 0$.
$y$ is the substring that can be pumped (removed or repeated any number of times, with the resulting string always in $L$). Condition (1) means the loop $y$ to be pumped must be of length at least one, that is, not the empty string; condition (2) means the loop must occur within the first $p$ characters. $|x|$ must be smaller than $p$ (a consequence of (1) and (2)), but apart from that, there is no restriction on $x$ and $z$.
In simple words, for any regular language $L$, any sufficiently long string $w \in L$ can be split into three parts, $w = xyz$, such that all the strings $xy^{n}z$ for $n \geq 0$ are also in $L$.
Below is a formal expression of the pumping lemma.
$$
\begin{array}{l}
\forall L \subseteq \Sigma^{*}\colon\ \text{regular}(L) \implies \\
\quad \exists p \geq 1\colon\ \forall w \in L\colon\ |w| \geq p \implies \\
\qquad \exists x, y, z \in \Sigma^{*}\colon\ (w = xyz) \land (|y| \geq 1) \land (|xy| \leq p) \land (\forall n \geq 0\colon\ xy^{n}z \in L)
\end{array}
$$
The pumping lemma is often used to prove that a particular language is non-regular: a proof by contradiction may consist of exhibiting a string (of the required length) in the language that lacks the property outlined in the pumping lemma.
Example: The language $L = \{a^{n}b^{n} : n \geq 0\}$ over the alphabet $\Sigma = \{a, b\}$ can be shown to be non-regular as follows. Suppose $L$ were regular with pumping length $p$, and consider $w = a^{p}b^{p} \in L$. The lemma gives a split $w = xyz$ with $|xy| \leq p$ and $|y| \geq 1$, so $y$ lies entirely within the leading block of $a$'s. Pumping once, the string $xy^{2}z$ contains more $a$'s than $b$'s and is therefore not in $L$, contradicting the lemma; hence $L$ is not regular.
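The case analysis in this proof is small enough to check by machine. The following sketch (plain Python written for this article; `in_L` and `pumping_fails_everywhere` are illustrative names) enumerates every split of $a^{p}b^{p}$ permitted by the lemma and confirms that pumping always produces a string outside the language:

```python
def in_L(s):
    """Membership test for L = { a^n b^n : n >= 0 }."""
    n = len(s) // 2
    return s == "a" * n + "b" * n

def pumping_fails_everywhere(p):
    """True if no split w = xyz with |xy| <= p and |y| >= 1 keeps
    every pumped string x y^i z inside L."""
    w = "a" * p + "b" * p
    for xy_len in range(1, p + 1):          # |xy| <= p
        for y_len in range(1, xy_len + 1):  # |y| >= 1
            x = w[: xy_len - y_len]
            y = w[xy_len - y_len : xy_len]
            z = w[xy_len:]
            # pumping to i = 2 already yields more a's than b's
            if in_L(x + y * 2 + z):
                return False
    return True

assert all(pumping_fails_everywhere(p) for p in range(1, 8))
```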
The proof that the language of balanced (i.e., properly nested) parentheses is not regular follows the same idea. Given $p$, there is a string of balanced parentheses that begins with more than $p$ left parentheses, so that $y$ will consist entirely of left parentheses. By repeating $y$, a string can be produced that does not contain the same number of left and right parentheses, and so cannot be balanced.
For every regular language there is a finite-state automaton (FSA) that accepts the language. The number of states in such an FSA is counted, and that count is used as the pumping length $p$. For a string of length at least $p$, let $q_{0}$ be the start state and let $q_{1}, \ldots, q_{p}$ be the sequence of the next $p$ states visited as the string is processed. Because the FSA has only $p$ states, within this sequence of $p + 1$ visited states there must be at least one state that is repeated. Write $q_{s}$ for such a state. The transitions that take the machine from the first encounter of state $q_{s}$ to the second encounter of state $q_{s}$ match some string. This string is called $y$ in the lemma, and since the machine will match a string without the $y$ portion, or with the string $y$ repeated any number of times, the conditions of the lemma are satisfied.
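The construction is easy to trace in code. The sketch below defines a DFA as a transition table (the two-state machine here is a hypothetical example, not the automaton from the figure below), walks the first $p$ symbols of a string, finds the repeated state guaranteed by the pigeonhole argument, and reads off $x$, $y$ and $z$:

```python
def pump_split(delta, start, w):
    """Split w into (x, y, z) using the first repeated state on its path.

    delta maps (state, symbol) -> state. Looping on y revisits the same
    state, so x + y*i + z follows a valid path for every i >= 0.
    """
    states = {s for (s, _) in delta} | set(delta.values())
    p = len(states)                     # pumping length = number of states
    path = [start]
    for ch in w[:p]:                    # only the first p symbols matter
        path.append(delta[(path[-1], ch)])
    seen = {}
    for i, state in enumerate(path):
        if state in seen:               # pigeonhole: a repeat must exist
            j = seen[state]
            return w[:j], w[j:i], w[i:]
        seen[state] = i
    raise ValueError("string shorter than the pumping length")

# Example: a two-state DFA over {a, b} accepting strings with an
# even number of a's.
delta = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}
x, y, z = pump_split(delta, 0, "abab")   # -> ("a", "b", "ab")
assert x + y + z == "abab" and len(y) >= 1 and len(x + y) <= 2
```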
For example, the following image shows an FSA.
The FSA accepts the string: abcd . Since this string has a length at least as large as the number of states, which is four (so the total number of states that the machine passes through to scan abcd would be 5), the pigeonhole principle indicates that there must be at least one repeated state among the start state and the next four visited states. In this example, only $q_{1}$ is a repeated state. Since the substring bc takes the machine through transitions that start at state $q_{1}$ and end at state $q_{1}$ , that portion could be repeated and the FSA would still accept, giving the string abcbcd . Alternatively, the bc portion could be removed and the FSA would still accept, giving the string ad . In terms of the pumping lemma, the string abcd is broken into an $x$ portion a , a $y$ portion bc and a $z$ portion d .
As a side remark, the problem of checking whether a given string can be accepted by a given nondeterministic finite automaton without visiting any state repeatedly is NP-hard .
If a language $L$ is regular, then there exists a number $p \geq 1$ (the pumping length) such that every string $uwv$ in $L$ with $|w| \geq p$ can be written in the form

$$w = xyz$$

with strings $x$, $y$ and $z$ such that $|xy| \leq p$, $|y| \geq 1$ and

$$uxy^{n}zv \in L \quad \text{for every integer } n \geq 0.$$
From this, the above standard version follows as a special case, with both $u$ and $v$ being the empty string.
Since the general version imposes stricter requirements on the language, it can be used to prove the non-regularity of many more languages.
While the pumping lemma states that all regular languages satisfy the conditions described above, the converse of this statement is not true: a language that satisfies these conditions may still be non-regular. In other words, both the original and the general version of the pumping lemma give a necessary but not sufficient condition for a language to be regular.
For example, consider the following language:

$$L = \{uvwxy \mid u, y \in \{0,1,2,3\}^{*};\ v, w, x \in \{0,1,2,3\};\ (v = w \lor v = x \lor w = x)\} \cup \{w \mid w \in \{0,1,2,3\}^{*} \text{ and precisely } \tfrac{1}{7} \text{ of the characters in } w \text{ are 3's}\}$$
In other words, $L$ contains all strings over the alphabet $\{0,1,2,3\}$ with a substring of length 3 including a duplicate character, as well as all strings over this alphabet where precisely 1/7 of the string's characters are 3's. This language is not regular but can still be "pumped" with $p = 5$. Suppose some string $s$ has length at least 5. Then, since the alphabet has only four characters, at least two of the first five characters in the string must be duplicates. They are separated by at most three characters.
The Myhill–Nerode theorem provides a test that exactly characterizes regular languages. The typical method for proving that a language is regular is to construct either a finite-state machine or a regular expression for the language. | https://en.wikipedia.org/wiki/Pumping_lemma_for_regular_languages |
A pumpjack is the overground drive for a reciprocating piston pump in an oil well . [ 1 ]
It is used to mechanically lift liquid out of the well if there is not enough bottom hole pressure for the liquid to flow all the way to the surface. The arrangement is often used for onshore wells. Pumpjacks are common in oil-rich areas .
Depending on the size of the pump, it generally produces 5 to 40 litres (1 to 9 imp gal; 1.5 to 10.5 US gal) of liquid at each stroke. Often this is an emulsion of crude oil and water. Pump size is also determined by the depth and weight of the oil to remove, with deeper extraction requiring more power to move the increased weight of the discharge column (discharge head).
A beam-type pumpjack converts the rotary motion of the motor (usually an electric motor ) to the vertical reciprocating motion necessary to drive the polished-rod and accompanying sucker rod and column (fluid) load. The engineering term for this type of mechanism is a walking beam . It was often employed in stationary and marine steam engine designs in the 18th and 19th centuries.
A pumpjack is also called a beam pump, walking beam pump, horsehead pump, nodding donkey pump (donkey pumper), rocking horse pump, grasshopper pump, sucker rod pump, dinosaur pump, Big Texan pump, thirsty bird pump, hobby horse, or just pumping unit. [ 2 ]
In the early days, pumpjacks worked by rod lines running horizontally above the ground to a wheel on a rotating eccentric in a mechanism known as a central power. [ 3 ] The central power, which might operate a dozen or more pumpjacks, would be powered by a steam or internal combustion engine or by an electric motor. Among the advantages of this scheme was only having one prime mover to power all the pumpjacks rather than individual motors for each. However, among the many difficulties was maintaining system balance as individual well loads changed.
Modern pumpjacks are powered by a prime mover. This is commonly an electric motor, but internal combustion engines are used in isolated locations without access to electricity, or, in the cases of water pumpjacks, where three-phase power is not available (while single phase motors exist at least up to 60 horsepower or 45 kilowatts, [ 4 ] providing power to single-phase motors above 10 horsepower or 7.5 kilowatts can cause powerline problems, notably voltage sag on startup, [ 5 ] and many pumps require more than 10 horsepower). Common off-grid pumpjack engines run on natural gas , often casing gas produced from the well, but pumpjacks have been run on many types of fuel, such as propane and diesel fuel . In harsh climates, such motors and engines may be housed in a shack for protection from the elements. Engines that power water pumpjacks often receive natural gas from the nearest available gas grid .
The prime mover runs a set of pulleys to the transmission, often a double-reduction gearbox , which drives a pair of cranks , generally with counterweights installed on them to offset the weight of the heavy rod assembly. The cranks raise and lower one end of an I-beam which is free to move on an A-frame . On the other end of the beam is a curved metal box called a horse head or donkey head, so named due to its appearance. A cable made of steel (occasionally fibreglass ), called a bridle, connects the horse head to the polished rod, a piston that passes through the stuffing box .
The cranks themselves also produce counterbalance due to their weight, so on pumpjacks that do not carry very heavy loads, the weight of the cranks themselves may be enough to balance the well load.
Sometimes, however, crank-balanced units can become prohibitively heavy due to the need for counterweights. Lufkin Industries offer "air-balanced" units, where counterbalance is provided by a pneumatic cylinder charged with air from a compressor , eliminating the need for counterweights.
The polished rod has a close fit to the stuffing box, letting it move in and out of the tubing without fluid escaping. (The tubing is a pipe that runs to the bottom of the well through which the liquid is produced.) The bridle follows the curve of the horse head as it lowers and raises to create a vertical or nearly-vertical stroke. The polished rod is connected to a long string of rods called sucker rods, which run through the tubing to the down-hole pump, usually positioned near the bottom of the well.
At the bottom of the tubing is the down-hole pump. This pump has two ball check valves : a stationary valve at bottom called the standing valve, and a valve on the piston connected to the bottom of the sucker rods that travels up and down as the rods reciprocate, known as the traveling valve. Reservoir fluid enters from the formation into the bottom of the borehole through perforations that have been made through the casing and cement (the casing is a larger metal pipe that runs the length of the well, which has cement placed between it and the earth; the tubing, pump, and sucker rod are all inside the casing).
When the rods at the pump end are travelling up, the traveling valve is closed and the standing valve is open (due to the drop in pressure in the pump barrel). Consequently, the pump barrel fills with the fluid from the formation as the traveling piston lifts the previous contents of the barrel upwards. When the rods begin pushing down, the traveling valve opens and the standing valve closes (due to an increase in pressure in the pump barrel). The traveling valve drops through the fluid in the barrel (which had been sucked in during the upstroke). The piston then reaches the end of its stroke and begins its path upwards again, repeating the process.
Often, gas is produced through the same perforations as the oil. This can be problematic if gas enters the pump, because it can result in what is known as gas locking, where insufficient pressure builds up in the pump barrel to open the valves (due to compression of the gas) and little or nothing is pumped. To preclude this, the inlet for the pump can be placed below the perforations. As the gas-laden fluid enters the well bore through the perforations, the gas bubbles up the annulus (the space between the casing and the tubing) while the liquid moves down to the standing valve inlet. Once at the surface, the gas is collected through piping connected to the annulus.
Pumpjacks can also be used to drive what would now be considered old-fashioned hand-pumped water wells . The scale of the technology is frequently smaller than for an oil well, and can typically fit on top of an existing hand-pumped well head. The technology is simple, typically using a parallel-bar double-cam lift driven from a low-power electric motor, although the number of pumpjacks with stroke lengths 54 inches (1.4 m) and longer being used as water pumps is increasing. A short video recording of such a pump in action can be viewed on YouTube. [ 6 ]
Although the flow rate for a water well pumpjack is lower than that from a jet pump and the lifted water is not pressurised, the beam pumping unit has the option of hand pumping in an emergency, by hand-rotating the pumpjack cam to its lowest position, and attaching a manual handle to the top of the wellhead rod. In larger pumpjacks powered by engines, the engine can run off fuel stored in a reservoir or from natural gas delivered from the nearest gas grid . In some cases, this type of pump consumes less power than a jet pump and is, therefore, cheaper to run. | https://en.wikipedia.org/wiki/Pumpjack |
PumpLinx is a 3-D computational fluid dynamics (CFD) software developed for the analysis of fluid pumps, motors, compressors, valves, propellers, hydraulic systems, and other fluid devices with rotating or sliding components.
The software imports 3-D geometry from CAD data in the form of STL files. [ 1 ] It has a Conformal Adaptive Binary-tree mesh generation tool which creates a 3-D grid from CAD surfaces. For liquid devices, PumpLinx has a cavitation model to account for the effects of liquid vapor, free/dissolved gas, and liquid compressibility.
PumpLinx provides templates for different categories of devices, including: axial piston pumps , centrifugal pumps , gerotors , gear pumps , progressive cavity pumps , propellers , radial piston pumps , rotary vane pumps , submersible pumps , and valves .
These templates create an initial grid for special rotors (for example, grids around the gears of a gear pump), re-mesh the grid as components move during the simulation, and provide device-specific input and output. The output from the code includes velocities, pressures, temperatures, and gas volume fractions of the flow field, together with integrated engineering data such as loads and torques.
PumpLinx uses a single Graphical User Interface (GUI) for grid generation, model set-up, execution, and post processing.
The software is used primarily by component and system engineers in the automotive, [ 2 ] hydraulic, [ 3 ] and aerospace industry as a virtual test-bed to study efficiency, cavitation, pressure ripple, and noise for hydrodynamic pumps, [ 4 ] and fluid power equipment. | https://en.wikipedia.org/wiki/Pumplinx |
Pump–probe microscopy is a non-linear optical imaging modality used in femtochemistry to study chemical reactions . It generates high-contrast images from endogenous non-fluorescent targets. It has numerous applications, including materials science , medicine , and art restoration .
The classic method of nonlinear absorption used by microscopists is conventional two-photon fluorescence , in which two photons from a single source interact to excite an electron. The electron then emits a photon as it transitions back to its ground state. This microscopy method has been revolutionary in the biological sciences because of its inherent three-dimensional optical sectioning capability.
Two-photon absorption is inherently a nonlinear process : fluorescent output intensity is proportional to the square of the excitation light intensity. This ensures that fluorescence is only generated within the focus of a laser beam, as the intensity outside of this plane is insufficient for two-photon excitation. [ 1 ]
However, this microscope modality is inherently limited by the number of biological molecules that can undergo both two-photon excitation and fluorescence . [ 2 ]
Pump–probe microscopy circumvents this limitation by directly measuring excitation light. This expands the number of potential targets to any molecule capable of two-photon absorption, even if it does not fluoresce upon relaxation. [ 3 ] The method modulates the amplitude of a pulsed laser beam, referred to as the pump , to bring the target molecule to an excited state . This will then affect the properties of a second coherent beam, referred to as the probe , based on the interaction of the two beams with the molecule. These properties are then measured by a detector to form an image.
Because pump–probe microscopy does not rely on fluorescent targets, the modality takes advantage of multiple different types of multiphoton absorption.
Two-photon absorption (TPA) is a third-order process in which two photons are nearly simultaneously absorbed by the same molecule. If a second photon is absorbed by the same electron within the same quantum event, the electron enters an excited state . [ 4 ]
This is the same phenomenon used in two-photon microscopy (TPM), but there are two key features that distinguish pump–probe microscopy from TPM. First, since the molecule is not necessarily fluorescent, a photodetector measures the probe intensity. Therefore, the signal decreases as two-photon absorption occurs, the reverse of TPM. [ 3 ]
Second, pump–probe microscopy uses spectrally separated sources for each photon, whereas conventional TPM uses one source of a single wavelength. This is referred to as degenerate two-photon excitation. [ 3 ]
Excited-state absorption (ESA) occurs when the pump beam sends an electron into an excited state, after which the probe beam sends the electron into a higher excited state. This differs from TPA primarily in the timescale over which it occurs: because an electron can remain in an excited state for a period of nanoseconds, ESA tolerates longer pulse durations and delays than TPA. [ 5 ]
Pump–probe microscopy can also measure stimulated emission . In this case, the pump beam drives the electron to an excited state. Then the electron emits a photon when exposed to the probe beam. This interaction increases the probe signal at the detector site.
Ground-state depletion occurs when the pump beam sends electrons into an excited state. Unlike in ESA, the probe beam cannot send an electron into a secondary excited state; instead, it sends remaining electrons from the ground state to the first excited state. Since the pump beam has decreased the number of electrons in the ground state, fewer probe photons are absorbed, and the probe signal increases at the detector site. [ 3 ]
Cross-phase modulation is caused by the Kerr effect , in which the refractive index of the specimen changes in the presence of a large electric field . [ 6 ] In this case, the pump beam modulates the phase of the probe, which can then be measured through interferometric techniques . In certain cases, referred to as cross-phase modulation spectral shifting , this phase change induces a change to the pump spectrum that can be detected with a spectral filter. [ 3 ]
Measuring nonlinear optical interactions requires a high level of instantaneous power and very precise timing. In order to achieve the high number of photons needed to generate these interactions while avoiding damage of delicate specimens, these microscopes require a modelocked laser . These lasers can achieve very high photon counts on the femtosecond timescale and maintain a low average power. Most systems use a Ti:Sapph gain medium due to the wide range of wavelengths that it can access. [ 3 ] [ 7 ]
Typically, the same source is used to generate the pump and the probe. An optical parametric oscillator (OPO) is used to convert the probe beam to the desired wavelength. The probe wavelength can be tuned over a large range for spectroscopic applications. [ 7 ]
However, for certain types of two-photon interactions, it is possible to use separate pulsed sources. [ 3 ] This is only possible with interactions such as excited-state absorption, in which the electrons remain in the excited state for several picoseconds. However, it is more common to use a single femtosecond source with two separate beam paths of different lengths to modulate timing between the pump and probe beams. [ 3 ] [ 7 ]
The pump beam amplitude is modulated using an acousto-optic or electro-optic modulator on the order of $10^{7}$ Hz. The pump and probe beams are then recombined using a dichroic beamsplitter and scanned using galvanometric mirrors for point-by-point image generation before being focused onto the sample. [ 3 ]
The signal generated by probe modulation is much smaller than the original pump beam, so the two are spectrally separated within the detection path using a dichroic mirror . The probe signal can be collected with many different types of photodetectors , typically a photodiode . Then, the modulated signal is amplified using a lock-in amplifier tuned to the pump modulation frequency. [ 3 ]
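The modulation-and-demodulation chain can be illustrated numerically. Below is a minimal lock-in detection sketch with invented example values (sample rate, modulation frequency, modulation depth, noise level); it shows how a modulation far weaker than the noise floor is recovered at the pump frequency, and is not a description of any particular instrument:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1e9                 # sample rate, Hz (example value)
f_mod = 1e7              # pump modulation frequency, ~10^7 Hz
t = np.arange(200_000) / fs

# Probe intensity: large DC level, a tiny pump-induced modulation,
# and broadband noise much stronger than the modulation itself.
depth = 1e-4
probe = 1.0 + depth * np.sin(2 * np.pi * f_mod * t)
probe += 0.01 * rng.standard_normal(t.size)

# Lock-in: mix with in-phase and quadrature references at f_mod,
# then low-pass (here, a mean over the whole record).
i_comp = 2 * np.mean(probe * np.sin(2 * np.pi * f_mod * t))
q_comp = 2 * np.mean(probe * np.cos(2 * np.pi * f_mod * t))
print(f"recovered depth ~ {np.hypot(i_comp, q_comp):.1e}")  # close to 1e-4
```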
Similar to hyperspectral data analysis, the pump–probe imaging data, known as a delay stack, has to be processed to obtain an image with molecular contrast of the underlying molecular species. [ 3 ] Processing pump–probe data is challenging for several reasons – for example, the signals are bipolar (positive and negative), multi-exponential, and can be significantly altered by subtle changes in the chemical environment. [ 8 ] [ 9 ] The main methods for analysis of pump–probe data are multi-exponential fitting, principal component analysis , and phasor analysis . [ 3 ] [ 7 ]
In multi-exponential fitting, the time-resolved curves are fitted with an exponential decay model to determine the decay constants. While this method is straightforward, it has low accuracy. [ 7 ]
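For concreteness, the fit can be carried out with a generic nonlinear least-squares routine. This sketch fits a bipolar bi-exponential model to a synthetic transient; the amplitudes, lifetimes, and noise level are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Bipolar bi-exponential decay model for a pump-probe transient."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 10, 200)        # probe delay axis (e.g. picoseconds)
true = (1.0, 0.5, -0.4, 3.0)       # note the negative (bipolar) component
noise = 0.02 * np.random.default_rng(1).standard_normal(t.size)
data = biexp(t, *true) + noise

popt, _ = curve_fit(biexp, t, data, p0=(1.0, 1.0, -1.0, 5.0))
print("fitted (a1, tau1, a2, tau2):", np.round(popt, 2))
```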
Principal component analysis (PCA) was one of the earliest methods used for pump–probe data analysis, as it is commonly used for hyperspectral data analysis. PCA decomposes the data into orthogonal components. In melanoma studies, the principal components have shown good agreement with the signals obtained from the different forms of melanin . [ 10 ] An advantage of PCA is that noise can be reduced by keeping only the principal components that account for the majority of the variance in the data. However, the principal components do not necessarily reflect actual properties of the underlying chemical species, which are typically non-orthogonal. [ 3 ] Therefore, a limitation is that the number of unique chemical species cannot be inferred using PCA. [ 3 ]
Phasor analysis is commonly used for fluorescence-lifetime imaging microscopy (FLIM) data analysis [ 11 ] and has been adapted for pump–probe imaging data analysis. [ 8 ] Signals are decomposed into the real and imaginary parts of their Fourier transform at a given frequency. By plotting the real and imaginary parts against one another, the distribution of different chromophores with distinct lifetimes can be mapped. [ 3 ] [ 7 ] In melanoma studies, this approach has again been shown to be able to distinguish between the different forms of melanin. [ 8 ] One of the main advantages of phasor analysis is that it provides an intuitive, qualitative, graphical view of the content. [ 7 ] It has also been combined with PCA for quantitative analysis. [ 12 ]
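A minimal version of the phasor mapping, assuming the delay stack has already been arranged as a (pixels × delays) array; the single-exponential test traces are synthetic:

```python
import numpy as np

def phasor(delay_stack, harmonic=1):
    """Map a (pixels, delays) stack to phasor coordinates (g, s):
    the normalized real and imaginary Fourier components of each
    pixel's delay trace at the chosen harmonic."""
    n = delay_stack.shape[-1]
    w = 2 * np.pi * harmonic * np.arange(n) / n
    total = delay_stack.sum(axis=-1)
    g = (delay_stack * np.cos(w)).sum(axis=-1) / total
    s = (delay_stack * np.sin(w)).sum(axis=-1) / total
    return g, s

# Two decays with different lifetimes land at distinct phasor points.
t = np.arange(64)
stack = np.vstack([np.exp(-t / 5.0), np.exp(-t / 30.0)])
print(np.round(np.column_stack(phasor(stack)), 3))
```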
The development of high-speed and high-sensitivity pump–probe imaging techniques has enabled applications in several fields, such as materials science, biology, and art. [ 3 ] [ 7 ]
Pump–probe imaging is ideal for the study and characterization of nanomaterials, such as graphene, [ 13 ] nanocubes, [ 14 ] nanowires, [ 15 ] and a variety of semiconductors, [ 16 ] [ 17 ] due to their large susceptibilities but weak fluorescence. In particular, single-walled carbon nanotubes have been extensively studied and imaged with submicrometer resolution, [ 18 ] providing details about carrier dynamics, photophysical, and photochemical properties. [ 19 ] [ 20 ] [ 21 ]
The first application of the pump–probe technique in biology was in vitro imaging of stimulated emission of a dye-labelled cell. [ 22 ] Pump–probe imaging is now widely used for melanin imaging to differentiate between the two main forms of melanin – eumelanin (brown/black) and pheomelanin (red/yellow). [ 23 ] In melanoma, eumelanin is substantially increased. Therefore, imaging the distribution of eumelanin and pheomelanin can help to distinguish benign lesions from melanoma with high sensitivity. [ 24 ]
Artwork consists of many pigments with a wide range of spectral absorption properties, which determine their color. Due to the broad spectral features of these pigments, the identification of a specific pigment in a mixture is difficult. Pump–probe imaging can provide accurate, high-resolution, molecular information [ 25 ] and distinguish between pigments that may even have the same visual color. [ 26 ] | https://en.wikipedia.org/wiki/Pump–probe_microscopy |
A punched card (also punch card [ 1 ] or punched-card [ 2 ] ) is a stiff paper-based medium used to store digital information via the presence or absence of holes in predefined positions. Developed over the 18th to 20th centuries, punched cards were widely used for data processing , the control of automated machines , and computing . Early applications included controlling weaving looms and recording census data.
Punched cards were widely used in the 20th century, where unit record machines , organized into data processing systems , used punched cards for data input , data output, and data storage . [ 3 ] [ 4 ] The IBM 12-row/80-column punched card format came to dominate the industry. Many early digital computers used punched cards as the primary medium for input of both computer programs and data . Punched cards were used for decades before being replaced by magnetic storage and terminals. Their influence persists in cultural references, standardized data layouts, and computing conventions such as 80-character line widths.
Data can be entered onto a punched card using a keypunch .
While punched cards are now obsolete as a storage medium , as of 2012, some voting machines still used punched cards to record votes. [ 5 ] Punched cards also had a significant cultural impact in the 20th century.
The idea of control and data storage via punched holes was developed independently on several occasions in the modern period. In most cases there is no evidence that each of the inventors was aware of the earlier work.
Basile Bouchon developed the control of a loom by punched holes in paper tape in 1725. The design was improved by his assistant Jean-Baptiste Falcon and by Jacques Vaucanson . [ 6 ] Although these improvements controlled the patterns woven, they still required an assistant to operate the mechanism.
In 1804 Joseph Marie Jacquard demonstrated a mechanism to automate loom operation. A number of punched cards were linked into a chain of any length. Each card held the instructions for shedding (raising and lowering the warp ) and selecting the shuttle for a single pass. [ 7 ]
Semyon Korsakov was reputedly the first to propose punched cards in informatics for information store and search. Korsakov announced his new method and machines in September 1832. [ 8 ]
Charles Babbage proposed the use of "Number Cards", "pierced with certain holes and stand[ing] opposite levers connected with a set of figure wheels ... advanced they push in those levers opposite to which there are no holes on the cards and thus transfer that number together with its sign" in his description of the Calculating Engine's Store. [ 9 ] There is no evidence that he built a practical example.
In 1881, Jules Carpentier developed a method of recording and playing back performances on a harmonium using punched cards. The system was called the Mélographe Répétiteur and "writes down ordinary music played on the keyboard dans le langage de Jacquard", [ 10 ] that is as holes punched in a series of cards. By 1887 Carpentier had separated the mechanism into the Melograph which recorded the player's key presses and the Melotrope which played the music. [ 11 ] [ 12 ]
At the end of the 1800s Herman Hollerith created a method for recording data on a medium that could then be read by a machine, [ 13 ] [ 14 ] [ 15 ] [ 16 ] developing punched card data processing technology for the 1890 U.S. census . [ 17 ] This was inspired in part by Jacquard loom weaving technology and by railway punch photographs. [ 18 ] Punch photographs were quick ways for conductors to mark a ticket with a description of the ticket buyer (e.g., short or tall, dark or light hair). [ 18 ] They were used to reduce ticket fraud, as conductors could "read" the punched holes to get a basic description of the person to whom the ticket was sold. [ 18 ]
Hollerith's tabulating machines read and summarized data stored on punched cards and they began use for government and commercial data processing. Initially, these electromechanical machines only counted holes, but by the 1920s they had units for carrying out basic arithmetic operations. [ 19 ] : 124
Hollerith founded the Tabulating Machine Company (1896) which was one of four companies that were amalgamated via stock acquisition to form a fifth company, Computing-Tabulating-Recording Company (CTR) in 1911, later renamed International Business Machines Corporation (IBM) in 1924. Other companies entering the punched card business included The Tabulator Limited (Britain, 1902), Deutsche Hollerith-Maschinen Gesellschaft mbH (Dehomag) (Germany, 1911), Powers Accounting Machine Company (US, 1911), Remington Rand (US, 1927), and H.W. Egli Bull (France, 1931). [ 20 ] These companies, and others, manufactured and marketed a variety of punched cards and unit record machines for creating, sorting, and tabulating punched cards, even after the development of electronic computers in the 1950s.
Both IBM and Remington Rand tied punched card purchases to machine leases, a violation of the US 1914 Clayton Antitrust Act . In 1932, the US government took both to court on this issue. Remington Rand settled quickly. IBM viewed its business as providing a service and that the cards were part of the machine. IBM fought all the way to the Supreme Court and lost in 1936; the court ruled that IBM could only set card specifications. [ 21 ] [ 22 ] : 300–301
"By 1937... IBM had 32 presses at work in Endicott, N.Y., printing, cutting and stacking five to 10 million punched cards every day." [ 23 ] Punched cards were even used as legal documents, such as U.S. Government checks [ 24 ] and savings bonds. [ 25 ]
During World War II punched card equipment was used by the Allies in some of their efforts to decrypt Axis communications. See, for example, Central Bureau in Australia. At Bletchley Park in England, "some 2 million punched cards a week were being produced, indicating the sheer scale of this part of the operation". [ 26 ] In Nazi Germany, punched cards were used for the censuses of various regions and other purposes [ 27 ] [ 28 ] (see IBM and the Holocaust ).
Punched card technology developed into a powerful tool for business data-processing. By 1950 punched cards had become ubiquitous in industry and government. "Do not fold, spindle or mutilate," a warning that appeared on some punched cards distributed as documents such as checks and utility bills to be returned for processing, became a motto for the post- World War II era. [ 29 ] [ 30 ]
In 1956 [ 31 ] IBM signed a consent decree requiring, amongst other things, that IBM would by 1962 have no more than one-half of the punched card manufacturing capacity in the United States. Tom Watson Jr.'s decision to sign this decree, where IBM saw the punched card provisions as the most significant point, completed the transfer of power to him from Thomas Watson Sr . [ 22 ]
The Univac UNITYPER introduced magnetic tape for data entry in the 1950s. During the 1960s, the punched card was gradually replaced as the primary means for data storage by magnetic tape , as better, more capable computers became available. Mohawk Data Sciences introduced a magnetic tape encoder in 1965, a system marketed as a keypunch replacement which was somewhat successful. Punched cards were still commonly used for entering both data and computer programs until the mid-1980s when the combination of lower cost magnetic disk storage , and affordable interactive terminals on less expensive minicomputers made punched cards obsolete for these roles as well. [ 32 ] : 151 However, their influence lives on through many standard conventions and file formats. The terminals that replaced the punched cards, the IBM 3270 for example, displayed 80 columns of text in text mode , for compatibility with existing software. Some programs still operate on the convention of 80 text columns, although fewer and fewer do as newer systems employ graphical user interfaces with variable-width type fonts.
The terms punched card , punch card , and punchcard were all commonly used, as were IBM card and Hollerith card (after Herman Hollerith ). [ 1 ] IBM used "IBM card" or, later, "punched card" at first mention in its documentation and thereafter simply "card" or "cards". [ 34 ] [ 35 ] Specific formats were often indicated by the number of character positions available, e.g. 80-column card . A sequence of cards that is input to or output from some step in an application's processing is called a card deck or simply deck . The rectangular, round, or oval bits of paper punched out were called chad ( chads ) or chips (in IBM usage). Sequential card columns allocated for a specific use, such as names, addresses, multi-digit numbers, etc., are known as a field . The first card of a group of cards, containing fixed or indicative information for that group, is known as a master card . Cards that are not master cards are detail cards .
The Hollerith punched cards used for the 1890 U.S. census were blank. [ 36 ] Following that, cards commonly had printing such that the row and column position of a hole could be easily seen. Printing could include having fields named and marked by vertical lines, logos, and more. [ 37 ] "General purpose" layouts (see, for example, the IBM 5081 below) were also available. For applications requiring master cards to be separated from following detail cards, the respective cards had different upper corner diagonal cuts and thus could be separated by a sorter. [ 38 ] Other cards typically had one upper corner diagonal cut so that cards not oriented correctly, or cards with different corner cuts, could be identified.
Herman Hollerith was awarded three patents [ 40 ] in 1889 for electromechanical tabulating machines . These patents described both paper tape and rectangular cards as possible recording media. The card shown in U.S. patent 395,781 of January 8 was printed with a template and had hole positions arranged close to the edges so they could be reached by a railroad conductor 's ticket punch , with the center reserved for written descriptions. Hollerith was originally inspired by railroad tickets that let the conductor encode a rough description of the passenger:
I was traveling in the West and I had a ticket with what I think was called a punch photograph...the conductor...punched out a description of the individual, as light hair, dark eyes, large nose, etc. So you see, I only made a punch photograph of each person. [ 19 ] : 15
When use of the ticket punch proved tiring and error-prone, Hollerith developed the pantograph "keyboard punch". It featured an enlarged diagram of the card, indicating the positions of the holes to be punched. A printed reading board could be placed under a card that was to be read manually. [ 36 ] : 43
Hollerith envisioned a number of card sizes. In an article he wrote describing his proposed system for tabulating the 1890 U.S. census , Hollerith suggested a card 3 by 5½ inches (7.6 by 14.0 cm) of Manila stock "would be sufficient to answer all ordinary purposes." [ 41 ] The cards used in the 1890 census had round holes, 12 rows and 24 columns. A reading board for these cards can be seen at the Columbia University Computing History site. [ 42 ] At some point, 3¼ by 7⅜ inches (83 by 187 mm) became the standard card size. These are the dimensions of the then-current paper currency of 1862–1923. [ 43 ] This size was needed in order to use available banking-type storage for the 60,000,000 punched cards to come nationwide. [ 42 ]
Hollerith's original system used an ad hoc coding system for each application, with groups of holes assigned specific meanings, e.g. sex or marital status. His tabulating machine had up to 40 counters, each with a dial divided into 100 divisions, with two indicator hands; one which stepped one unit with each counting pulse, the other which advanced one unit every time the other dial made a complete revolution. This arrangement allowed a count up to 9,999. During a given tabulating run counters were assigned specific holes or, using relay logic , combination of holes. [ 41 ]
Later designs led to a card with ten rows, each row assigned a digit value, 0 through 9, and 45 columns. [ 44 ] This card provided for fields to record multi-digit numbers that tabulators could sum, instead of their simply counting cards. Hollerith's 45 column punched cards are illustrated in Comrie 's The application of the Hollerith Tabulating Machine to Brown's Tables of the Moon . [ 45 ]
By the late 1920s, customers wanted to store more data on each punched card. Thomas J. Watson Sr. , IBM's head, asked two of his top inventors, Clair D. Lake and J. Royden Pierce , to independently develop ways to increase data capacity without increasing the size of the punched card. Pierce wanted to keep round holes and 45 columns but to allow each column to store more data; Lake suggested rectangular holes, which could be spaced more tightly, allowing 80 columns per punched card, thereby nearly doubling the capacity of the older format. [ 46 ] Watson picked the latter solution, introduced as The IBM Card , in part because it was compatible with existing tabulator designs and in part because it could be protected by patents and give the company a distinctive advantage. [ 47 ]
This IBM card format, introduced in 1928, [ 48 ] has rectangular holes, 80 columns, and 10 rows. [ 49 ] Card size is 7⅜ by 3¼ inches (187 by 83 mm). The cards are made of smooth stock, 0.007 inches (180 μm) thick. There are about 143 cards to the inch (56/cm). In 1964, IBM changed from square to round corners. [ 50 ] They typically came in boxes of 2,000 cards [ 51 ] or as continuous form cards. Continuous form cards could be both pre-numbered and pre-punched for document control (checks, for example). [ 52 ]
Initially designed to record responses to yes–no questions , support for numeric, alphabetic and special characters was added through the use of columns and zones. The top three positions of a column are called zone punching positions , 12 (top), 11, and 0 (0 may be either a zone punch or a digit punch). [ 53 ] For decimal data the lower ten positions are called digit punching positions , 0 (top) through 9. [ 53 ] An arithmetic sign can be specified for a decimal field by overpunching the field's rightmost column with a zone punch: 12 for plus, 11 for minus (CR). For Pound sterling pre-decimalization currency a penny column represents the values zero through eleven; 10 (top), 11, then 0 through 9 as above. An arithmetic sign can be punched in the adjacent shilling column. [ 54 ] : 9 Zone punches had other uses in processing, such as indicating a master card. [ 55 ]
Diagram: [ 56 ]
Note: The 11 and 12 zones were also called the X and Y zones, respectively.
In 1931, IBM began introducing upper-case letters and special characters (Powers-Samas had developed the first commercial alphabetic punched card representation in 1921). [ 57 ] [ 58 ] [ nb 1 ] The 26 letters have two punches (zone [12,11,0] + digit [1–9]). The languages of Germany, Sweden, Denmark, Norway, Spain, Portugal and Finland require up to three additional letters; their punching is not shown here. [ 59 ] : 88–90 Most special characters have two or three punches (zone [12,11,0, or none] + digit [2–7] + 8); a few special characters were exceptions: "&" is 12 only, "-" is 11 only, and "/" is 0 + 1). The Space character has no punches. [ 59 ] : 38 The information represented in a column by a combination of zones [12, 11, 0] and digits [0–9] is dependent on the use of that column. For example, the combination "12-1" is the letter "A" in an alphabetic column, a plus signed digit "1" in a signed numeric column, or an unsigned digit "1" in a column where the "12" has some other use. The introduction of EBCDIC in 1964 defined columns with as many as six punches (zones [12,11,0,8,9] + digit [1–7]). IBM and other manufacturers used many different 80-column card character encodings . [ 60 ] [ 61 ] A 1969 American National Standard defined the punches for 128 characters and was named the Hollerith Punched Card Code (often referred to simply as Hollerith Card Code ), honoring Hollerith. [ 59 ] : 7
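The zone-plus-digit combinations listed above translate directly into a lookup routine. This sketch covers only the subset spelled out in the text (digits, the basic letters, "&", "-", "/", and space), not a full 128-character Hollerith code; rows are named 12 and 11 for the zones and 0–9 for the digits:

```python
def punches(ch):
    """Return the punched rows for one character (subset only)."""
    if ch == " ":
        return []                              # space: no punches
    if ch.isdigit():
        return [int(ch)]                       # digits: one punch in rows 0-9
    if ch == "&":
        return [12]                            # 12 only
    if ch == "-":
        return [11]                            # 11 only
    if ch == "/":
        return [0, 1]                          # 0 + 1
    if "A" <= ch <= "I":
        return [12, ord(ch) - ord("A") + 1]    # zone 12 + digits 1-9
    if "J" <= ch <= "R":
        return [11, ord(ch) - ord("J") + 1]    # zone 11 + digits 1-9
    if "S" <= ch <= "Z":
        return [0, ord(ch) - ord("S") + 2]     # zone 0 + digits 2-9
    raise ValueError(f"not in this subset: {ch!r}")

for ch in "IBM 360":
    print(ch, punches(ch))   # e.g. I -> [12, 9], B -> [12, 2], M -> [11, 4]
```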
For some computer applications, binary formats were used, where each hole represented a single binary digit (or " bit "), every column (or row) is treated as a simple bit field , and every combination of holes is permitted.
For example, on the IBM 701 [ 62 ] and IBM 704 , [ 63 ] card data was read, using an IBM 711 , into memory in row binary format. For each of the twelve rows of the card, 72 of the 80 columns, skipping the other eight, would be read into two 36-bit words, requiring 864 bits to store the whole card; a control panel was used to select the 72 columns to be read. Software would translate this data into the desired form. One convention was to use columns 1 through 72 for data, and columns 73 through 80 to sequentially number the cards, as shown in the picture above of a punched card for FORTRAN. Such numbered cards could be sorted by machine so that if a deck was dropped the sorting machine could be used to arrange it back in order. This convention continued to be used in FORTRAN, even in later systems where the data in all 80 columns could be read.
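A sketch of this row-binary packing: the card is modeled as a 12×80 matrix of hole flags, and each row contributes two 36-bit words from 72 selected columns (columns 1–72 by default, mirroring the common control-panel wiring mentioned above). The bit ordering within a word is an illustrative choice, not the 711's register layout:

```python
def row_binary(card, columns=range(72)):
    """Pack a card into 24 36-bit words, row by row.

    card: 12 lists of 80 ints (1 = hole), rows top to bottom;
    the 8 unselected columns are skipped, as on the 704/711.
    """
    words = []
    for row in card:
        bits = [row[c] for c in columns]
        for half in (bits[:36], bits[36:]):
            word = 0
            for b in half:
                word = (word << 1) | b     # leftmost column = high bit
            words.append(word)
    return words                           # 24 integers, each < 2**36

blank = [[0] * 80 for _ in range(12)]
blank[0][0] = 1                            # one hole: top row, column 1
assert row_binary(blank)[0] == 1 << 35
```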
The IBM card readers 3504, 3505 and the multifunction unit 3525 used a different encoding scheme for column binary data, also known as card image , where each column, split into two rows of 6 (12–3 and 4–9) was encoded into two 8-bit bytes, holes in each group represented by bits 2 to 7 (MSb numbering , bit 0 and 1 unused ) in successive bytes. This required 160 8-bit bytes, or 1280 bits, to store the whole card. [ 64 ]
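In the same model, the column-binary card image described here packs each column's 12–3 and 4–9 halves into the low six bits of two successive bytes (bits 2–7 in the MSb-0 numbering above, with bits 0 and 1 left zero). The row-to-bit ordering within each half is an assumption for the sketch:

```python
def card_image(card):
    """Pack a 12x80 card into 160 bytes, two per column.

    card: 12 lists of 80 ints, rows top to bottom (12, 11, 0, 1, ..., 9).
    Each 6-row half occupies bits 2-7 (MSb-0), i.e. the low six bits.
    """
    out = bytearray()
    for col in range(80):
        for rows in (range(0, 6), range(6, 12)):   # rows 12-3, then 4-9
            byte = 0
            for r in rows:
                byte = (byte << 1) | card[r][col]
            out.append(byte)                       # fits in six bits
    return bytes(out)                              # 160 bytes = 1280 bits

blank = [[0] * 80 for _ in range(12)]
blank[0][0] = 1                                    # row 12, column 1
assert card_image(blank)[0] == 0b100000
```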
As an aid to humans who had to deal with the punched cards, the IBM 026 and later 029 and 129 key punch machines could print human-readable text above each of the 80 columns.
As a prank, punched cards could be made where every possible punch position had a hole. Such " lace cards " lacked structural strength, and would frequently buckle and jam inside the machine. [ 65 ]
The IBM 80-column punched card format dominated the industry, becoming known as just IBM cards , even though other companies made cards and equipment to process them. [ 66 ]
One of the most common punched card formats is the IBM 5081 card format, a general purpose layout with no field divisions. This format has digits printed on it corresponding to the punch positions of the digits in each of the 80 columns. Other punched card vendors manufactured cards with this same layout and number.
Long cards were available with a scored stub on either end which, when torn off, left an 80 column card. The torn off card is called a stub card .
80-column cards were available scored, on either end, creating both a short card and a stub card when torn apart. Short cards can be processed by other IBM machines. [ 52 ] [ 67 ] A common length for stub cards was 51 columns. Stub cards were used in applications requiring tags, labels, or carbon copies. [ 52 ]
According to the IBM Archive: IBM's Supplies Division introduced the Port-A-Punch in 1958 as a fast, accurate means of manually punching holes in specially scored IBM punched cards. Designed to fit in the pocket, Port-A-Punch made it possible to create punched card documents anywhere. The product was intended for "on-the-spot" recording operations—such as physical inventories, job tickets and statistical surveys—because it eliminated the need for preliminary writing or typing of source documents. [ 68 ]
In 1969 IBM introduced a new, smaller, round-hole, 96-column card format along with the IBM System/3 low-end business computer. These cards have tiny, 1 mm diameter circular holes, smaller than those in paper tape . Data is stored in 6-bit BCD , with three rows of 32 characters each, or 8-bit EBCDIC . In this format, each column of the top tier is combined with two punch rows from the bottom tier to form an 8-bit byte, and the middle tier is combined with two more punch rows, so that each card contains 64 bytes of 8-bit-per-byte binary coded data. [ 69 ] As in the 80 column card, readable text was printed in the top section of the card. There was also a fourth row of 32 characters that could be printed. This format was never widely used; it was IBM-only and was not supported on any equipment beyond the System/3, where it was quickly superseded by the 1973 IBM 3740 Data Entry System using 8-inch floppy disks .
The format was however recycled in 1978 when IBM re-used the mechanism in its IBM 3624 ATMs as print-only receipt printers.
The Powers/Remington Rand card format was initially the same as Hollerith's; 45 columns and round holes. In 1930, Remington Rand leap-frogged IBM's 80 column format from 1928 by coding two characters in each of the 45 columns – producing what is now commonly called the 90-column card. [ 32 ] : 142 There are two sets of six rows across each card. The rows in each set are labeled 0, 1/2, 3/4, 5/6, 7/8 and 9. The even numbers in a pair are formed by combining that punch with a 9 punch. Alphabetic and special characters use 3 or more punches. [ 70 ] [ 71 ]
The British Powers-Samas company used a variety of card formats for their unit record equipment . They began with 45 columns and round holes. Later 36, 40 and 65 column cards were provided. A 130 column card was also available – formed by dividing the card into two rows, each row with 65 columns and each character space with 5 punch positions. A 21 column card was comparable to the IBM Stub card. [ 54 ] : 47–51
Mark sense ( electrographic ) cards, developed by Reynold B. Johnson at IBM, [ 72 ] have printed ovals that could be marked with a special electrographic pencil. Cards would typically be punched with some initial information, such as the name and location of an inventory item. Information to be added, such as quantity of the item on hand, would be marked in the ovals. Card punches with an option to detect mark sense cards could then punch the corresponding information into the card.
Aperture cards have a cut-out hole on the right side of the punched card. A piece of 35 mm microfilm containing a microform image is mounted in the hole. Aperture cards are used for engineering drawings from all engineering disciplines. Information about the drawing, for example the drawing number, is typically punched and printed on the remainder of the card.
IBM's Fred M. Carroll [ 73 ] developed a series of rotary presses that were used to produce punched cards, including a 1921 model that operated at 460 cards per minute (cpm). In 1936 he introduced a completely different press that operated at 850 cpm. [ 23 ] [ 74 ] Carroll's high-speed press, containing a printing cylinder, revolutionized the company's manufacturing of punched cards. [ 75 ] It is estimated that between 1930 and 1950, the Carroll press accounted for as much as 25 percent of the company's profits. [ 22 ]
Discarded printing plates from these card presses, each printing plate the size of an IBM card and formed into a cylinder, often found use as desk pen/pencil holders, and even today are collectible IBM artifacts (every card layout [ 76 ] had its own printing plate).
In the mid-1930s a box of 1,000 cards cost $1.05 (equivalent to $24 in 2024). [ 77 ]
While punched cards have not been widely used for generations, the impact was so great for most of the 20th century that they still appear from time to time in popular culture. For example:
metaphor... symbol of the "system"—first the registration system and then bureaucratic systems more generally ... a symbol of alienation ... Punched cards were the symbol of information machines, and so they became the symbolic point of attack. Punched cards, used for class registration, were first and foremost a symbol of uniformity. .... A student might feel "he is one of out of 27,500 IBM cards" ... The president of the Undergraduate Association criticized the University as "a machine ... IBM pattern of education."... Robert Blaumer explicated the symbolism: he referred to the "sense of impersonality... symbolized by the IBM technology."...
A common example of the requests often printed on punched cards which were to be individually handled, especially those intended for the public to use and return is "Do Not Fold, Spindle or Mutilate" (in the UK "Do not bend, spike, fold or mutilate"). [ 29 ] : 43–55 Coined by Charles A. Phillips, [ 87 ] it became a motto [ 88 ] for the post– World War II era (even though many people had no idea what spindle meant), and was widely mocked and satirized. Some 1960s students at Berkeley wore buttons saying: "Do not fold, spindle or mutilate. I am a student". [ 89 ] The motto was also used for a 1970 book by Doris Miles Disney [ 90 ] with a plot based around an early computer dating service and a 1971 made-for-TV movie based on that book, and a similarly titled 1967 Canadian short film, Do Not Fold, Staple, Spindle or Mutilate .
Processing of punched cards was handled by a variety of machines, including keypunches, verifiers, sorters, collators, reproducing punches, interpreters, and tabulating machines . | https://en.wikipedia.org/wiki/Punched_card |
A computer punched card reader or just computer card reader is a computer input device used to read computer programs in either source or executable form and data from punched cards . A computer card punch is a computer output device that punches holes in cards. Sometimes computer punch card readers were combined with computer card punches and, later, other devices to form multifunction machines.
Many early computers, such as the ENIAC , and the IBM NORC , provided for punched card input/output. [ 1 ] Card readers and punches, either connected to computers or in off-line card to/from magnetic tape configurations, were ubiquitous through the mid-1970s.
Punched cards had been in use since the 1890s; their technology was mature and reliable. Card readers and punches developed for punched card machines were readily adaptable for computer use. [ 2 ] Businesses were familiar with storing data on punched cards and keypunch machines were widely employed. Punched cards were a better fit than other 1950s technologies, such as magnetic tape , for some computer applications because individual cards could easily be updated without having to access a computer. Also file drawers of punched cards served as a low-density offline storage medium for data.
The standard measure of speed is cards per minute , abbreviated CPM: The number of cards which can be read or punched in one minute. Card reader models vary from 150 to around 2,000 CPM. [ 3 ] [ 4 ] At 1200 CPM, i.e. 20 cards per second, this translates to 1,600 characters per second (CPS), assuming all 80 columns of each card encode information.
Early computer card readers were based on electromechanical unit record equipment and used mechanical brushes that made an electrical contact through a hole and no contact where there was no hole. Later readers used photoelectric sensors to detect the presence or absence of a hole. Timing within each read cycle relates the resulting signals to the corresponding position on the card. Early readers read cards in parallel, row by row, following unit record practice (hence the orientation of the rectangular holes). Later, card readers that read cards serially, column by column, became more common.
Card punches necessarily run more slowly to allow for the mechanical action of punching, up to around 300 CPM or 400 characters per second. [ 5 ]
Some card devices offer the ability to interpret , or print a line on the card displaying the data that is punched. Typically this slows down the punch operation. Many punches would read the card just punched and compare its actual contents to the original data punched, to protect against punch errors. Some devices allowed data to be read from a card and additional information to be punched into the same card.
Readers and punches include a hopper for input cards and one or more stacker bins to collect cards read or punched. A function called stacker select allows the controlling computer to choose which stacker a card just read or punched will be placed into.
Documation Inc. , of Melbourne, Florida, made card readers for minicomputers in the 1970s.
Their card readers have been used in elections, [ 11 ] including the 2000 "chads" election in Florida . [ 12 ]
For some computer applications, binary formats were used, where each hole represented a single binary digit (or " bit "), every column (or row) is treated as a simple bitfield, and every combination of holes is permitted. For example, the IBM 711 card reader used with the 704/709/7090/7094 series scientific computers treated every row as two 36-bit words, ignoring 8 columns. (The specific 72 columns used were selectable using a plugboard control panel, which is almost always wired to select columns 1–72.) Sometimes the ignored columns (usually 73–80) were used to contain a sequence number for each card, so the card deck could be sorted to the correct order in case it was dropped.
An alternative format, used by the IBM 704 's IBM 714 native card reader, is referred to as Column Binary or Chinese Binary, and used 3 columns for each 36-bit word. [ 14 ] Later computers, such as the IBM 1130 or System/360 , used every column. The IBM 1401 's card reader could be used in Column Binary mode, which stored two characters in every column, or one 36-bit word in three columns when used as input device for other computers. However, most of the older card punches were not intended to punch more than 3 holes in a column. The multipunch key is used to produce binary cards, or other characters not on the keypunch keyboard. [ 15 ]
As a prank , in binary mode, cards could be punched where every possible punch position had a hole. Such " lace cards " lacked structural strength, and would frequently buckle and jam inside the machine. [ 16 ] | https://en.wikipedia.org/wiki/Punched_card_input/output |
Punched tape or perforated paper tape is a form of data storage that consists of a long strip of paper through which small holes are punched. It was developed from and was subsequently used alongside punched cards , the difference being that the tape is continuous.
Punched cards, and chains of punched cards, were used for control of looms in the 18th century. Use for telegraphy systems started in 1842. Punched tapes were used throughout the 19th and for much of the 20th centuries for programmable looms, teleprinter communication, for input to computers of the 1950s and 1960s, and later as a storage medium for minicomputers and CNC machine tools . During the Second World War, high-speed punched tape systems using optical readout methods were used in code breaking systems. Punched tape was used to transmit data for manufacture of read-only memory chips.
Perforated paper tapes were first used by Basile Bouchon in 1725 to control looms. However, the paper tapes were expensive to create, fragile, and difficult to repair. By 1801, Joseph Marie Jacquard had developed machines to create paper tapes by tying punched cards in a sequence for Jacquard looms . The resulting paper tape, also called a "chain of cards", was stronger and simpler both to create and to repair. This led to the concept of communicating data not as a stream of individual cards, but as one "continuous card" (or tape). Paper tapes constructed from punched cards were widely used throughout the 19th century for controlling looms. Many professional embroidery operations still refer to those individuals who create the designs and machine patterns as punchers even though punched cards and paper tape were eventually phased out in the 1990s.
In 1842, a French patent by Claude Seytre described a piano playing device that read data from perforated paper rolls . By 1900, wide perforated music rolls for player pianos were used to distribute popular music to mass markets.
In 1846, Alexander Bain used punched tape to send telegrams . This technology was adopted by Charles Wheatstone in 1857 for the Wheatstone system used for the automated preparation, storage and transmission of data in telegraphy. [ 1 ] [ 2 ]
In the 1880s, Tolbert Lanston invented the Monotype typesetting system , which consisted of a keyboard and a composition caster . The tape, punched with the keyboard, was later read by the caster, which produced lead type according to the combinations of holes in up to 31 positions. The tape reader used compressed air, which passed through the holes and was directed into certain mechanisms of the caster. The system went into commercial use in 1897 and was in production well into the 1970s, undergoing several changes along the way.
In the 21st century, punched tape is obsolete except among hobbyists . In computer numerical control (CNC) machining applications, though paper tape has been superseded by digital memory , some modern systems still measure the size of stored CNC programs in feet or meters, corresponding to the equivalent length if the data were actually punched on paper tape. [ 3 ]
Data was represented by the presence or absence of a hole at a particular location. Tapes originally had five rows of holes for data across the width of the tape. Later tapes had more rows. A 1944 electro-mechanical programmable calculating machine, the Automatic Sequence Controlled Calculator or Harvard Mark I , used paper tape with 24 rows. [ 4 ] The IBM Selective Sequence Electronic Calculator (SSEC) used paper tape with 74 rows. [ 5 ] Australia's 1951 electronic computer, CSIRAC , used 3-inch (76 mm) wide paper tape with twelve rows. [ 6 ]
A row of smaller sprocket holes was always punched to be used to synchronize tape movement. Originally, this was done using a wheel with radial teeth called a sprocket wheel . Later, optical readers made use of the sprocket holes to generate timing pulses. The sprocket holes were slightly closer to one edge of the tape, dividing the tape into unequal widths, to make it unambiguous which way to orient the tape in the reader. The bits on the narrower width of the tape were generally the least significant bits when the code was represented as numbers in a digital system. [ 7 ]
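To visualize the layout just described, a toy sketch that renders bytes as an 8-level tape pattern; placing the sprocket track between channels 3 and 4, and mapping bit 0 to channel 1, are illustrative assumptions:

```python
# Render bytes as an 8-level punched-tape pattern, one frame per line.
# 'o' marks a data hole, '.' no hole; '*' marks the smaller sprocket hole,
# which sits off-centre so the tape's orientation is unambiguous.
def punch(data: bytes) -> str:
    frames = []
    for byte in data:
        bits = [(byte >> i) & 1 for i in range(8)]       # bit 0 -> channel 1 (assumed)
        narrow = "".join("o" if b else "." for b in bits[:3])
        wide = "".join("o" if b else "." for b in bits[3:])
        frames.append(narrow + "*" + wide)
    return "\n".join(frames)

print(punch(b"HI"))
```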
Many early machines used oiled paper tape, which was pre-impregnated with a light machine oil , to lubricate the reader and punch mechanisms. The oil impregnation usually made the paper somewhat translucent and slippery, and excess oil could transfer to clothing or any surfaces it contacted. Later optical tape readers often specified non-oiled opaque paper tape, which was less prone to depositing oily debris on the optical sensors and causing read errors. Another innovation was fanfold paper tape, which was easier to store compactly and less prone to tangling, as compared to rolled paper tape.
For heavy-duty or repetitive use, polyester Mylar tape was often used. This tough, durable plastic film was usually thinner than paper tapes, but could still be used in many devices originally designed for paper media. The plastic tape was sometimes transparent, but usually was aluminized to make it opaque enough for use in high-speed optical readers.
Tape for punching was usually 0.00394 inches (0.100 mm) thick. The two most common widths were 11⁄16 inch (17 mm) for five-bit codes, and 1 inch (25 mm) for tapes with six or more bits. Hole spacing was 0.1 inches (2.5 mm) in both directions. Data holes were 0.072 inches (1.8 mm) in diameter; sprocket feed holes were 0.046 inches (1.2 mm). [ 8 ]
Most tape-punching equipment used solid circular punches to create holes in the tape. This process created " chad ", or small circular pieces of paper. Managing the disposal of chad was an annoying and complex problem, as the tiny paper pieces had a tendency to escape containment and to interfere with the other electromechanical parts of the teleprinter equipment. Chad from oiled paper tape was particularly problematic, as it tended to clump and build up, rather than flowing freely into a collection container.
A variation on the tape punch was a device called a Chadless Printing Reperforator . This machine would punch a received teleprinter signal into tape and print the message on it at the same time, using a printing mechanism similar to that of an ordinary page printer. The tape punch, rather than punching out the usual round holes, would instead punch little U-shaped cuts in the paper, so that no chad would be produced; the "hole" was still filled with a little paper trap-door. By not fully punching out the hole, the printing on the paper remained intact and legible. This enabled operators to read the tape without having to decipher the holes, which would facilitate relaying the message on to another station in the network. Also, there was no "chad box" to empty from time to time.
A disadvantage of this technology was that, once punched, chadless tape did not roll up well for storage, because the protruding flaps of paper would catch on the next layer of tape, so it could not be coiled up tightly. Another disadvantage that emerged in time was that there was no reliable way to read chadless tape in later high-speed readers, which used optical sensing. However, the mechanical tape readers used in most standard-speed equipment had no problem with chadless tape, because they sensed the holes by means of blunt spring-loaded mechanical sensing pins, which easily pushed the paper flaps out of the way.
Text was encoded in several ways. The earliest standard character encoding was Baudot , which dates back to the 19th century and had five holes. The Baudot code was superseded by modified five-hole codes such as the Murray code (which added carriage return and line feed ), which was developed into the Western Union code, which was further developed into the International Telegraph Alphabet No. 2 (ITA 2) and a variant called the American Teletypewriter code (USTTY). [ 9 ] Other standards, such as Teletypesetter (TTS), FIELDATA and Flexowriter , had six holes. In the early 1960s, the American Standards Association led a project to develop a universal code for data processing, which became the American Standard Code for Information Interchange (ASCII). This seven-level code was adopted by some teleprinter users, including AT&T ( Teletype ). Others, such as Telex , stayed with the earlier codes.
Punched tape was used as a way of storing messages for teletypewriters . Operators typed the message onto paper tape and then sent the message from the tape at the maximum line speed. This permitted the operator to prepare the message "off-line" at the operator's best typing speed, and permitted the operator to correct any error prior to transmission. An experienced operator could prepare a message at 135 words per minute (WPM) or more for short periods.
The line typically operated at 75 WPM, but it operated continuously. By preparing the tape "off-line" and then sending the message with a tape reader, the line could operate continuously rather than depending on continuous "on-line" typing by a single operator. Typically, a single 75 WPM line supported three or more teletype operators working offline. Tapes punched at the receiving end could be used to relay messages to another station. Large store and forward networks were developed using these techniques.
Paper tape could be read into computers at up to 1,000 characters per second. [ 10 ] In 1963, a Danish company called Regnecentralen introduced a paper tape reader called RC 2000 that could read 2,000 characters per second; later they increased the speed further, up to 2,500 cps. As early as World War II , the Heath Robinson tape reader , used by Allied codebreakers, was capable of 2,000 cps while Colossus could run at 5,000 cps using an optical tape reader designed by Arnold Lynch.
When the first minicomputers were being released, most manufacturers turned to the existing mass-produced ASCII teleprinters (primarily the Teletype Model 33 , capable of ten ASCII characters per second throughput) as a low-cost solution for keyboard input and printer output. The commonly specified Model 33 ASR included a paper tape punch/reader, where ASR stands for "Automatic Send/Receive" as opposed to the punchless/readerless KSR – Keyboard Send/Receive and RO – Receive Only models. As a side effect, punched tape became a popular medium for low-cost minicomputer data and program storage, and it was common to find a selection of tapes containing useful programs in most minicomputer installations. Faster optical readers were also common.
Binary data transfer to or from these minicomputers was often accomplished using a doubly encoded technique to compensate for the relatively high error rate of punches and readers. The low-level encoding was typically ASCII, further encoded and framed in various schemes such as Intel Hex , in which a binary value of "01011010" would be represented by the ASCII characters "5A". Framing, addressing and checksum (primarily in ASCII hex characters) information helped with error detection. Efficiencies of such an encoding scheme are on the order of 35–40% (e.g., 36% from 44 8-bit ASCII characters being needed to represent sixteen bytes of binary data per frame).
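As a concrete illustration of this framing, a minimal sketch that builds one Intel Hex data record for sixteen bytes, using the standard two's-complement checksum; error handling and the end-of-file record are omitted:

```python
# One Intel Hex data record: ":" + byte count + address + record type + data + checksum.
def ihex_record(address: int, data: bytes) -> str:
    body = bytes([len(data), address >> 8, address & 0xFF, 0x00]) + data
    checksum = (-sum(body)) & 0xFF               # two's complement of the byte sum
    return ":" + (body + bytes([checksum])).hex().upper()

payload = bytes(range(16))                       # sixteen bytes of binary data
record = ihex_record(0x0100, payload)
print(record)
print(len(record))   # 43 characters before the line ending, matching the ~44 in the text
print(format(0b01011010, "02X"))   # '5A' -- the doubled ASCII encoding of one byte
```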
In the 1970s, computer-aided manufacturing equipment often used paper tape. A paper tape reader was smaller and less expensive than Hollerith card or magnetic tape readers, and the medium was reasonably reliable in a manufacturing environment. Paper tape was an important storage medium for computer-controlled wire-wrap machines, for example.
Premium black waxed and lubricated long-fiber papers, and Mylar film tape were developed so that heavily used production tapes would last longer.
In the 1970s through the early 1980s, paper tape was commonly used to transfer binary data for incorporation in either mask-programmable read-only memory (ROM) chips or their erasable counterparts EPROMs . A significant variety of encoding formats were developed for use in computer and ROM/EPROM data transfer. [ 11 ] Encoding formats commonly used were primarily driven by those formats that EPROM programming devices supported and included various ASCII hex variants as well as a number of proprietary formats.
A more primitive and far more verbose high-level encoding scheme was also used: BNPF (Begin-Negative-Positive-Finish), [ 12 ] [ 13 ] also written as BPNF (Begin-Positive-Negative-Finish). [ 14 ] In BNPF encoding, a single byte (8 bits) was represented by a highly redundant framing sequence: a single uppercase ASCII "B", then eight ASCII characters in which a "0" was represented by "N" and a "1" by "P", followed by an ending ASCII "F". [ 12 ] [ 14 ] [ 13 ] These ten-character ASCII sequences were separated by one or more whitespace characters , so at least eleven ASCII characters were used for each byte stored (9% efficiency). The ASCII "N" and "P" characters differ in four bit positions, providing excellent protection against single punch errors. Alternative schemes named BHLF (Begin-High-Low-Finish) and B10F (Begin-One-Zero-Finish) used "L" and "H" or "0" and "1" respectively to represent the data bits, [ 15 ] but in both of these schemes the two data-bearing ASCII characters differ in only one bit position, providing very poor single-punch error detection.
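A sketch of the framing as described (transmitting the most significant bit first is an assumption here):

```python
# BNPF framing: each byte becomes "B" + eight N/P characters + "F"; with the
# separating space that is 11 ASCII characters per byte stored (about 9%).
def bnpf_encode(data: bytes) -> str:
    return " ".join(
        "B" + "".join("P" if (b >> i) & 1 else "N" for i in range(7, -1, -1)) + "F"
        for b in data
    )

def bnpf_decode(text: str) -> bytes:
    return bytes(
        int(frame[1:9].replace("N", "0").replace("P", "1"), 2)
        for frame in text.split()
    )

encoded = bnpf_encode(b"\x5a")
print(encoded)                       # BNPNPPNPNF  (0x5A = 01011010)
assert bnpf_decode(encoded) == b"\x5a"
print(bin(0x4E ^ 0x50).count("1"))   # 'N' and 'P' differ in 4 bit positions
```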
NCR of Dayton, Ohio , made cash registers around 1970 that would punch paper tape. Sweda made similar cash registers around the same time. The tape could then be read into a computer: not only could sales information be summarized, but billing could also be performed for charge transactions. The tape was also used for inventory tracking, recording the department and class numbers of items sold.
Punched paper tape was used by the newspaper industry until the mid-1970s or later. Newspapers were typically set in hot lead by devices like Linotype machines . With the wire services coming into a device that punched paper tape, the tape could be put into a paper tape reader on the Linotype, which would create the lead slugs without the operator having to re-type the incoming stories. This also allowed newspapers to use devices such as the Friden Flexowriter to convert typing to lead type via tape. Even after the demise of Linotype and hot lead typesetting, many early phototypesetter devices utilized paper tape readers.
If an error was found at one position on the six-level tape, that character could be turned into a null character, to be skipped, by punching out the remaining non-punched positions with what was known as a "chicken plucker". It looked like a strawberry stem remover that, pressed with thumb and forefinger, could punch out the remaining positions, one hole at a time.
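The trick works because holes cannot be un-punched: ORing a frame with all ones always yields the all-holes code, which readers skip. A tiny sketch of the idea for six-level tape:

```python
# Over-punching every remaining position turns any character into the
# all-holes "delete" code, which tape readers were built to skip.
LEVELS = 6
ALL_HOLES = (1 << LEVELS) - 1          # 0b111111

def rub_out(frame: int) -> int:
    return frame | ALL_HOLES           # punching adds holes; none can be removed

tape = [0b101001, 0b010110, 0b111000]
tape[1] = rub_out(tape[1])             # "pluck" the erroneous character
print([bin(f) for f in tape if f != ALL_HOLES])   # reader keeps only valid frames
```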
Vernam ciphers were invented in 1917 to encrypt teleprinter communications using a key stored on paper tape. During the last third of the 20th century, the National Security Agency (NSA) used punched paper tape to distribute cryptographic keys . The eight-level paper tapes were distributed under strict accounting controls and read by a fill device , such as the hand held KOI-18 , that was temporarily connected to each security device that needed new keys. NSA has been trying to replace this method with a more secure electronic key management system ( EKMS ), but as of 2016, paper tape was apparently still being employed. [ 16 ] The paper tape canister is a tamper-resistant container that contains features to prevent undetected alteration of the contents.
Acid-free paper or Mylar tapes can be read many decades after manufacture, in contrast with magnetic tape that can deteriorate and become unreadable with time. The hole patterns of punched tape can be decoded by eye if necessary, and even editing of a tape is possible by manual cutting and splicing. Unlike magnetic tape, magnetic fields such as produced by electric motors cannot alter the punched data. [ 17 ] In cryptography applications, a punched tape used to distribute a key can be rapidly and completely destroyed by burning, preventing the key from falling into the hands of an enemy.
Reliability of paper tape punching operations was a concern, so that for critical applications a new punched tape could be read after punching to verify the correct contents. Rewinding a tape required a takeup reel or other measures to avoid tearing or tangling the tape. [ citation needed ] In some uses, "fan fold" tape simplified handling as the tape would refold into a "takeup tank" ready to be re-read. The information density of punched tape was low compared with magnetic tape, making large datasets clumsy to handle in punched tape form. | https://en.wikipedia.org/wiki/Punched_tape |
Punctuality is the characteristic of completing a required task or fulfilling an obligation before or at a previously designated time, based on job requirements and/or daily operations. [ 1 ] " Punctual " is often used synonymously with " on time ".
An opposite characteristic is tardiness .
Each culture tends to have its own understanding about what is considered an acceptable degree of punctuality. [ 2 ] Typically, a small amount of lateness is acceptable (commonly about five to ten minutes in most Western cultures), but this is context-dependent; for example, it might not be the case for doctors' appointments. [ 3 ]
Some cultures have an unspoken understanding that actual deadlines differ from stated deadlines. For example, with African time it may be understood that, for some types of casual or social events, people will turn up later than the advertised time. [ 4 ] In this case, since everyone understands that a 9 p.m. party will actually start at around 10 p.m., no-one is inconvenienced when everyone arrives at 10 p.m. [ 5 ]
In cultures that value punctuality, being late is seen as disrespectful of others' time and may be considered insulting. In such cases, punctuality may be enforced by social penalties, for example by excluding low-status latecomers from meetings entirely. Such considerations can lead on to considering the value [ clarification needed ] of punctuality in econometrics and to considering the effects of non-punctuality on others in queueing theory . [ citation needed ] | https://en.wikipedia.org/wiki/Punctuality |
Punctuated gradualism is a microevolutionary hypothesis that refers to a species that has "relative stasis over a considerable part of its total duration [and] underwent periodic, relatively rapid, morphologic change that did not lead to lineage branching". It is one of the three common models of evolution .
While the traditional model of paleontology , the phylogenetic model, posits that features evolved slowly without any direct association with speciation , the relatively newer and more controversial idea of punctuated equilibrium claims that major evolutionary changes do not happen gradually but in localized, rare, rapid events of branching speciation.
Punctuated gradualism is considered to be a variation of these models, lying somewhere in between the phyletic gradualism model and the punctuated equilibrium model. It states that speciation is not needed for a lineage to rapidly evolve from one equilibrium to another but may show rapid transitions between long-stable states.
In 1983, Malmgren and colleagues published a paper called "Evidence for punctuated gradualism in the late Neogene Globorotalia tumida lineage of planktonic foraminifera." [ 1 ] This paper studied the lineage of planktonic foraminifera, specifically the evolutionary transition from G. plesiotumida to G. tumida across the Miocene/Pliocene boundary. [ 1 ] The study found that the G. tumida lineage, while remaining in relative stasis over a considerable part of its total duration, underwent periodic, relatively rapid, morphologic change that did not lead to lineage branching. [ 1 ] Based on these findings, Malmgren and colleagues introduced a new mode of evolution and proposed to call it "punctuated gradualism." [ 1 ] There is strong evidence supporting both gradual evolution of a species over time and rapid events of species evolution separated by periods of little evolutionary change. Organisms have a great propensity to adapt and evolve depending on the circumstances.
Studies use evidence to predict how organisms evolved in the past and apply this evidence to the present. Both models of evolution can not only be seen between species, but also within a species. This is shown in a study done on the body size evolution in the radiolarian Pseudocubus vema. [ 2 ] This study presents evidence of a species exhibiting punctuated and gradual evolution, while also having periods of relative stasis. [ 2 ] Another study also used body size and looked at both micro-evolutionary patterns and fossil records. [ 3 ] The study uses quantitative data to make conclusions and is an example of another study using body size as an indicator of evolution. [ 3 ]
One study focuses on how efforts to apply only one mode of evolution to a phenomenon can be inaccurate. [ 4 ] It underscores how difficult it can be to show that only one mode of evolution is at play at any given time. [ 4 ] Another study also displays the importance of considering both models, supporting the view that both models can be at play at any time. [ 5 ] Another related study focuses on the extent of undefined area when trying to compare the two modes of evolution, which makes it difficult to isolate one model. [ 6 ]
There will always be variance in environments. Some environments present challenges that require quick adaptation for survival, while others are relatively stable. In addition, organisms differ in the number of traits upon which selection can act. These factors, along with replication time, can create barriers when working to prove a single mode of evolution as being accurate. One study expresses the importance of defining clear objectives before research is done. The study directly challenges phyletic gradualism and punctuated equilibrium and shows how many factors can come into play when comparing the two modes of evolution. [ 7 ]
Other evidence for the inclusion of both styles of evolution is the consideration of how organisms relate and may interact. Two species that diverged from each other over time may both still possess a characteristic that only one still uses. The species that doesn't use the characteristic might begin to use it for an alternate function, causing difficulty when trying to track evolution. [ 8 ] Fossils do not always show the evolution of function.
Another avenue in which evolutionary characteristics are studied is within cancer research . There are studies on many types of cancer where similarities and differences have been identified. One study compares phenotypic characteristics to genotypic characteristics. [ 9 ] The study concludes that genomic analysis supports both models and highlights the importance of studying the genotype, phenotype, and the relationship between the two. [ 9 ] One study looked at pancreatic cancer . [ 10 ] Pancreatic cancer is a rapidly progressing cancer. This study examines the punctuated genomic change that results in the rapid progression of this cancer. [ 10 ] Cancer studies are compared to analyze modes of evolution.
A similar study also looks at cancer to describe evolutionary change. This study challenges old conclusions and supports both models using more modern techniques, providing current evidence for interpretation. [ 11 ] Another study looks at breast cancer , focusing on the genome analysis whose importance some of the previous studies emphasized. [ 12 ] The study highlights how dynamic the body can be during the progression of cancer. [ 12 ] The changes can be seen in cancer cells, as they can show patterns of punctuation, gradualism, and relative stasis. [ 12 ] | https://en.wikipedia.org/wiki/Punctuated_gradualism
In topology , puncturing a manifold is removing a finite set of points from that manifold. [ 1 ] The set of points can be as small as a single point. In this case, the manifold is known as once-punctured . With the removal of a second point, it becomes twice-punctured , and so on.
Examples of punctured manifolds include the open disk (which is a sphere with a single puncture), the cylinder (which is a sphere with two punctures), [ 1 ] and the Möbius strip (which is a projective plane with a single puncture). [ 2 ]
This topology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Puncture_(topology) |
Puncture resistance denotes the relative ability of a material or object to inhibit the intrusion of a foreign object. This is defined by a test method , regulation, or technical specification . It can be measured in several ways ranging from a slow controlled puncture to a rapid impact of a sharp object or a rounded probe.
Tests devised to measure puncture resistance are generally application-specific, covering items such as roofing [ 1 ] and packaging materials , protective gloves , needle disposal facilities , [ 2 ] bulletproof vests , tires , etc. Puncture resistance in fabrics can be obtained through very tightly woven fabrics, small ceramic plates in a fabric coating, or tightly woven fabrics with a coating of hard crystals. All of the described methods significantly reduce the softness and flexibility of the fabric.
The puncture resistance will depend on the nature of the puncture attempt, with the two most important features being point sharpness and force. A fine sharp point such as a hypodermic needle will require a high ability to absorb and distribute the force to avoid penetration, but the total forces applied are still limited. The EN388 glove standard uses a more pencil-like object with a flat tip of 1 mm diameter. The EN388 test is highly dependent on the material's ability to withstand high forces through high tenacity and, to a lesser extent, to resist cutting or separation of the material.
There is no or limited correlation between the protections provided in the low force/needle protection and the high force/pencil like EN388 test. [ citation needed ]
Needle-resistant materials as described above are generally pierced by a force of between 2 and 10 N from a 25-gauge needle perpendicular to the fabric. The forces in the EN 388 test results are rated according to a score from 0 to 4 (0, <20 N; 1, 20 N; 2, 60 N; 3, 100 N; 4, >150 N). A newer test, ASTM F2878-10, is specifically designed to simulate common hypodermic needles in 21-, 25-, and 28-gauge sizes. | https://en.wikipedia.org/wiki/Puncture_resistance
In coding theory , puncturing is the process of removing some of the parity bits after encoding with an error-correction code . This has the same effect as encoding with an error-correction code with a higher rate, or less redundancy. However, with puncturing the same decoder can be used regardless of how many bits have been punctured, thus puncturing considerably increases the flexibility of the system without significantly increasing its complexity.
In some cases, a pre-defined puncturing pattern is used in the encoder. The decoder then implements the inverse operation, known as depuncturing.
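A minimal sketch of the mechanism: a fixed pattern drops some output bits of a rate-1/2 code to reach rate 2/3, and depuncturing re-inserts erasure markers so the original decoder can still be used. The pattern here is a common textbook choice, not tied to any particular standard:

```python
# Puncturing: drop coded bits where the pattern has a 0; depuncturing: put an
# erasure marker back in those slots so the same decoder geometry is kept.
PATTERN = [1, 1, 1, 0]       # keep 3 of every 4 rate-1/2 output bits -> rate 2/3
ERASURE = None               # a soft-decision decoder treats this as "no information"

def puncture(coded):
    return [b for i, b in enumerate(coded) if PATTERN[i % len(PATTERN)]]

def depuncture(received):
    out, it = [], iter(received)
    # original stream length is assumed to be a multiple of the pattern length
    n = len(received) * len(PATTERN) // sum(PATTERN)
    for i in range(n):
        out.append(next(it) if PATTERN[i % len(PATTERN)] else ERASURE)
    return out

coded = [1, 0, 1, 1, 0, 0, 1, 0]   # 8 bits from a rate-1/2 encoder (4 data bits)
sent = puncture(coded)             # 6 bits transmitted -> rate 4/6 = 2/3
print(sent, depuncture(sent))
```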
Puncturing is used in UMTS during the rate matching process. It is also used in Wi-Fi , Wi-SUN, GPRS , EDGE , DVB-T and DAB , as well as in the DRM Standards.
Puncturing is often used with the Viterbi algorithm in coding systems.
During the Radio Resource Control (RRC) connection setup procedure, the NBAP radio link setup message sent to the Node B carries the uplink puncturing limit, along with the uplink spreading factor and uplink scrambling code. [ 1 ]
Puncturing was introduced by Gustave Solomon and J. J. Stiffler in 1964. [ 2 ] [ 3 ]
This article related to telecommunications is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Punctured_code |
The Punnett square is a square diagram that is used to predict the genotypes of a particular cross or breeding experiment. It is named after Reginald C. Punnett , who devised the approach in 1905. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] The diagram is used by biologists to determine the probability of an offspring having a particular genotype . The Punnett square is a tabular summary of possible combinations of maternal alleles with paternal alleles. [ 9 ] These tables can be used to examine the genotypic outcome probabilities of the offspring of a single-trait (allele) cross, or when crossing multiple traits from the parents.
The Punnett square is a visual representation of Mendelian inheritance , a fundamental concept in genetics discovered by Gregor Mendel . [ 10 ] For multiple traits, using the "forked-line method" is typically much easier than the Punnett square. Phenotypes may be predicted with at least better-than-chance accuracy using a Punnett square, but the phenotype that may appear in the presence of a given genotype can in some instances be influenced by many other factors, as when polygenic inheritance and/or epigenetics are at work.
Zygosity refers to the grade of similarity between the alleles that determine one specific trait in an organism . In its simplest form, a pair of alleles can be either homozygous or heterozygous . Homozygosity, with homo relating to same while zygous pertains to a zygote , is seen when a combination of either two dominant or two recessive alleles code for the same trait. Recessive alleles are always written as lowercase letters. For example, using 'A' as the representative character for each allele, a homozygous dominant pair's genotype would be depicted as 'AA', while homozygous recessive is shown as 'aa'. Heterozygosity, with hetero associated with different , can only be 'Aa' (the capital letter is always presented first by convention). The phenotype of a homozygous dominant pair is 'A', or dominant , while the opposite is true for homozygous recessive . Heterozygous pairs always have a dominant phenotype. [ 11 ] To a lesser degree, hemizygosity [ 12 ] and nullizygosity [ 13 ] can also be seen in gene pairs.
"Mono-" means "one"; this cross indicates that the examination of a single trait. This could mean (for example) eye color. Each genetic locus is always represented by two letters. So in the case of eye color, say "B = Brown eyes" and "b = green eyes".
In this example, both parents have the genotype Bb . For the example of eye color, this would mean they both have brown eyes. They can produce gametes that contain either the B or the b allele. (It is conventional in genetics to use capital letters to indicate dominant alleles and lower-case letters to indicate recessive alleles.) The probability of an individual offspring's having the genotype BB is 25%, Bb is 50%, and bb is 25%. The ratio of the phenotypes is 3:1, typical for a monohybrid cross . When assessing phenotype from this, "3" of the offspring have "Brown" eyes and only one offspring has "green" eyes. (3 are "B?" and 1 is "bb")
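The 1:2:1 genotype and 3:1 phenotype ratios can be reproduced by enumerating the four equally likely gamete pairings; a small sketch:

```python
from collections import Counter
from itertools import product

# Bb x Bb monohybrid cross: each parent contributes B or b with equal
# probability; genotypes are normalized so that "bB" is counted as "Bb".
gametes = ["B", "b"]
offspring = [
    "".join(sorted(pair, key=lambda a: (a.lower(), a)))
    for pair in product(gametes, gametes)
]

print(Counter(offspring))                                  # BB:1, Bb:2, bb:1 -> 1:2:1
print(Counter("brown" if "B" in g else "green" for g in offspring))  # brown:3, green:1
```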
The way in which the B and b alleles interact with each other to affect the appearance of the offspring depends on how the gene products ( proteins ) interact (see Mendelian inheritance ). This can include lethal effects and epistasis (where one allele masks another, regardless of dominant or recessive status).
More complicated crosses can be made by looking at two or more genes. The Punnett square works, however, only if the genes are independent of each other, which means that having a particular allele of gene "A" does not alter the probability of possessing an allele of gene "B". This is equivalent to stating that the genes are not linked , so that the two genes do not tend to sort together during meiosis.
The following example illustrates a dihybrid cross between two double-heterozygote pea plants. R represents the dominant allele for shape (round), while r represents the recessive allele (wrinkled). A represents the dominant allele for color (yellow), while a represents the recessive allele (green). If each plant has the genotype RrAa , and since the alleles for shape and color genes are independent, then they can produce four types of gametes with all possible combinations: RA , Ra , rA , and ra .
Since dominant traits mask recessive traits (assuming no epistasis), there are nine combinations that have the phenotype round yellow, three that are round green, three that are wrinkled yellow, and one that is wrinkled green. The ratio 9:3:3:1 is the expected outcome when crossing two double-heterozygous parents with unlinked genes. Any other ratio indicates that something else has occurred (such as lethal alleles, epistasis, linked genes, etc.).
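The 9:3:3:1 figure can be checked by the same enumeration, combining the four gamete types of each RrAa parent under the stated assumption that the genes are unlinked:

```python
from collections import Counter
from itertools import product

# Each RrAa parent makes four equally likely gametes: RA, Ra, rA, ra.
gametes = ["".join(g) for g in product("Rr", "Aa")]

def phenotype(child_alleles: str) -> str:
    shape = "round" if "R" in child_alleles else "wrinkled"
    color = "yellow" if "A" in child_alleles else "green"
    return f"{shape} {color}"

crosses = Counter(phenotype(g1 + g2) for g1, g2 in product(gametes, gametes))
print(crosses)  # round yellow: 9, round green: 3, wrinkled yellow: 3, wrinkled green: 1
```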
The forked-line method (also known as the tree method and the branching system) can also solve dihybrid and multi-hybrid crosses. A problem is converted to a series of monohybrid crosses, and the results are combined in a tree. However, a tree produces the same result as a Punnett square in less time and with more clarity. The example below assesses another double-heterozygote cross using RrYy x RrYy. As stated above, the phenotypic ratio is expected to be 9:3:3:1 if crossing unlinked genes from two double-heterozygotes. The genotypic ratio was obtained in the diagram below; this diagram has more branches than one analyzing only the phenotypic ratio. | https://en.wikipedia.org/wiki/Punnett_square
Punya Thitimajshima (9 November 1955 – 9 May 2006), a Thai professor in the department of telecommunications engineering at King Mongkut's Institute of Technology at Ladkrabang , was the co-inventor with Claude Berrou and Alain Glavieux of a groundbreaking coding scheme called turbo codes .
Thitimajshima was educated at King Mongkut's Institute of Technology at Ladkrabang , where he earned his bachelor's degree in control engineering and master's degree in electrical engineering . Later he went to École Nationale Supérieure des Télécommunications de Bretagne and Universite de Bretagne Occidentale in France , where he studied telecommunications engineering and received a doctoral degree in 1993 for a dissertation titled "Systematic recursive convolutional codes and their application to parallel concatenation." Thitimajshima joined the faculty of KMITL in 1995 as a lecturer, and became associate professor.
He was the recipient of the 1998 Golden Jubilee Award for Technological Innovation from the IEEE Information Theory Society together with Berrou and Glavieux. [ 1 ] In 2003 he received the Outstanding Technologist Award presented by the Foundation for Promotion of Science and Technology under the Patronage of His Majesty the King of Thailand. He died on 9 May 2006 at the age of 50 from illness.
This Thai biographical article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Punya_Thitimajshima |
Puppet-rearing is a method of breeding birds in captivity for reintroduction into the wild. Chicks are fed using puppets that simulate adults of their species, worn by caregivers whose bodies are hidden from view, thereby reducing the birds' direct contact with humans. [ 1 ] [ 2 ] [ 3 ]
Through imprinting , birds associate the first care images with their parents. When eggs are artificially incubated or chicks are orphaned, it is necessary to feed the chicks by hand for as long as they cannot feed themselves. For this reason, puppets are used to ensure that the birds can be released later, having formed bonds with their own species while remaining wary of human beings. [ 4 ] [ 5 ] | https://en.wikipedia.org/wiki/Puppet-rearing
Purchasing is the procurement process a business or organization uses to acquire goods or services to accomplish its goals. Although there are several organizations that attempt to set standards in the purchasing process, processes can vary greatly between organizations.
Purchasing is part of the wider procurement process, which typically also includes expediting , supplier quality, transportation, and logistics .
Purchasing managers/directors, procurement managers/directors, or staff based in an organization's Purchasing Office, [ 1 ] guide the organization's acquisition procedures and standards and operational purchasing activities.
Most organizations use a three-way check as the foundation of their purchasing programs. [ citation needed ] This involves three departments in the organization completing separate parts of the acquisition process. The three departments do not all report to the same senior manager, to prevent unethical practices and lend credibility to the process. These departments can be purchasing, receiving and accounts payable; or engineering, purchasing and accounts payable; or a plant manager, purchasing and accounts payable. Combinations can vary significantly, but a purchasing department and accounts payable are usually two of the three departments involved. Organizations typically have simpler procedures in place for low value purchasing, for example the UK's Ministry of Defence has a separate internal policy for low value purchasing valued below £10,000. [ 2 ] When the receiving department is not involved, it is typically called a two-way check or two-way purchase order. In this situation, the purchasing department issues the purchase order marked "receipt not required". When an invoice arrives against the order, the accounts payable department will then go directly to the requestor of the purchase order to verify that the goods or services were received. This is typically what is done for goods and services that will bypass the receiving department. A few examples are software delivered electronically, NRE work (non-recurring engineering services), consulting hours, etc.
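The three-way check amounts to a simple matching rule across the three documents; a schematic sketch (the field names and the tolerance-free comparison are invented for illustration):

```python
# Schematic three-way match: pay an invoice only when the purchase order,
# the receiving report, and the invoice agree on item, quantity and price.
def three_way_match(po: dict, receipt: dict, invoice: dict) -> bool:
    return (
        po["item"] == receipt["item"] == invoice["item"]
        and receipt["quantity"] == invoice["quantity"] <= po["quantity"]
        and invoice["unit_price"] <= po["unit_price"]
    )

po      = {"item": "widget", "quantity": 100, "unit_price": 2.50}
receipt = {"item": "widget", "quantity": 100}
invoice = {"item": "widget", "quantity": 100, "unit_price": 2.50}
print(three_way_match(po, receipt, invoice))   # True -> release payment
```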
Historically, the purchasing department issued purchase orders for supplies, services, equipment, and raw materials . Then, in an effort to decrease the administrative costs associated with the repetitive ordering of basic consumable items , "blanket" or "master" agreements were put into place. These types of agreements typically have a longer duration and increased scope, to take advantage of economies of scale. When additional supplies were required, a simple release would be issued to the supplier to provide the goods or services.
Another method of decreasing administrative costs associated with repetitive contracts for common material is the use of company credit cards, also known as " Purchasing Cards " or simply "P-Cards". P-card programs vary, but all of them have internal checks and audits to ensure appropriate use. Purchasing managers realized that once contracts for low dollar value consumables are in place, procurement can take a smaller role in the operation and use of those contracts. There is still oversight in the forms of audits and monthly statement reviews, but most of their time is now available to negotiate major purchases and set up other long-term contracts. These contracts are typically renewable annually.
This trend away from the daily procurement function (tactical purchasing) resulted in several changes in the industry. The first was the reduction of personnel. Purchasing departments were now smaller. There was no need for the army of clerks processing orders for individual parts as in the past. Another change was the focus on negotiating contracts and procurement of large capital equipment. Both of these functions permitted purchasing departments to make the biggest financial contribution to the organization. A new term and job title emerged: Strategic sourcing and Sourcing Managers. These professionals not only focused on the bidding process and negotiating with suppliers , but on the entire supply function . In these roles they were able to add value and maximize savings for organizations. This value was manifested in lower inventories , fewer personnel, and getting the end product to the consumer quicker. Purchasing managers' success in these roles resulted in new assignments outside the traditional purchasing function: logistics, materials management, distribution, and warehousing. More and more purchasing managers were becoming Supply Chain Managers, handling additional functions of their organization's operation. Purchasing managers were not the only ones to become Supply Chain Managers; logistics managers, material managers, distribution managers, etc. all rose to the broader function, and some now had responsibility for the purchasing functions.
In accounting , purchases is the amount of goods a company bought during the year. It also refers to the record that should be maintained of the kind, quality, quantity, and cost of the goods bought. Purchases are added to inventory and are offset by purchase discounts and Purchase Returns and Allowances . When a purchase should be added depends on the Free On Board (FOB) policy of the trade. If the policy was FOB shipping point, the purchaser adds the new inventory on shipment, and the seller removes the item from its inventory at the same time. If the policy was FOB destination, the purchaser adds the inventory on receipt, and the seller removes the item from its inventory when it is delivered.
Goods bought for purposes other than direct selling, such as for Research and Development , are added to inventory and allocated to Research and Development expense as they are used. As a side note, equipment bought for Research and Development is not added to inventory, but is capitalized as an asset .
The revised acquisition process for major systems in the U.S. Department of Defense is shown in the next figure. The process is defined by a series of phases during which technology is defined and matured into viable concepts, which are subsequently developed and readied for production, after which the systems produced are supported in the field. [ 3 ]
The process allows for a given system to enter the process at any of the development phases. For example, a system using unproven technology would enter at the beginning stages of the process and would proceed through a lengthy period of technology maturation, while a system based on mature and proven technologies might enter directly into engineering development or, conceivably, even production. The process itself includes four phases of development. [ 3 ]
This is the process where the organization identifies potential suppliers for specified supplies, services or equipment. These suppliers' credentials and history are analyzed, along with the products or services they offer. The bidder selection process varies from organization to organization, but can include running credit reports, interviewing management, testing products, and touring facilities. This process is not always done in order of importance, but rather in order of expense. Often purchasing managers research potential bidders, obtaining information on the organizations and products from media sources and their own industry contacts. Additionally, purchasing might send a Request for Information (RFI) to potential suppliers to help gather information. Engineering would also inspect sample products to determine if the company or organization can produce the products they need. If the bidder passes both of these stages, engineering may decide to do some testing on the materials to further verify quality standards. These tests can be expensive and involve significant time from multiple technicians and engineers. Engineering management must make this decision based on the cost of the products they are likely to procure, the importance of the bidders' product to production, and other factors. Credit checks, interviewing management, touring plants as well as other steps could all be utilized if engineering, manufacturing, and supply chain managers decide they could help their decision and the cost is justifiable.
Other organizations might have minority procurement goals to consider in the selection of bidders. Organizations identify goals in the use of woman-owned or minority-owned businesses . Significant use of minority suppliers may qualify the firm as a potential bidder for a contract with a company or governmental entity looking to increase their minority supplier programs.
This selection process can include or exclude international suppliers depending on organizational goals and criteria. Companies looking to increase their Pacific Rim supplier base may exclude suppliers from the Americas, Europe, and Australia. Other organizations may be looking to purchase domestically to ensure a quicker response to orders as well as easier collaboration on design and production.
Organizational goals will dictate the criteria for the selection process of bidders. It is also possible that the product or service being procured is so specialized that the number of bidders is limited and the criteria must be very wide to permit competition. If only one firm can meet the specifications for the product, then the purchasing managers must consider utilizing a “Sole Source” option, or work with engineering to broaden the specifications if the project will permit alteration of the specifications. The sole source option is the part of the selection of bidders that acknowledges there is sometimes only one reasonable supplier for some services or products. This can be because the limited applications for the product cannot support more than one manufacturer, because of the proximity of the service provided, or because the products are newly designed or invented and competition is not yet available.
This is the process an organization utilizes to procure goods, services or equipment. Processes vary significantly from the stringent to the very informal. Large corporations and governmental entities are most likely to have stringent and formal processes. These processes can utilize specialized bid forms that require specific procedures and detail. The very stringent procedures require bids to be opened by several staff from various departments to ensure fairness and impartiality. Responses are usually very detailed. Bidders not responding exactly as specified and following the published procedures can be disqualified. Smaller private businesses are more likely to have less formal procedures. Bids can be in the form of an email to all of the bidders specifying products or services. Responses by bidders can be detailed or just the proposed dollar amount.
Most bid processes are multi-tiered. Acquisitions under a specified dollar amount can be at “user discretion”, permitting the requestor to choose whomever they want. This level can be as low as $100 or as high as $10,000, depending on the organization. The rationale is that the savings realized by processing these requests the same way as expensive items is minimal and does not justify the time and expense. Purchasing departments watch for abuses of the user discretion privilege. Acquisitions in a mid range can be processed with a slightly more formal process. This process may involve the user providing quotes from three separate suppliers. Purchasing may be asked or required to obtain the quotes. The formal bid process starts as low as $10,000 or as high as $100,000, depending on the organization. The bid usually involves a specific form the bidder fills out and must be returned by a specified deadline. Depending on the product being purchased and the organization, the bid may specify a weighted evaluation criterion. Other bids would be evaluated at the discretion of purchasing or the end users. Some bids could be evaluated by a cross-functional committee. Other bids may be evaluated by the end user or the buyer in purchasing. Especially in small, private firms the bidders could be evaluated on criteria or factors that have little if anything to do with the actual bid. Examples of these factors are the history of the bidder with the company, the history of the bidder with the company's senior management at other firms, and the bidder's breadth of products.
Technical evaluations, evaluations of the technical suitability of the quoted goods or services, if required, are normally performed prior to the commercial evaluation. During this phase of the procurement process, a technical representative of the company (usually an engineer) will review the proposal and designate each bidder as either technically acceptable or technically unacceptable.
Technical evaluation is usually carried out against a set of pre-determined technical evaluation criteria. There are two types of criteria, general criteria (whereby scores are given if they are met) and essential criteria (failing of which shall render the bid technically disqualified).
Cost of Money is calculated by multiplying the applicable currency's interest rate by the amount of money paid prior to the receipt of the goods. Had the money remained in the buyer's account, it would have drawn interest. That interest is essentially an additional cost associated with such progress or milestone payments.
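A sketch of the calculation, assuming simple (non-compounded) annual interest and illustrative figures:

```python
# Cost of money for a milestone payment made before goods are received:
# the interest the buyer forgoes on cash paid early, at simple annual interest.
def cost_of_money(amount: float, annual_rate: float, days_early: int) -> float:
    return amount * annual_rate * days_early / 365

# A $100,000 progress payment made 90 days before delivery at a 6% rate:
print(round(cost_of_money(100_000, 0.06, 90), 2))   # 1479.45
```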
The manufacturing location is taken into consideration during the evaluation stage primarily to calculate for freight costs and regional issues. For instance, it is common in Europe for factories to close during the month of August for a summer holiday. Labor agreements may also be taken into consideration and may be drawn into the evaluation if the particular region is known to have frequent labor disputes.
The manufacturing lead-time is the time from the placement of the order (or time final drawings are submitted by the buyer to the seller) until the goods are manufactured and prepared for delivery. Lead-times vary by product and can range from several days to years.
Transportation time is evaluated while comparing the delivery of goods to the buyer's required use-date. If goods are shipped from a remote port, with infrequent vessel transportation, the transportation time could exceed the schedule and adjustments would need to be made.
Delivery Charges - the charge for the goods to be delivered to a stated point.
Negotiating is a key skillset in the purchasing field. One of the goals of purchasing agents is to acquire goods on the most advantageous terms for the buying entity (or simply, the "buyer"). Purchasing agents typically attempt to decrease costs while meeting the buyer's other requirements, such as on-time delivery and compliance with the commercial terms and conditions (including the warranty , the transfer of risk, assignment, auditing rights, confidentiality , remedies, etc.).
Good negotiators, those with high levels of documented "cost savings", receive a premium within the industry relative to their compensation . Depending on the employment agreement between the buyer and the employer, buyer's cost savings can result in the creation of value to the business, and may result in a flat-rate bonus, or a percentage payout to the purchasing agent of the documented cost savings.
Purchasing departments, while they can be considered as a support function of the key business, are actually revenue generating departments. For example, if the company needs to buy US$30 million of widgets and the purchasing department secures the widgets for US$25 million, the purchasing department would have saved the company US$5 million. That savings could exceed the annual budget of the department, which in effect would pay the department's overhead: the employees' salaries, computers, office space, etc.
Post-award administration typically consists of making minor changes, additions or subtractions that in some way change the terms of the agreement or the seller's scope of supply. Such changes are often minor, but for auditing purposes must be documented in the existing agreement. Examples include increasing the quantity of a line item or changing the metallurgy of a particular component.
This final stage is the closing of the order. [ clarification needed ] | https://en.wikipedia.org/wiki/Purchasing
Purchasing power parity ( PPP ) [ 1 ] is a measure of the price of specific goods in different countries and is used to compare the absolute purchasing power of the countries' currencies . PPP is effectively the ratio of the price of a market basket at one location divided by the price of the basket of goods at a different location. The PPP inflation and exchange rate may differ from the market exchange rate because of tariffs and other transaction costs. [ 2 ]
The purchasing power parity indicator can be used to compare economies regarding their gross domestic product (GDP), labour productivity and actual individual consumption, and in some cases to analyse price convergence and to compare the cost of living between places. [ 2 ] The calculation of the PPP, according to the OECD , is made through a basket of goods that contains a "final product list [that] covers around 3,000 consumer goods and services, 30 occupations in government, 200 types of equipment goods and about 15 construction projects". [ 2 ]
Purchasing power parity is an economic term for measuring prices at different locations. It is based on the law of one price , which says that, if there are no transaction costs nor trade barriers for a particular good, then the price for that good should be the same at every location. [ 1 ] Ideally, a computer in New York and in Hong Kong should have the same price. If its price is 500 US dollars in New York and the same computer costs 2,000 HK dollars in Hong Kong, PPP theory says the exchange rate should be 4 HK dollars for every 1 US dollar.
Poverty, tariffs, transportation, and other frictions prevent the trading and purchasing of various goods, so measuring a single good can cause a large error. The PPP term accounts for this by using a basket of goods , that is, many goods with different quantities. PPP then computes an inflation and exchange rate as the ratio of the price of the basket in one location to the price of the basket in the other location. For example, if a basket consisting of 1 computer, 1 ton of rice, and half a ton of steel was 1000 US dollars in New York and the same goods cost 6000 HK dollars in Hong Kong, the PPP exchange rate would be 6 HK dollars for every 1 US dollar.
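A small sketch of the basket computation; the per-item prices are invented so that the basket totals match the figures in the example above:

```python
# PPP exchange rate as the ratio of the cost of one basket in two locations.
basket = {"computer": 1, "rice_ton": 1, "steel_ton": 0.5}
prices_ny = {"computer": 500, "rice_ton": 350, "steel_ton": 300}     # US$ (invented)
prices_hk = {"computer": 2000, "rice_ton": 2200, "steel_ton": 3600}  # HK$ (invented)

def basket_cost(prices, basket):
    return sum(qty * prices[good] for good, qty in basket.items())

# 6000 HK$ / 1000 US$ -> 6 HK dollars per US dollar, as in the example.
print(basket_cost(prices_hk, basket) / basket_cost(prices_ny, basket))
```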
The name purchasing power parity comes from the idea that, with the right exchange rate, consumers in every location will have the same purchasing power .
The value of the PPP exchange rate is very dependent on the basket of goods chosen. In general, goods are chosen that might closely obey the law of one price. Thus, one attempts to select goods which are traded easily and are commonly available in both locations. Organizations that compute PPP exchange rates use different baskets of goods and can come up with different values.
The PPP exchange rate may not match the market exchange rate. The market rate is more volatile because it reacts to changes in demand at each location. Also, tariffs and differences in the price of labour (see Balassa–Samuelson theorem ) can contribute to longer-term differences between the two rates. One use of PPP is to predict longer-term exchange rates.
Because PPP exchange rates are more stable and are less affected by tariffs, they are used for many international comparisons, such as comparing countries' GDPs or other national income statistics. These numbers often come with the label PPP-adjusted .
There can be marked differences between purchasing power adjusted incomes and those converted via market exchange rates. [ 3 ] A well-known purchasing power adjustment is the Geary–Khamis dollar (the GK dollar or international dollar ). The World Bank's World Development Indicators 2005 estimated that in 2003, one Geary–Khamis dollar was equivalent to about 1.8 Chinese yuan by purchasing power parity [ 4 ] —considerably different from the nominal exchange rate. This discrepancy has large implications; for instance, when converted via the nominal exchange rates, GDP per capita in India is about US$ 1,965 [ 5 ] while on a PPP basis, it is about Int$7,197. [ 6 ] At the other extreme, Denmark's nominal GDP per capita is around US$53,242, but its PPP figure is Int$46,602, in line with other developed nations .
There are variations in calculating PPP. The EKS method (developed by Ö. Éltető, P. Köves and B. Szulc) uses the geometric mean of the exchange rates computed for individual goods. [ 7 ] The EKS-S method (by Éltető, Köves, Szulc, and Sergeev) uses two different baskets, one for each country, and then averages the result. While these methods work for 2 countries, the exchange rates may be inconsistent if applied to 3 countries, so further adjustment may be necessary so that the rate from currency A to B times the rate from B to C equals the rate from A to C.
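The two ideas can be sketched as follows: a bilateral rate as the geometric mean of per-good price ratios, then a GEKS-style pass that restores transitivity across three or more countries. This is a simplified illustration, not the official OECD or Eurostat procedure:

```python
from math import prod

# Bilateral PPP of country j versus country k: the geometric mean of the
# per-good price ratios (the EKS idea in its simplest form).
def bilateral(prices_j, prices_k):
    ratios = [prices_j[g] / prices_k[g] for g in prices_j]
    return prod(ratios) ** (1 / len(ratios))

# GEKS step: replace each direct rate by the geometric mean over all bridge
# countries l of rate(j->l) * rate(l->k), which makes the rates transitive.
def geks(prices):
    cs = list(prices)
    d = {(j, k): bilateral(prices[j], prices[k]) for j in cs for k in cs}
    return {
        (j, k): prod(d[(j, l)] * d[(l, k)] for l in cs) ** (1 / len(cs))
        for j in cs for k in cs
    }

prices = {  # made-up prices of three goods in three countries' currencies
    "A": {"bread": 2.0, "fuel": 1.0, "rent": 900.0},
    "B": {"bread": 3.0, "fuel": 2.0, "rent": 600.0},
    "C": {"bread": 150.0, "fuel": 80.0, "rent": 40000.0},
}
r = geks(prices)
# Transitivity holds: A->B times B->C equals A->C (up to rounding).
print(r[("A", "B")] * r[("B", "C")], r[("A", "C")])
```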
Relative PPP is a weaker statement based on the law of one price, covering changes in the exchange rate and inflation rates. It seems to mirror the exchange rate more closely than absolute PPP does. [ 8 ]
Purchasing power parity exchange rate is used when comparing national production and consumption and other places where the prices of non-traded goods are considered important. (Market exchange rates are used for individual goods that are traded). PPP rates are more stable over time and can be used when that attribute is important.
PPP exchange rates help in costing but exclude profits and, above all, do not consider the different quality of goods among countries. The same product can have a different level of quality, and even of safety, in different countries, and may be subject to different taxes and transport costs. Since market exchange rates fluctuate substantially, converting one country's GDP from its own currency into another country's currency at market exchange rates can suggest that it has a higher real GDP than the other country in one year but a lower one in the next, even though neither inference reflects the reality of their relative levels of production .
If one country's GDP is converted into the other country's currency using PPP exchange rates instead of observed market exchange rates, the false inference will not occur. Essentially GDP measured at PPP controls for the different costs of living and price levels, usually relative to the United States dollar, enabling a more accurate estimate of a nation's level of production.
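The problem, and the PPP remedy, can be illustrated with toy numbers (all of them assumptions): a country whose real output and domestic prices do not change, while its market exchange rate swings between two years.

```python
gdp_local = 1_000                      # GDP in local currency, same both years
market_rate = {2023: 4.0, 2024: 2.0}   # local units per US$, swinging
ppp_rate = 3.0                         # roughly stable across both years

for year in (2023, 2024):
    at_market = gdp_local / market_rate[year]
    at_ppp = gdp_local / ppp_rate
    print(year, round(at_market), round(at_ppp))

# Market conversion makes measured GDP double (250 -> 500) with no change
# in real production; the PPP conversion stays near 333 in both years.
```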
The exchange rate reflects transaction values for traded goods between countries in contrast to non-traded goods, that is, goods produced for home-country use. Also, currencies are traded for purposes other than trade in goods and services, e.g. , to buy capital assets whose prices vary more than those of physical goods. Also, different interest rates , speculation , hedging or interventions by central banks can influence the purchasing power parity of a country in the international markets.
The PPP method is used as an alternative to correct for possible statistical bias. The Penn World Table is a widely cited source of PPP adjustments, and the associated Penn effect reflects such a systematic bias in using market exchange rates to compare outputs among countries.
For example, if the value of the Mexican peso falls by half compared to the US dollar , the Mexican gross domestic product measured in dollars will also halve. However, this exchange rate results from international trade and financial markets. It does not necessarily mean that Mexicans are poorer by half; if incomes and prices measured in pesos stay the same, they will be no worse off, assuming that imported goods are not essential to the quality of life of individuals.
Measuring income in different countries using PPP exchange rates helps to avoid this problem, as the metrics give an understanding of relative wealth regarding local goods and services at domestic markets. On the other hand, PPP is a poor measure of the relative cost of goods and services in international markets, because it does not take into account how much US$1 is actually worth in a given country. Using the above-mentioned example: in an international market, Mexicans can buy less than Americans after the fall of their currency, even though their PPP-adjusted GDP has changed little.
PPP exchange rates are also valued because market exchange rates tend to move in their general direction over a period of years. There is some value in knowing in which direction the exchange rate is more likely to shift over the long run.
In neoclassical economic theory , the purchasing power parity theory assumes that the exchange rate between two currencies actually observed in the different international markets is the one that is used in the purchasing power parity comparisons, so that the same amount of goods could actually be purchased in either currency with the same beginning amount of funds. Depending on the particular theory, purchasing power parity is assumed to hold either in the long run or, more strongly, in the short run . Theories that invoke purchasing power parity assume that in some circumstances a fall in either currency's purchasing power (a rise in its price level) would lead to a proportional decrease in that currency's valuation on the foreign exchange market.
PPP exchange rates are especially useful when official exchange rates are artificially manipulated by governments. Countries with strong government control of the economy sometimes enforce official exchange rates that make their own currency artificially strong. By contrast, the currency's black market exchange rate is artificially weak. In such cases, a PPP exchange rate is likely the most realistic basis for economic comparison. Similarly, when exchange rates deviate significantly from their long term equilibrium due to speculative attacks or carry trade, a PPP exchange rate offers a better alternative for comparison.
In 2011, the Big Mac Index was used to identify manipulation of inflation numbers by Argentina . [ 9 ]
The PPP exchange-rate calculation is controversial because of the difficulties of finding comparable baskets of goods to compare purchasing power across countries. [ 10 ]
Estimation of purchasing power parity is complicated by the fact that countries do not simply differ in a uniform price level ; rather, the difference in food prices may be greater than the difference in housing prices, while also less than the difference in entertainment prices. People in different countries typically consume different baskets of goods. It is necessary to compare the cost of baskets of goods and services using a price index . This is a difficult task because purchasing patterns and even the goods available to purchase differ across countries.
Thus, it is necessary to make adjustments for differences in the quality of goods and services. Furthermore, the basket of goods representative of one economy will vary from that of another: Americans eat more bread; Chinese more rice. Hence a PPP calculated using the US consumption as a base will differ from that calculated using China as a base. Additional statistical difficulties arise with multilateral comparisons when (as is usually the case) more than two countries are to be compared.
Various ways of averaging bilateral PPPs can provide a more stable multilateral comparison, but at the cost of distorting bilateral ones. These are all general issues of indexing; as with other price indices there is no way to reduce complexity to a single number that is equally satisfying for all purposes. Nevertheless, PPPs are typically robust in the face of the many problems that arise in using market exchange rates to make comparisons.
For example, in 2005 the price of a gallon of gasoline in Saudi Arabia was US$0.91, while in Norway it was US$6.27. [ 11 ] A PPP analysis based on such a single good would be distorted by the many variables that drive this price gap; comparisons across many more goods have to be made and factored into the overall formulation of the PPP.
When PPP comparisons are to be made over some interval of time, proper account needs to be made of inflationary effects.
In addition to methodological issues presented by the selection of a basket of goods, PPP estimates can also vary based on the statistical capacity of participating countries. The International Comparison Program (ICP), on which PPP estimates are based, requires the disaggregation of national accounts into production, expenditure or (in some cases) income, and not all participating countries routinely disaggregate their data into such categories.
Some aspects of PPP comparison are theoretically impossible or unclear. For example, there is no basis for comparison between the Ethiopian labourer who lives on teff with the Thai labourer who lives on rice , because teff is not commercially available in Thailand and rice is not in Ethiopia, so the price of rice in Ethiopia or teff in Thailand cannot be determined. As a general rule, the more similar the price structure between countries, the more valid the PPP comparison.
PPP levels will also vary based on the formula used to calculate price matrices. Possible formulas include GEKS-Fisher, Geary-Khamis, IDB, and the superlative method. Each has advantages and disadvantages.
Linking regions presents another methodological difficulty. In the 2005 ICP round, regions were compared by using a list of some 1,000 identical items for which a price could be found for 18 countries, selected so that at least two countries would be in each region. While this was superior to earlier "bridging" methods, which do not fully take into account differing quality between goods, it may serve to overstate the PPP basis of poorer countries, because the price indexing on which PPP is based will assign to poorer countries the greater weight of goods consumed in greater shares in richer countries.
There are a number of reasons that different measures do not perfectly reflect standard of living . In 2011, interviewed by the Financial Times , a spokesperson for the IMF declared: [ 12 ]
The IMF considers that GDP in purchase-power-parity (PPP) terms is not the most appropriate measure for comparing the relative size of countries to the global economy, because PPP price levels are influenced by nontraded services, which are more relevant domestically than globally. The IMF believes that GDP at market rates is a more relevant comparison.
The goods that the currency has the "power" to purchase are a basket of goods of different types: (1) local, non-tradable goods and services that are produced and sold domestically, and (2) tradable goods that can be sold on the international market.
The more that a product falls into category 1, the further its price will be from the currency exchange rate , moving towards the PPP exchange rate. Conversely, category 2 products tend to trade close to the currency exchange rate. (See also Penn effect ).
More processed and expensive products are likely to be tradable , falling into the second category, and drifting from the PPP exchange rate to the currency exchange rate. Even if the PPP "value" of the Ethiopian currency is three times stronger than the currency exchange rate, it will not buy three times as much of internationally traded goods like steel, cars and microchips, but non-traded goods like housing, services ("haircuts"), and domestically produced crops. The relative price differential between tradables and non-tradables from high-income to low-income countries is a consequence of the Balassa–Samuelson effect and gives a big cost advantage to labour-intensive production of tradable goods in low income countries (like Ethiopia ), as against high income countries (like Switzerland ).
The corporate cost advantage is nothing more sophisticated than access to cheaper workers, but because the pay of those workers goes farther in low-income countries than high, the relative pay differentials (inter-country) can be sustained for longer than would otherwise be the case. (This is another way of saying that the wage rate is based on average local productivity and that this is below the per capita productivity that factories selling tradable goods to international markets can achieve.) An equivalent cost benefit comes from non-traded goods that can be sourced locally (nearer the PPP exchange rate than the nominal exchange rate in which receipts are paid). These act as a cheaper factor of production than is available to factories in richer countries. GDP at PPP has difficulty accounting for the different quality of goods among countries.
The Bhagwati–Kravis–Lipsey view provides a somewhat different explanation from the Balassa–Samuelson theory. This view states that price levels for nontradables are lower in poorer countries because of differences in endowment of labor and capital, not because of lower levels of productivity. Poor countries have more labor relative to capital, so marginal productivity of labor is greater in rich countries than in poor countries. Nontradables tend to be labor-intensive; therefore, because labor is less expensive in poor countries and is used mostly for nontradables, nontradables are cheaper in poor countries. Wages are high in rich countries, so nontradables are relatively more expensive. [ 13 ]
PPP calculations tend to overemphasise the primary sectoral contribution, and underemphasise the industrial and service sectoral contributions to the economy of a nation.
The law of one price is weakened by transport costs and governmental trade restrictions, which make it expensive to move goods between markets located in different countries. Transport costs sever the link between exchange rates and the prices of goods implied by the law of one price. The greater the transport costs, the larger the range over which the exchange rate can fluctuate. The same is true for official trade restrictions, because customs fees affect importers' profits in the same way as shipping fees. According to Krugman and Obstfeld, "Either type of trade impediment weakens the basis of PPP by allowing the purchasing power of a given currency to differ more widely from country to country." [ 13 ] They cite the example that a dollar in London should purchase the same goods as a dollar in Chicago, which is certainly not the case.
Nontradables are primarily services and the output of the construction industry. Nontradables also lead to deviations in PPP because the prices of nontradables are not linked internationally. The prices are determined by domestic supply and demand, and shifts in those curves lead to changes in the market basket of some goods relative to the foreign price of the same basket. If the prices of nontradables rise, the purchasing power of any given currency will fall in that country. [ 13 ]
Linkages between national price levels are also weakened when trade barriers and imperfectly competitive market structures occur together. Pricing to market occurs when a firm sells the same product for different prices in different markets. This is a reflection of inter-country differences in conditions on both the demand side ( e.g. , virtually no demand for pork in Islamic states) and the supply side ( e.g. , whether the existing market for a prospective entrant's product features few suppliers or instead is already near-saturated). According to Krugman and Obstfeld, this occurrence of product differentiation and segmented markets results in violations of the law of one price and absolute PPP. Over time, shifts in market structure and demand will occur, which may invalidate relative PPP. [ 13 ]
Measurement of price levels differs from country to country. Inflation data from different countries are based on different commodity baskets; therefore, exchange rate changes do not offset official measures of inflation differences. Because it makes predictions about price changes rather than price levels, relative PPP is still a useful concept. However, changes in the relative prices of basket components can cause relative PPP to fail tests that are based on official price indexes. [ 13 ]
The global poverty line is a worldwide count of people who live below an international poverty line , referred to as the dollar-a-day line. This line represents an average of the national poverty lines of the world's poorest countries , expressed in international dollars. These national poverty lines are converted to international currency and the global line is converted back to local currency using the PPP exchange rates from the ICP. PPP exchange rates include data from the sales of high-end items unrelated to poverty, which skews the valuation of the food items and necessary goods that make up 70 percent of poor people's consumption. [ 14 ] Angus Deaton argues that PPP indices need to be reweighted for use in poverty measurement; they need to be redefined to reflect local poverty measures, not global measures, weighting local food items and excluding luxury items that are not prevalent or are not of equal value in all localities. [ 15 ]
The idea can be traced to the 16th-century School of Salamanca . In 1802, Henry Thornton "was the first economist to clearly explain the operation of the self-adjusting mechanism that keeps the exchange rate close to its purchasing power par". In 1807, John Wheatley "extended" Thornton's analysis, producing an "extreme monetarist version of the PPP doctrine", which was "adhered to" by David Ricardo , Walter Boyd , and the "famous Bullion Report (1810)". [ 16 ] In 1912, Ludwig von Mises "provided a modern 'purchasing power parity' theory of exchange rates". [ 17 ] In 1913, Ralph Hawtrey gave "a terse, and precise, statement of the purchasing-power-parity doctrine." [ 18 ]
In spite of the above antecedents, Gustav Cassel is often credited with developing the idea of PPP, especially in his two 1916 Economic Journal papers, both titled "The Present Situation of the Foreign Exchanges". In 1918, he introduced the phrase "purchasing power parity" in " Abnormal Deviations in International Exchanges " (also in The Economic Journal ). [ 19 ] [ 20 ] While Cassel's use of the PPP concept has traditionally been interpreted as his attempt to formulate a positive theory of exchange rate determination, the policy and theoretical context in which he wrote about exchange rates suggests a different interpretation. In the years immediately preceding and following the end of WWI, economists and politicians were involved in discussions on possible ways of restoring the gold standard , which would automatically restore the system of fixed exchange rates among participating nations. [ 21 ]
The stability of exchange rates was widely believed to be crucial for restoring international trade and for its further stable and balanced growth. Nobody at the time was prepared for the idea that flexible exchange rates determined by market forces do not necessarily cause chaos and instability in peacetime (the abandonment of the gold standard during the war was blamed for exactly that). Gustav Cassel was among those who supported the idea of restoring the gold standard, although with some alterations. The question Cassel tried to answer in his works of that period was not how exchange rates are determined in a free market, but rather how to determine the appropriate level at which exchange rates were to be fixed during the restoration of the system of fixed exchange rates. [ 21 ]
His recommendation was to fix exchange rates at the level corresponding to the PPP, as he believed that this would prevent trade imbalances between trading nations. Thus, PPP doctrine proposed by Cassel was not really a positive (descriptive) theory of exchange rate determination (as Cassel was perfectly aware of numerous factors that prevent exchange rates from stabilizing at PPP level if allowed to float), but rather a normative (prescriptive) policy advice, formulated in the context of discussions on returning to the gold standard. [ 21 ]
Each month, the Organisation for Economic Co-operation and Development (OECD) measures the differences in price levels between its member countries by calculating the ratios of PPPs for private final consumption expenditure to exchange rates. The OECD table below indicates the number of US dollars needed in each of the countries listed to buy the same representative basket of consumer goods and services that would cost US$100 in the United States.
According to the table, an American living or travelling in Switzerland on an income denominated in US dollars would find that country to be the most expensive of the group, having to spend 27% more US dollars to maintain a standard of living comparable to the US in terms of consumption .
Since global PPP estimates—such as those provided by the ICP—are not calculated annually, but for a single year, PPP exchange rates for years other than the benchmark year need to be extrapolated. [ 24 ] One way of doing this is by using the country's GDP deflator . To calculate a country's PPP exchange rate in Geary–Khamis dollars for a particular year, the calculation proceeds in the following manner: [ 25 ]
{\displaystyle \mathrm {PPPrate} _{X,i}=\mathrm {PPPrate} _{X,b}\cdot {\frac {\mathrm {GDPdef} _{X,i}/\mathrm {GDPdef} _{X,b}}{\mathrm {GDPdef} _{U,i}/\mathrm {GDPdef} _{U,b}}}}
where PPPrate X,i is the PPP exchange rate of country X for year i, PPPrate X,b is the PPP exchange rate of country X for the benchmark year, PPPrate U,b is the PPP exchange rate of the United States (US) for the benchmark year (equal to 1), GDPdef X,i is the GDP deflator of country X for year i, GDPdef X,b is the GDP deflator of country X for the benchmark year, GDPdef U,i is the GDP deflator of the US for year i, and GDPdef U,b is the GDP deflator of the US for the benchmark year.
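As a sketch, the formula translates directly into code; the deflator values in the usage line are hypothetical:

```python
def extrapolate_ppp(ppp_x_b, gdpdef_x_i, gdpdef_x_b, gdpdef_u_i, gdpdef_u_b):
    """PPP exchange rate of country X in year i, extrapolated from the
    benchmark-year rate using the GDP deflators of X and the US."""
    return ppp_x_b * (gdpdef_x_i / gdpdef_x_b) / (gdpdef_u_i / gdpdef_u_b)

# X's price level rose 20% since the benchmark year, the US's rose 5%:
print(extrapolate_ppp(3.0, 120, 100, 105, 100))  # ~3.43 local units per Int$
```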
The bank UBS produces its "Prices and Earnings" report every three years. The 2012 report says, "Our reference basket of goods is based on European consumer habits and includes 122 positions". [ 26 ]
To teach PPP, the basket of goods is often simplified to a single good.
The Big Mac Index is a simple implementation of PPP where the basket contains a single good: a Big Mac burger from McDonald's restaurants. The index was created and popularized by The Economist in 1986 as a way to teach economics and to identify over- and under-valued currencies. [ 27 ]
The Big Mac has the value of being a relatively standardized consumer product that includes input costs from a wide range of sectors in the local economy, such as agricultural commodities (beef, bread, lettuce, cheese), labor (blue and white collar), advertising, rent and real estate costs, transportation, etc.
There are some problems with the Big Mac Index. A Big Mac is perishable and not easily transported. That means the law of one price is not likely to keep prices the same in different locations. McDonald's restaurants are not present in every country, which limits the index's usage. Moreover, Big Macs are not sold at every McDonald's ( notably in India ), which limits its usage further. [ 28 ]
In the white paper, "Burgernomics", the authors computed a correlation of 0.73 between the Big Mac Index's prices and prices calculated using the Penn World Tables. This single-good index captures most, but not all, of the effects captured by more professional (and more complex) PPP measurement. [ 8 ]
The Economist uses The Big Mac Index to identify overvalued and undervalued currencies. That is, ones where the Big Mac is expensive or cheap, when measured using current exchange rates. The January 2019 article states that a Big Mac costs HK$20.00 in Hong Kong and US$5.58 in the United States. [ 29 ] The implied PPP exchange rate is 3.58 HK$ per US$. The difference between this and the actual exchange rate of 7.83 suggests that the Hong Kong dollar is 54.2% undervalued. That is, it is cheaper to convert US dollars into Hong Kong dollars and buy a Big Mac in Hong Kong than it is to buy a Big Mac directly in US dollars. [ citation needed ]
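The calculation behind these figures is simple enough to verify directly, using the numbers from the January 2019 example above:

```python
def implied_ppp(local_price, us_price):
    """Implied PPP exchange rate: local currency units per US$."""
    return local_price / us_price

def valuation(local_price, us_price, market_rate):
    """Relative valuation of the local currency; negative means undervalued."""
    return implied_ppp(local_price, us_price) / market_rate - 1.0

print(implied_ppp(20.00, 5.58))      # ~3.58 HK$ per US$
print(valuation(20.00, 5.58, 7.83))  # ~-0.542, i.e. 54.2% undervalued
```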
Similar to the Big Mac Index , the KFC Index measures PPP with a basket that contains a single item: a KFC Original 12/15 pc. bucket. The Big Mac Index cannot be used for most countries in Africa because most do not have a McDonald's restaurant. Thus, the KFC Index was created by Sagaci Research (a market research firm focusing solely on Africa) to identify over- and under-valued currencies in Africa.
For example, the average price of KFC's Original 12 pc. Bucket in the United States in January 2016 was $20.50; while in Namibia it was only $13.40 at market exchange rates. Therefore, the index states the Namibian dollar was undervalued by 33% at that time.
Like the Big Mac Index , the iPad index (elaborated by CommSec ) compares an item's price in various locations. Unlike the Big Mac , however, each iPad is produced in the same place (except for the model sold in Brazil) and all iPads (within the same model) have identical performance characteristics. Price differences are therefore a function of transportation costs, taxes, and the prices that may be realized in individual markets. In 2013, an iPad cost about twice as much in Argentina as in the United States.
Consumer price index (CPI) and purchasing power parity (PPP) conversion factors share conceptual similarities. [ 34 ] The CPI measures differences in the levels of prices of goods and services over time within a country, whereas PPPs measure differences in price levels across countries at a point in time. | https://en.wikipedia.org/wiki/Purchasing_power_parity |
In solid mechanics , pure bending (also known as the theory of simple bending ) is a condition of stress where a bending moment is applied to a beam without the simultaneous presence of axial , shear , or torsional forces .
Pure bending occurs only under a constant bending moment ( M ), since the shear force ( V ), which is equal to {\displaystyle {\tfrac {dM}{dx}}} , has to be equal to zero. In reality, a state of pure bending does not exist in practice, because such a state would require an absolutely weightless member. The state of pure bending is an approximation made to derive formulas.
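A short symbolic check of the V = dM/dx relationship, using the textbook four-point bending setup in which the region between the two loads approximates pure bending (the load layout here is an assumed illustration, not taken from the article):

```python
import sympy as sp

# Simply supported beam of span L with equal point loads P at x = a and
# x = L - a; by symmetry each support reaction equals P.
x, P, a, L = sp.symbols("x P a L", positive=True)

M_mid = P * x - P * (x - a)   # bending moment for a < x < L - a
V_mid = sp.diff(M_mid, x)     # shear force V = dM/dx

print(sp.simplify(M_mid))     # P*a: the moment is constant between the loads
print(V_mid)                  # 0: zero shear, the condition for pure bending
```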
Notes: (1) Homogeneous means the material is of the same kind throughout. (2) Isotropic means that the elastic properties are equal in all directions. | https://en.wikipedia.org/wiki/Pure_bending |
A pure fusion weapon is a hypothetical hydrogen bomb design that does not need a fission "primary" explosive to ignite the fusion of deuterium and tritium , two heavy isotopes of hydrogen used in fission-fusion thermonuclear weapons . Such a weapon would require no fissile material and would therefore be much easier to develop in secret than existing weapons. Separating weapons-grade uranium (U-235) or breeding plutonium (Pu-239) requires a substantial and difficult-to-conceal industrial investment, and blocking the sale and transfer of the needed machinery has been the primary mechanism to control nuclear proliferation to date. [ 1 ]
All current thermonuclear weapons use a fission bomb as a first stage to create the high temperatures and pressures necessary to start a fusion reaction between deuterium and tritium in a second stage. For many years, nuclear weapon designers have researched whether it is possible to create high enough temperatures and pressures inside a confined space to ignite a fusion reaction, without using fission. Pure fusion weapons offer the possibility of generating arbitrarily small nuclear yields because no critical mass of fissile fuel need be assembled for detonation, as with a conventional fission primary needed to spark a fusion explosion. There is also the advantage of reduced collateral damage stemming from fallout because these weapons would not create the highly radioactive byproducts made by fission-type weapons. These weapons would be lethal not only because of their explosive force, which could be large compared to bombs based on chemical explosives, but also because of the neutrons they generate.
While various neutron source devices have been developed, some of them based on fusion reactions, none of them are able to produce a net energy yield, either in controlled form for energy production or uncontrolled for a weapon.
Despite the many millions of dollars spent by the U.S. between 1952 and 1992 to produce a pure fusion weapon, no measurable success was ever achieved. In 1998, the U.S. Department of Energy (DOE) released a restricted data declassification decision stating that even if the DOE made a substantial investment in the past to develop a pure fusion weapon, "the U.S. is not known to have and is not developing a pure fusion weapon and no credible design for a pure fusion weapon resulted from the DOE investment". The power densities needed to ignite a fusion reaction still seem attainable only with the aid of a fission explosion, or with large apparatus such as powerful lasers like those at the National Ignition Facility , the Sandia Z-pinch machine , or various magnetic tokamaks . Regardless of any claimed advantages of pure fusion weapons, building those weapons does not appear to be feasible using currently available technologies and many [ who? ] have expressed concern that pure fusion weapons research and development would subvert the intent of the Nuclear Non-Proliferation Treaty and the Comprehensive Test Ban Treaty .
It has been claimed that it is possible to conceive of a crude, deliverable, pure fusion weapon, using only present-day, unclassified technology. The weapon design [ 2 ] weighs approximately 3 tonnes, and might have a total yield of approximately 3 tonnes of TNT. The proposed design uses a large explosively pumped flux compression generator to produce the high power density required to ignite the fusion fuel. From the point of view of explosive damage, such a weapon would have no clear advantages over a conventional explosive, but the massive neutron flux could deliver a lethal dose of radiation to humans within a 500-meter radius (most of those fatalities would occur over a period of months, rather than immediately).
Some researchers have examined the use of antimatter [ 3 ] as an alternative fusion trigger, mainly in the context of antimatter-catalyzed nuclear pulse propulsion but also nuclear weapons. [ 4 ] [ 5 ] [ 6 ] Such a system, in a weapons context, would have many of the desired properties of a pure fusion weapon. The technical barriers to producing and containing the required quantities of antimatter appear formidable, well beyond present capabilities.
Induced gamma emission is another approach that is currently being researched. Very high energy-density chemicals such as ballotechnics and others have also been suggested as a means of triggering a pure fusion weapon. [ citation needed ]
Nuclear isomers have also been investigated for use in pure fusion weaponry. Hafnium and tantalum isomers can be induced to emit very strong gamma radiation . Gamma emission from these isomers may have enough energy to start a thermonuclear reaction, without requiring any fissile material. [ citation needed ] | https://en.wikipedia.org/wiki/Pure_fusion_weapon |
Pure inductive logic ( PIL ) is the area of mathematical logic concerned with the philosophical and mathematical foundations of probabilistic inductive reasoning . It combines classical predicate logic and probability theory ( Bayesian inference ). Probability values are assigned to sentences of a first-order relational language to represent degrees of belief that should be held by a rational agent. Conditional probability values represent degrees of belief based on the assumption of some received evidence.
PIL studies prior probability functions on the set of sentences and evaluates the rationality of such prior probability functions through principles that such functions should arguably satisfy. Each of the principles directs the function to assign probability values and conditional probability values to sentences in some respect rationally. Not all desirable principles of PIL are compatible, so no prior probability function exists that satisfies them all. Some prior probability functions, however, are distinguished by satisfying an important collection of principles.
Inductive logic started to take a clearer shape in the early 20th century in the work of William Ernest Johnson and John Maynard Keynes , and was further developed by Rudolf Carnap . Carnap introduced the distinction between pure and applied inductive logic, [ 1 ] and the modern Pure Inductive Logic evolves along the lines of the pure, uninterpreted approach envisaged by Carnap.
In its basic form, PIL uses first-order logic without equality , with the usual connectives ∧ , ∨ , ¬ , → {\displaystyle \wedge ,\vee ,\neg ,\to } ( and, or, not and implies respectively), quantifiers ∃ , ∀ , {\displaystyle \exists ,\forall ,} finitely many predicate (relation) symbols, and countably many constant symbols a 1 , a 2 , a 3 , … {\displaystyle a_{1},a_{2},a_{3},\ldots \,} .
There are no function symbols. The predicate symbols can be unary, binary or of higher arities . The finite set of predicate symbols may vary while the rest of the language is fixed. It is a convention to refer to the language as L {\displaystyle L} and write L = { R 1 , R 2 , … , R q } {\displaystyle L=\{R_{1},R_{2},\ldots ,R_{q}\}} , where the R i {\displaystyle R_{i}} list the predicate symbols.
The set of all sentences is denoted S L {\displaystyle SL} . If a sentence is written with constants appearing in it listed then it is assumed that the list includes at least all those that appear. T L {\displaystyle {\cal {T}}L} is the set of structures for L {\displaystyle L} with universe { a 1 , a 2 , a 3 , … } {\displaystyle \{a_{1},a_{2},a_{3},\ldots \}} and with each constant symbol a i {\displaystyle a_{i}} interpreted as itself.
A probability function for sentences of L {\displaystyle L} is a function w {\displaystyle w} with domain S L {\displaystyle SL} and values in the unit interval [ 0 , 1 ] {\displaystyle [0,1]} satisfying the following conditions: (1) if ⊨ θ {\displaystyle \models \theta } then w ( θ ) = 1 {\displaystyle w(\theta )=1} ; (2) if ⊨ ¬ ( θ ∧ ϕ ) {\displaystyle \models \neg (\theta \wedge \phi )} then w ( θ ∨ ϕ ) = w ( θ ) + w ( ϕ ) {\displaystyle w(\theta \vee \phi )=w(\theta )+w(\phi )} ; and (3) w ( ∃ x ψ ( x ) ) = lim n → ∞ w ( ⋁ i = 1 n ψ ( a i ) ) {\displaystyle w(\exists x\,\psi (x))=\lim _{n\to \infty }w{\bigl (}\bigvee _{i=1}^{n}\psi (a_{i}){\bigr )}} .
This last condition, which goes beyond the standard Kolmogorov axioms (for finite additivity) is referred to as Gaifman 's Axiom and it is intended to capture the idea that the a i {\displaystyle a_{i}} exhaust the universe.
For a probability function w {\displaystyle w} and a sentence ϕ {\displaystyle \phi } with w ( ϕ ) > 0 {\displaystyle w(\phi )>0} , the corresponding conditional probability function w ( . | ϕ ) {\displaystyle w(\,.|\,\phi )} is defined by w ( θ | ϕ ) = w ( θ ∧ ϕ ) / w ( ϕ ) {\displaystyle w(\theta \,|\,\phi )={\frac {w(\theta \wedge \phi )}{w(\phi )}}} .
Unlike belief functions in many-valued logics , it is not the case that the probability value of a compound sentence is determined by the probability values of its components. However, probability respects the classical semantics: logically equivalent sentences must be given the same probability. Hence logically equivalent sentences are often identified.
A state description for a finite set of constants is a conjunction of atomic sentences (predicates or their negations) instantiated exclusively by these constants, such that for any eligible atomic sentence either it or its negation (but not both) appears in the conjunction.
Any probability function is uniquely determined by its values on state descriptions. To define a probability function, it suffices to specify nonnegative values of all state descriptions for a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} (for all n {\displaystyle n} ) so that the values of all state descriptions for a 1 , … , a n , a n + 1 {\displaystyle a_{1},\ldots ,a_{n},a_{n+1}} extending a given state description for a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} sum to the value of the state description they all extend, with the convention that the (only) state description for no constants is a tautology and that has value 1 {\displaystyle 1} .
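The bookkeeping in this definition is easy to make concrete. The sketch below uses an illustrative encoding (an assumption, not the article's notation): a state description for n constants over q unary predicates is a tuple assigning one of the 2^q atoms to each constant, and the summing condition is checked for the uniform function that the article later calls c ∞ .

```python
from itertools import product

def state_descriptions(n, q=1):
    """All state descriptions for a_1..a_n over q unary predicates, encoded
    as tuples giving each constant one of the 2**q atoms."""
    return list(product(range(2 ** q), repeat=n))

def uniform_w(n, q=1):
    """The function assigning every state description for n constants the
    same value 2**(-n*q) (the article's c_infinity)."""
    return {sd: 2 ** (-n * q) for sd in state_descriptions(n, q)}

w2, w3 = uniform_w(2), uniform_w(3)
sd = (0, 1)  # a state description for a_1, a_2
# The values of its extensions to a_1..a_3 sum to its own value:
assert abs(sum(v for ext, v in w3.items() if ext[:2] == sd) - w2[sd]) < 1e-12
```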
If Θ {\displaystyle \Theta } is a state description for a set of constants including a i , a j {\displaystyle a_{i},a_{j}} then it is said that a i , a j {\displaystyle a_{i},a_{j}} are indistinguishable in Θ {\displaystyle \Theta } , a i ∼ Θ a j {\displaystyle a_{i}\sim _{\Theta }a_{j}} , just when upon adding equality to the language (and axioms of equality to the logic) the sentence Θ ∧ a i = a j {\displaystyle \Theta \wedge a_{i}=a_{j}} is consistent. ∼ Θ {\displaystyle \,\sim _{\Theta }} is an equivalence relation .
In the special case of Unary PIL, all the predicates R 1 , … , R q {\displaystyle R_{1},\ldots ,R_{q}} are unary. Formulae of the form ± R 1 ( x ) ∧ ± R 2 ( x ) ∧ … ∧ ± R q ( x ) {\displaystyle \pm R_{1}(x)\wedge \pm R_{2}(x)\wedge \ldots \wedge \pm R_{q}(x)} , where ± R {\displaystyle \pm R} stands for one of R {\displaystyle R} , ¬ R {\displaystyle \neg R} , are called atoms. It is assumed that they are listed in some fixed order as β 1 , β 2 , … , β 2 q {\displaystyle \beta _{1},\beta _{2},\ldots ,\beta _{2^{q}}} .
A state description specifies an atom for each constant involved in it, and it can be written as a conjunction of these atoms instantiated by the corresponding constants. Two constants are indistinguishable in the state description if it specifies the same atom for both of them.
Assume a rational agent inhabits a structure in T L {\displaystyle {\cal {T}}L} but knows nothing about which one it is. What probability function w {\displaystyle w} should s/he adopt when w ( θ ) {\displaystyle w(\theta )} is to represent his/her degree of belief that a sentence θ {\displaystyle \theta } is true in this ambient structure?
The following principles have been proposed as desirable properties of a rational prior probability function w {\displaystyle w} for L {\displaystyle L} .
The constant exchangeability principle, Ex. The probability of a sentence θ ( a 1 , a 2 , … , a m ) {\displaystyle \theta (a_{1},a_{2},\ldots ,a_{m})} does not change when the a 1 , a 2 , … , a m {\displaystyle a_{1},a_{2},\ldots ,a_{m}} in it are replaced by any other m {\displaystyle m} -tuple of (distinct) constants.
The principle of predicate exchangeability, Px. If R , R ′ {\displaystyle R,R'} are predicates of the same arity then, for a sentence θ {\displaystyle \theta } , w ( θ ) = w ( θ ′ ) {\displaystyle w(\theta )=w(\theta ')} , where θ ′ {\displaystyle \theta '} is the result of simultaneously replacing R {\displaystyle R} by R ′ {\displaystyle R'} and R ′ {\displaystyle R'} by R {\displaystyle R} throughout θ {\displaystyle \theta } .
The strong negation principle, SN. For a predicate R {\displaystyle R} and sentence θ {\displaystyle \theta } , w ( θ ) = w ( θ ′ ) {\displaystyle w(\theta )=w(\theta ')} , where θ ′ {\displaystyle \theta '} is the result of simultaneously replacing R {\displaystyle R} by ¬ R {\displaystyle \neg R} and ¬ R {\displaystyle \neg R} by R {\displaystyle R} throughout θ {\displaystyle \theta } .
The principle of regularity, Reg. If a quantifier-free sentence θ {\displaystyle \theta } is satisfiable then w ( θ ) > 0 {\displaystyle w(\theta )>0} .
The principle of super regularity (universal certainty), SReg. If a sentence θ {\displaystyle \theta } is satisfiable then w ( θ ) > 0 {\displaystyle w(\theta )>0} .
The constant irrelevance principle, IP. If sentences θ , ϕ {\displaystyle \theta ,\phi } have no constants in common then w ( θ ∧ ϕ ) = w ( θ ) ⋅ w ( ϕ ) {\displaystyle w(\theta \wedge \phi )=w(\theta )\cdot w(\phi )} .
The weak irrelevance principle, WIP. If sentences θ , ϕ {\displaystyle \theta ,\phi } have no constants nor predicates in common then w ( θ ∧ ϕ ) = w ( θ ) ⋅ w ( ϕ ) {\displaystyle w(\theta \wedge \phi )=w(\theta )\cdot w(\phi )} .
Language invariance principle, Li. There is a family of probability functions w J {\displaystyle w^{J}} , one on each language J {\displaystyle J} , all satisfying Px and Ex, such that w L = w {\displaystyle w^{L}=w} and, if all predicates of J {\displaystyle J} belong also to K {\displaystyle K} , then w J {\displaystyle w^{J}} and w K {\displaystyle w^{K}} agree on sentences of J {\displaystyle J} .
The (strong) counterpart principle, CP. If θ , θ ′ {\displaystyle \theta ,\theta '} are sentences such that θ ′ {\displaystyle \theta '} is the result of replacing some constant/relation symbols in θ {\displaystyle \theta } by new constant/relation symbols of the same arity not occurring in θ {\displaystyle \theta } then
(SCP) If moreover θ ″ {\displaystyle \theta ''} is the result of replacing the same and possibly also additional constant/relation symbols in θ {\displaystyle \theta } by new constant/relation symbols of the same arity not occurring in θ {\displaystyle \theta } then
The Invariance Principle, INV. If F {\displaystyle F} is an isomorphism of the Lindenbaum-Tarski algebra of sentences of L {\displaystyle L} supported by some permutation μ {\displaystyle \mu } of T L {\displaystyle {\cal {T}}L} in the sense that for sentences θ , ϕ {\displaystyle \theta ,\phi } ,
then w ( θ ) = w ( ϕ ) {\displaystyle w(\theta )=w(\phi )} .
The Permutation Invariance Principle, PIP. As INV except that F {\displaystyle F} is additionally required to map ( equivalence classes of) state descriptions to (equivalence classes of) state descriptions.
The Spectrum Exchangeability Principle, Sx. The probability w ( Θ ) {\displaystyle w(\Theta )} of a state description Θ {\displaystyle \Theta } depends only on the spectrum of Θ {\displaystyle \Theta } , that is, on the multiset of sizes of equivalence classes with respect to the equivalence relation ∼ Θ {\displaystyle \sim _{\Theta }} .
Li with Sx. As the Language Invariance Principle but all the probability functions in the family also satisfy Spectrum Exchangeability.
The Principle of Induction, PI. Let Θ {\displaystyle \Theta } be a state description and a k {\displaystyle a_{k}} a constant not appearing in Θ {\displaystyle \Theta } . Let Φ {\displaystyle \Phi } , Ψ {\displaystyle \Psi } be state descriptions extending Θ {\displaystyle \Theta } to include (just) a k {\displaystyle a_{k}} . If a k {\displaystyle a_{k}} is ∼ Φ {\displaystyle \sim _{\Phi }} -equivalent to some and at least as many constants as it is ∼ Ψ {\displaystyle \sim _{\Psi }} -equivalent to then w ( Φ ∣ Θ ) ≥ w ( Ψ ∣ Θ ) {\displaystyle w(\Phi \mid \Theta )\geq w(\Psi \mid \Theta )} .
The Principle of Instantial Relevance, PIR. For a sentence θ {\displaystyle \theta } , atom β {\displaystyle \beta } and constants a k , a m {\displaystyle a_{k},a_{m}} not appearing in θ {\displaystyle \theta } , w ( β ( a m ) ∣ β ( a k ) ∧ θ ) ≥ w ( β ( a m ) ∣ θ ) {\displaystyle w(\beta (a_{m})\mid \beta (a_{k})\wedge \theta )\geq w(\beta (a_{m})\mid \theta )} .
The Generalized Principle of Instantial Relevance, GPIR. For quantifier-free sentences ψ ( a k ) , ϕ ( a m ) , θ {\displaystyle \psi (a_{k}),\phi (a_{m}),\theta } with constants a k , a m {\displaystyle a_{k},a_{m}} not appearing in θ {\displaystyle \theta } , if ψ ( x ) ⊨ ϕ ( x ) {\displaystyle \psi (x)\models \phi (x)} then
Johnson Sufficientness Principle, JSP. For a state description Θ {\displaystyle \Theta } for n {\displaystyle n} constants, atom β {\displaystyle \beta } and constant a k {\displaystyle a_{k}} not appearing in Θ {\displaystyle \Theta } , the probability w ( β ( a k ) ∣ Θ ) {\displaystyle w(\beta (a_{k})\mid \Theta )} depends only on n {\displaystyle n} and on the number of constants for which Θ {\displaystyle \Theta } specifies β {\displaystyle \beta } .
The Principle of Atom Exchangeability, Ax. If τ {\displaystyle \tau } is a permutation of { 1 , 2 , … , 2 q } {\displaystyle \{1,2,\ldots ,2^{q}\}} and Θ {\displaystyle \Theta } is a state description expressed as a conjunction of instantiated atoms then w ( Θ ) = w ( Θ ′ ) {\displaystyle w(\Theta )=w(\Theta ')} where Θ ′ {\displaystyle \Theta '} obtains from Θ {\displaystyle \Theta } upon replacing each β i {\displaystyle \beta _{i}} by β τ ( i ) {\displaystyle \beta _{\tau (i)}} .
Reichenbach's Axiom, RA. Let β h i {\displaystyle \beta _{h_{i}}} for i = 1 , 2 , 3 , … {\displaystyle i=1,2,3,\ldots } be an infinite sequence of atoms and β {\displaystyle \beta } an atom. Then as n {\displaystyle n} tends to ∞ {\displaystyle \infty } , the difference between the conditional probability w ( β ( a n + 1 ) ∣ β h 1 ( a 1 ) ∧ … ∧ β h n ( a n ) ) {\displaystyle w(\beta (a_{n+1})\mid \beta _{h_{1}}(a_{1})\wedge \ldots \wedge \beta _{h_{n}}(a_{n}))} and the proportion of occurrences of β {\displaystyle \beta } amongst the β h 1 , β h 2 , … , β h n {\displaystyle \beta _{h_{1}},\beta _{h_{2}},\ldots ,\beta _{h_{n}}} tends to 0 {\displaystyle 0} .
Principle of Induction for Unary languages, UPI. For a state description Θ {\displaystyle \Theta } , atoms β i , β j {\displaystyle \beta _{i},\beta _{j}} and constant a k {\displaystyle a_{k}} not appearing in Θ {\displaystyle \Theta } , if Θ {\displaystyle \Theta } specifies β i {\displaystyle \beta _{i}} for at least as many constants as β j {\displaystyle \beta _{j}} then w ( β i ( a k ) ∣ Θ ) ≥ w ( β j ( a k ) ∣ Θ ) {\displaystyle w(\beta _{i}(a_{k})\mid \Theta )\geq w(\beta _{j}(a_{k})\mid \Theta )} .
Recovery. Whenever Ψ ( a 1 , a 2 , … , a n ) {\displaystyle \Psi (a_{1},a_{2},\ldots ,a_{n})} is a state description then there is another state description Φ ( a n + 1 , a n + 2 , … , a h ) {\displaystyle \Phi (a_{n+1},a_{n+2},\ldots ,a_{h})} such that w ( Φ ∧ Ψ ) ≠ 0 {\displaystyle w(\Phi \wedge \Psi )\neq 0} and for any quantifier-free sentence θ ( a h + 1 , a h + 2 , … , a h + g ) {\displaystyle \theta (a_{h+1},a_{h+2},\ldots ,a_{h+g})} ,
Unary Language Invariance Principle, ULi. As Li, but with the languages restricted to the unary ones.
ULi with Ax. As ULi but with all the probability functions in the family also satisfying Atom Exchangeability.
Sx implies Ex, Px and SN.
PIP + Ex implies Sx.
INV implies PIP and Ex.
Li implies CP and SCP.
Li with Sx implies PI.
Ex implies PIR.
Ax is equivalent to PIP.
Ax+Ex implies UPI.
Ax+Ex is equivalent to Sx.
ULi with Ax implies Li with Sx.
Functions V M {\displaystyle V_{M}} . For a given structure M ∈ T L {\displaystyle M\in {\cal {T}}L} and θ ∈ S L {\displaystyle \theta \in SL} , V M ( θ ) = 1 {\displaystyle V_{M}(\theta )=1} if M ⊨ θ {\displaystyle M\models \theta } and V M ( θ ) = 0 {\displaystyle V_{M}(\theta )=0} otherwise.
Functions ω Ψ {\displaystyle \omega ^{\Psi }} . For a given state description Ψ ( a 1 , a 2 , … , a K ) {\displaystyle \Psi (a_{1},a_{2},\ldots ,a_{K})} , ω Ψ {\displaystyle \,\omega ^{\Psi }} is defined via specifying its values for state descriptions as follows. ω Ψ ( Θ ( a 1 , a 2 , … , a n ) ) {\displaystyle \,\omega ^{\Psi }(\Theta (a_{1},a_{2},\ldots ,a_{n}))} is the probability that when a h 1 , a h 2 , … , a h n {\displaystyle a_{h_{1}},a_{h_{2}},\ldots ,a_{h_{n}}} are randomly picked from { a 1 , … , a K } {\displaystyle \{a_{1},\ldots ,a_{K}\}} , with replacement and according to the uniform distribution, then Ψ ( a 1 , … , a K ) ⊨ Θ ( a h 1 , a h 2 , … , a h n ) . {\displaystyle \Psi (a_{1},\ldots ,a_{K})\models \Theta (a_{h_{1}},a_{h_{2}},\ldots ,a_{h_{n}}).}
Functions ∘ ( ω Ψ ) {\displaystyle ^{\circ }\!(\omega ^{\Psi })} . As above but employing a non-standard universe (starting with a possibly non-standard state description Ψ {\displaystyle \Psi } ) to obtain the standard ∘ ( ω Ψ ) {\displaystyle ^{\circ }\!(\omega ^{\Psi })} .
∙ {\displaystyle \bullet } The ∘ ( ω Ψ ) {\displaystyle ^{\circ }\!(\omega ^{\Psi })} are the only probability functions that satisfy Ex and IP.
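In the unary case the definition of ω Ψ {\displaystyle \omega ^{\Psi }} factorizes, since the n picks are independent and each conjunct of Θ {\displaystyle \Theta } constrains only one pick. A small sketch under the same tuple encoding as above (an illustrative assumption, not the article's notation):

```python
def omega_psi(psi_atoms, theta_atoms):
    """Unary omega^Psi: the probability that constants picked uniformly at
    random, with replacement, from Psi's constants realize Theta.  Each
    factor is the share of Psi's constants carrying the required atom."""
    K = len(psi_atoms)
    prob = 1.0
    for atom in theta_atoms:
        prob *= sum(1 for a in psi_atoms if a == atom) / K
    return prob

# Psi assigns atom 0 to three constants and atom 1 to one; the chance that
# two random picks realize Theta = (0, 1) is (3/4) * (1/4).
print(omega_psi([0, 0, 0, 1], (0, 1)))  # 0.1875
```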
Functions u p ¯ {\displaystyle u^{\overline {p}}} . For a given infinite sequence p ¯ = ⟨ p 0 , p 1 , p 2 , p 3 , … ⟩ {\displaystyle {\overline {p}}=\langle p_{0},p_{1},p_{2},p_{3},\ldots \rangle } of non-negative real numbers such that p 1 ≥ p 2 ≥ p 3 ≥ … ≥ 0 {\displaystyle p_{1}\geq p_{2}\geq p_{3}\geq \ldots \geq 0} and ∑ i = 0 ∞ p i = 1 {\displaystyle \sum _{i=0}^{\infty }p_{i}=1} ,
u p ¯ {\displaystyle u^{\overline {p}}} is defined via specifying its values for state descriptions as follows:
For a sequence c → = ⟨ c 1 , c 2 , … , c n ⟩ {\displaystyle {\vec {c}}=\langle c_{1},c_{2},\ldots ,c_{n}\rangle } of natural numbers and a state description Θ ( a 1 , a 2 , … , a n ) {\displaystyle \Theta (a_{1},a_{2},\ldots ,a_{n})} , Θ {\displaystyle \Theta } is consistent with c → {\displaystyle {\vec {c}}} if whenever c s = c t ≠ 0 {\displaystyle c_{s}=c_{t}\neq 0} then a s ∼ Θ a t {\displaystyle a_{s}\sim _{\Theta }a_{t}} . C ( c → ) {\displaystyle C({\vec {c}})} is the number of state descriptions for a 1 , a 2 , … , a n {\displaystyle a_{1},a_{2},\ldots ,a_{n}} consistent with c → {\displaystyle {\vec {c}}} . u p ¯ ( Θ ) {\displaystyle \,u^{\overline {p}}(\Theta )} is the sum, over those c → {\displaystyle {\vec {c}}} with which Θ {\displaystyle \Theta } is consistent, of C ( c → ) − 1 ∏ i = 1 n p c i {\displaystyle C({\vec {c}})^{-1}\prod _{i=1}^{n}p_{c_{i}}} .
∙ {\displaystyle \bullet } The u p ¯ {\displaystyle u^{\overline {p}}} are the only probability functions that satisfy WIP and Li with Sx. (The language invariant family witnessing Li with Sx consists of the functions u p ¯ , J {\displaystyle u^{{\overline {p}},J}} with fixed p ¯ {\displaystyle {\overline {p}}} , where u p ¯ , J {\displaystyle u^{{\overline {p}},J}} is as u p ¯ {\displaystyle u^{\overline {p}}} but defined with language J {\displaystyle J} .)
Functions w c → {\displaystyle w^{\vec {c}}} . For a vector c → = ⟨ c 1 , c 2 , … , c 2 q ⟩ {\displaystyle {\vec {c}}=\langle c_{1},c_{2},\ldots ,c_{2^{q}}\rangle } of non-negative real numbers summing to one, w c → {\displaystyle w^{\vec {c}}} is defined via specifying its values for state descriptions as follows: w c → ( Θ ) = ∏ j = 1 2 q c j m j {\displaystyle w^{\vec {c}}(\Theta )=\prod _{j=1}^{2^{q}}c_{j}^{m_{j}}} , where m j {\displaystyle m_{j}} is the number of constants for which Θ {\displaystyle \Theta } specifies β j {\displaystyle \beta _{j}} .
∙ {\displaystyle \bullet } The w {\displaystyle w} c → {\displaystyle {\vec {c}}} are the only probability functions that satisfy Ex and IP (they are also expressible as ∘ ( w Ψ ) {\displaystyle ^{\circ }\!(w^{\Psi })} ).
Carnap continuum functions c λ . {\displaystyle c_{\lambda }.\,} For λ > 0 {\displaystyle \lambda >0} , the probability function c λ {\displaystyle c_{\lambda }} is uniquely determined by the values c λ ( β j ( a k ) ∣ Θ ) = m j + λ 2 − q n + λ {\displaystyle c_{\lambda }(\beta _{j}(a_{k})\mid \Theta )={\frac {m_{j}+\lambda 2^{-q}}{n+\lambda }}} ,
where Θ {\displaystyle \Theta } is a state description for n {\displaystyle n} constants not including a k {\displaystyle a_{k}} and m j {\displaystyle m_{j}} is the number of constants for which Θ {\displaystyle \Theta } specifies β j {\displaystyle \beta _{j}} .
Furthermore, c ∞ {\displaystyle c_{\infty }} is the probability function that assigns 2 − n q {\displaystyle 2^{-nq}} to every state description for n {\displaystyle n} constants and c 0 {\displaystyle c_{0}} is the probability function that assigns 2 − q {\displaystyle 2^{-q}} to any state description in which all constants are indistinguishable, 0 {\displaystyle 0} to any other state description.
∙ {\displaystyle \bullet } The c λ {\displaystyle c_{\lambda }} are the only probability functions that satisfy Ex and JSP.
∙ {\displaystyle \bullet } They also satisfy Li – the functions c λ J {\displaystyle c_{\lambda }^{J}} with fixed λ {\displaystyle \lambda } , where c λ J {\displaystyle c_{\lambda }^{J}} is as c λ {\displaystyle c_{\lambda }} but defined with language J {\displaystyle J} provide the unary language-invariant family members.
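A one-line sketch of the defining values, with the λ = 2, q = 1 case worked out (the numbers are illustrative):

```python
def carnap_predictive(m_j, n, q, lam):
    """Carnap continuum: probability that a new constant satisfies atom
    beta_j, given a state description giving beta_j to m_j of n constants."""
    return (m_j + lam * 2 ** -q) / (n + lam)

# One unary predicate (q = 1): after 8 of 10 constants satisfy the atom,
# lambda = 2 gives (8 + 1) / (10 + 2) = 0.75.
print(carnap_predictive(8, 10, 1, 2.0))
```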
Functions w δ {\displaystyle w^{\delta }} .
For − ( 2 q − 1 ) − 1 ≤ δ ≤ 1 {\displaystyle -(2^{q}-1)^{-1}\leq \delta \leq 1} , w δ {\displaystyle w^{\delta }} is the average of the 2 q {\displaystyle 2^{q}} functions w e i → {\displaystyle w^{\vec {e_{i}}}} where e i → {\displaystyle {\vec {e_{i}}}} has all but one coordinate equal to each other with the odd coordinate differing from them by δ {\displaystyle \delta } , so w δ = 2 − q ∑ i = 1 2 q w e i → {\displaystyle w^{\delta }=2^{-q}\sum _{i=1}^{2^{q}}w^{\vec {e_{i}}}} ,
where e i → = ⟨ γ , γ , … , γ , γ + δ , γ , … , γ ⟩ {\displaystyle {\vec {e_{i}}}=\langle \gamma ,\gamma ,\ldots ,\gamma ,\gamma +\delta ,\gamma ,\ldots ,\gamma \rangle ~} , ( γ + δ {\displaystyle \gamma +\delta } in i {\displaystyle i} th place) and γ = 2 − q ( 1 − δ ) {\displaystyle \gamma =2^{-q}(1-\delta )} .
For 0 ≤ δ ≤ 1 {\displaystyle 0\leq \delta \leq 1} , the w δ {\displaystyle w^{\delta }} are equal to u p ¯ {\displaystyle u^{\bar {p}}} for
and as such they satisfy Li.
∙ {\displaystyle \bullet } The w δ {\displaystyle w^{\delta }} are the only functions that satisfy GPIR, Ex, Ax and Reg.
∙ {\displaystyle \bullet } The w δ {\displaystyle w^{\delta }} with 0 ≤ δ < 1 {\displaystyle 0\leq \delta <1} are the only functions that satisfy Recovery, Reg and ULi with Ax.
A representation theorem for a class of probability functions provides means of expressing every probability function in the class
in terms of generic, relatively simple probability functions from the same class.
Representation Theorem for all probability functions . Every probability function w {\displaystyle w} for L {\displaystyle L} can be represented as w ( θ ) = μ ( { M ∈ T L : M ⊨ θ } ) {\displaystyle w(\theta )=\mu (\{M\in {\cal {T}}L:M\models \theta \})} , where μ {\displaystyle \mu } is a σ {\displaystyle \sigma } -additive measure on the σ {\displaystyle \sigma } -algebra of subsets of T L {\displaystyle {\cal {T}}L} generated by the sets { M ∈ T L : M ⊨ ϕ } {\displaystyle \{M\in {\cal {T}}L:M\models \phi \}} for ϕ ∈ S L {\displaystyle \phi \in SL} .
Representation Theorem for Ex (employing non-standard analysis and Loeb Integration Theory [ 2 ] ). Every probability function w {\displaystyle w} for L {\displaystyle L} satisfying Ex can be represented as w ( θ ) = ∫ A ∘ ( ω Ψ ) ( θ ) d μ ( Ψ ) {\displaystyle w(\theta )=\int _{A}{}^{\circ }\!(\omega ^{\Psi })(\theta )\,d\mu (\Psi )} ,
where A {\displaystyle A} is an internal set of state descriptions for a 1 , a 2 , … , a ν {\displaystyle a_{1},a_{2},\ldots ,a_{\nu }} (with ν {\displaystyle \nu } a fixed infinite natural number) and μ {\displaystyle \mu } is a σ {\displaystyle \sigma } -additive measure on a σ {\displaystyle \sigma } -algebra of subsets of A {\displaystyle A} .
Representation Theorem for Li with Sx . Every probability function w {\displaystyle w} for L {\displaystyle L} satisfying Li with Sx can be represented as w ( θ ) = ∫ B u p ¯ ( θ ) d μ ( p ¯ ) {\displaystyle w(\theta )=\int _{\mathbb {B} }u^{\overline {p}}(\theta )\,d\mu ({\overline {p}})} ,
where B {\displaystyle {\mathbb {B} }} is the set of sequences p ¯ = ⟨ p 0 , p 1 , p 2 , … ⟩ {\displaystyle {\overline {p}}=\langle p_{0},p_{1},p_{2},\ldots \rangle }
of non-negative reals summing to 1 {\displaystyle 1} and such that p 1 ≥ p 2 ≥ p 3 ≥ … ≥ 0 {\displaystyle p_{1}\geq p_{2}\geq p_{3}\geq \ldots \,\geq 0\,} and μ {\displaystyle \mu } is a σ {\displaystyle \sigma } -additive measure on the Borel subsets of B {\displaystyle {\mathbb {B} }} in the product topology .
de Finetti's Representation Theorem (unary) . In the unary case (where L {\displaystyle L} is a language containing q {\displaystyle q} unary predicates), the representation theorem for Ex is equivalent to:
Every probability function w {\displaystyle w} for L {\displaystyle L} satisfying Ex can be represented as w ( θ ) = ∫ D w x → ( θ ) d μ ( x → ) {\displaystyle w(\theta )=\int _{\mathbb {D} }w^{\vec {x}}(\theta )\,d\mu ({\vec {x}})} ,
where D {\displaystyle {\mathbb {D} }} is the set of vectors x → = ⟨ x 1 , x 2 , … , x 2 q ⟩ {\displaystyle {\vec {x}}=\langle x_{1},x_{2},\ldots ,x_{2^{q}}\rangle } of non-negative real numbers summing to one and μ {\displaystyle \mu } is a σ {\displaystyle \sigma } -additive measure on D {\displaystyle {\mathbb {D} }} . | https://en.wikipedia.org/wiki/Pure_inductive_logic |
Pure mathematics is the study of mathematical concepts independently of any application outside mathematics . These concepts may originate in real-world concerns, and the results obtained may later turn out to be useful for practical applications, but pure mathematicians are not primarily motivated by such applications. Instead, the appeal is attributed to the intellectual challenge and aesthetic beauty of working out the logical consequences of basic principles.
While pure mathematics has existed as an activity since at least ancient Greece , the concept was elaborated upon around the year 1900, [ 2 ] after the introduction of theories with counter-intuitive properties (such as non-Euclidean geometries and Cantor's theory of infinite sets), and the discovery of apparent paradoxes (such as continuous functions that are nowhere differentiable , and Russell's paradox ). This introduced the need to renew the concept of mathematical rigor and rewrite all mathematics accordingly, with a systematic use of axiomatic methods . This led many mathematicians to focus on mathematics for its own sake, that is, pure mathematics.
Nevertheless, almost all mathematical theories remained motivated by problems coming from the real world or from less abstract mathematical theories. Also, many mathematical theories, which had seemed to be totally pure mathematics, were eventually used in applied areas, mainly physics and computer science . A famous early example is Isaac Newton 's demonstration that his law of universal gravitation implied that planets move in orbits that are conic sections , geometrical curves that had been studied in antiquity by Apollonius . Another example is the problem of factoring large integers , which is the basis of the RSA cryptosystem , widely used to secure internet communications. [ 3 ]
It follows that, currently, the distinction between pure and applied mathematics is more a philosophical point of view or a mathematician's preference rather than a rigid subdivision of mathematics. [ 4 ]
Ancient Greek mathematicians were among the earliest to make a distinction between pure and applied mathematics. Plato helped to create the gap between "arithmetic", now called number theory , and "logistic", now called arithmetic . Plato regarded logistic (arithmetic) as appropriate for businessmen and men of war who "must learn the art of numbers or [they] will not know how to array [their] troops" and arithmetic (number theory) as appropriate for philosophers "because [they have] to arise out of the sea of change and lay hold of true being." [ 5 ] Euclid of Alexandria , when asked by one of his students what use the study of geometry was, asked his slave to give the student threepence, "since he must make gain of what he learns." [ 6 ] The Greek mathematician Apollonius of Perga was asked about the usefulness of some of his theorems in Book IV of Conics to which he proudly asserted, [ 7 ]
They are worthy of acceptance for the sake of the demonstrations themselves, in the same way as we accept many other things in mathematics for this and for no other reason.
And since many of his results were not applicable to the science or engineering of his day, Apollonius further argued in the preface of the fifth book of Conics that the subject is one of those that "...seem worthy of study for their own sake." [ 7 ]
The term itself is enshrined in the full title of the Sadleirian Chair , "Sadleirian Professor of Pure Mathematics", founded (as a professorship) in the mid-nineteenth century. The idea of a separate discipline of pure mathematics may have emerged at that time. The generation of Gauss made no sweeping distinction of the kind between pure and applied . In the following years, specialisation and professionalisation (particularly in the Weierstrass approach to mathematical analysis ) started to make a rift more apparent.
At the start of the twentieth century mathematicians took up the axiomatic method , strongly influenced by David Hilbert 's example. The logical formulation of pure mathematics suggested by Bertrand Russell in terms of a quantifier structure of propositions seemed more and more plausible, as large parts of mathematics became axiomatised and thus subject to the simple criteria of rigorous proof .
Pure mathematics, according to a view that can be ascribed to the Bourbaki group , is what is proved. "Pure mathematician" became a recognized vocation, achievable through training.
The case was made that pure mathematics is useful in engineering education : [ 8 ]
One central concept in pure mathematics is the idea of generality; pure mathematics often exhibits a trend towards increased generality. Uses and advantages of generality include the following:
Generality's impact on intuition is both dependent on the subject and a matter of personal preference or learning style. Often generality is seen as a hindrance to intuition, although it can certainly function as an aid to it, especially when it provides analogies to material for which one already has good intuition.
As a prime example of generality, the Erlangen program involved an expansion of geometry to accommodate non-Euclidean geometries as well as the field of topology , and other forms of geometry, by viewing geometry as the study of a space together with a group of transformations. The study of numbers , called algebra at the beginning undergraduate level, extends to abstract algebra at a more advanced level; and the study of functions , called calculus at the college freshman level, becomes mathematical analysis and functional analysis at a more advanced level. Each of these branches of more abstract mathematics has many sub-specialties, and there are in fact many connections between pure mathematics and applied mathematics disciplines. A steep rise in abstraction was seen in the mid-20th century.
In practice, however, these developments led to a sharp divergence from physics , particularly from 1950 to 1983. Later this was criticised, for example by Vladimir Arnold , as too much Hilbert , not enough Poincaré . The point does not yet seem to be settled, in that string theory pulls one way, while discrete mathematics pulls back towards proof as central.
Mathematicians have always had differing opinions regarding the distinction between pure and applied mathematics. One of the most famous (but perhaps misunderstood) modern examples of this debate can be found in G.H. Hardy 's 1940 essay A Mathematician's Apology .
It is widely believed that Hardy considered applied mathematics to be ugly and dull. Although it is true that Hardy preferred pure mathematics, which he often compared to painting and poetry , Hardy saw the distinction between pure and applied mathematics to be simply that applied mathematics sought to express physical truth in a mathematical framework, whereas pure mathematics expressed truths that were independent of the physical world. Hardy made a separate distinction in mathematics between what he called "real" mathematics, "which has permanent aesthetic value", and "the dull and elementary parts of mathematics" that have practical use. [ 9 ]
Hardy considered some physicists, such as Einstein and Dirac , to be among the "real" mathematicians, but at the time that he was writing his Apology , he considered general relativity and quantum mechanics to be "useless", which allowed him to hold the opinion that only "dull" mathematics was useful. Moreover, Hardy briefly admitted that—just as the application of matrix theory and group theory to physics had come unexpectedly—the time may come when some kinds of beautiful, "real" mathematics may be useful as well.
Another insightful view is offered by American mathematician Andy Magid :
I've always thought that a good model here could be drawn from ring theory. In that subject, one has the subareas of commutative ring theory and non-commutative ring theory . An uninformed observer might think that these represent a dichotomy, but in fact the latter subsumes the former: a non-commutative ring is a not-necessarily-commutative ring. If we use similar conventions, then we could refer to applied mathematics and nonapplied mathematics, where by the latter we mean not-necessarily-applied mathematics ... [emphasis added] [ 10 ]
Friedrich Engels argued in his 1878 book Anti-Dühring that "it is not at all true that in pure mathematics the mind deals only with its own creations and imaginations. The concepts of number and figure have not been invented from any source other than the world of reality". [ 11 ] : 36 He further argued that "Before one came upon the idea of deducing the form of a cylinder from the rotation of a rectangle about one of its sides, a number of real rectangles and cylinders, however imperfect in form, must have been examined. Like all other sciences, mathematics arose out of the needs of men...But, as in every department of thought, at a certain stage of development the laws, which were abstracted from the real world, become divorced from the real world, and are set up against it as something independent, as laws coming from outside, to which the world has to conform." [ 11 ] : 37 | https://en.wikipedia.org/wiki/Pure_mathematics |
In the branches of mathematical logic known as proof theory and type theory , a pure type system ( PTS ), previously known as a generalized type system ( GTS ), is a form of typed lambda calculus that allows an arbitrary number of sorts and dependencies between any of these. The framework can be seen as a generalisation of Barendregt 's lambda cube , in the sense that all corners of the cube can be represented as instances of a PTS with just two sorts. [ 1 ] [ 2 ] In fact, Barendregt (1991) framed his cube in this setting. [ 3 ] Pure type systems may obscure the distinction between types and terms and collapse the type hierarchy , as is the case with the calculus of constructions , but this is not generally the case, e.g. the simply typed lambda calculus allows only terms to depend on terms.
Pure type systems were independently introduced by Stefano Berardi (1988) and Jan Terlouw (1989). [ 1 ] [ 2 ] Barendregt discussed them at length in his subsequent papers. [ 4 ] In his PhD thesis, [ 5 ] Berardi defined a cube of constructive logics akin to the lambda cube (these specifications are non-dependent). A modification of this cube was later called the L-cube by Herman Geuvers, who in his PhD thesis extended the Curry–Howard correspondence to this setting. [ 6 ] Based on these ideas, G. Barthe and others defined classical pure type systems (CPTS) by adding a double negation operator. [ 7 ] Similarly, in 1998, Tijn Borghuis introduced modal pure type systems (MPTS). [ 8 ] Roorda has discussed the application of pure type systems to functional programming ; and Roorda and Jeuring have proposed a programming language based on pure type systems. [ 9 ]
The systems from the lambda cube are all known to be strongly normalizing . Pure type systems in general need not be; for example, System U from Girard's paradox is not. (Roughly speaking, Girard found pure type systems in which one can express the sentence "the types form a type".) Furthermore, all known examples of pure type systems that are not strongly normalizing are not even (weakly) normalizing : they contain expressions that do not have normal forms , just like the untyped lambda calculus [ citation needed ] . It is a major open problem in the field whether this is always the case, i.e. whether a (weakly) normalizing PTS always has the strong normalization property. This is known as the Barendregt–Geuvers–Klop conjecture (named after Henk Barendregt , Herman Geuvers , and Jan Willem Klop ). [ 10 ]
A pure type system is defined by a triple $(\mathcal{S},\mathcal{A},\mathcal{R})$ where $\mathcal{S}$ is the set of sorts, $\mathcal{A}\subseteq\mathcal{S}^{2}$ is the set of axioms, and $\mathcal{R}\subseteq\mathcal{S}^{3}$ is the set of rules. Typing in pure type systems is determined by the following rules, where $s$ is any sort: [ 4 ]
$$\frac{(s_{1},s_{2})\in\mathcal{A}}{\vdash s_{1}:s_{2}}\quad\text{(axiom)}$$
$$\frac{\Gamma\vdash A:s\qquad x\notin\operatorname{dom}(\Gamma)}{\Gamma,\,x:A\vdash x:A}\quad\text{(start)}$$
$$\frac{\Gamma\vdash A:B\qquad\Gamma\vdash C:s\qquad x\notin\operatorname{dom}(\Gamma)}{\Gamma,\,x:C\vdash A:B}\quad\text{(weakening)}$$
$$\frac{\Gamma\vdash A:s_{1}\qquad\Gamma,\,x:A\vdash B:s_{2}\qquad(s_{1},s_{2},s_{3})\in\mathcal{R}}{\Gamma\vdash\Pi x{:}A.\,B:s_{3}}\quad\text{(product)}$$
$$\frac{\Gamma\vdash C:\Pi x{:}A.\,B\qquad\Gamma\vdash a:A}{\Gamma\vdash C\,a:B[x:=a]}\quad\text{(application)}$$
$$\frac{\Gamma,\,x:A\vdash b:B\qquad\Gamma\vdash\Pi x{:}A.\,B:s}{\Gamma\vdash\lambda x{:}A.\,b:\Pi x{:}A.\,B}\quad\text{(abstraction)}$$
$$\frac{\Gamma\vdash A:B\qquad B=_{\beta}B'\qquad\Gamma\vdash B':s}{\Gamma\vdash A:B'}\quad\text{(conversion)}$$
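To make the triple $(\mathcal{S},\mathcal{A},\mathcal{R})$ concrete, the following minimal Python sketch encodes corners of the lambda cube as PTS instances over two sorts. The encoding and all names are illustrative assumptions, not a standard library or notation.

```python
# Sorts of the lambda cube: "*" is the sort of types; "#" stands in for the box sort.
STAR, BOX = "*", "#"

def pts(product_rules):
    """Build a PTS specification (sorts, axioms, rules) over the lambda-cube sorts.

    There is a single axiom * : #, and each pair (s1, s2) abbreviates the
    rule (s1, s2, s2), as is conventional for the lambda cube.
    """
    return {
        "sorts": {STAR, BOX},
        "axioms": {(STAR, BOX)},
        "rules": {(s1, s2, s2) for (s1, s2) in product_rules},
    }

# Corners of the cube differ only in which product rules they allow:
simply_typed = pts({(STAR, STAR)})                 # lambda-arrow: terms depend on terms
system_f     = pts({(STAR, STAR), (BOX, STAR)})    # lambda-2: terms depend on types
lambda_p     = pts({(STAR, STAR), (STAR, BOX)})    # lambda-P: types depend on terms
coc          = pts({(STAR, STAR), (BOX, STAR),
                    (STAR, BOX), (BOX, BOX)})      # lambda-C: calculus of constructions

assert (STAR, STAR, STAR) in simply_typed["rules"]
assert len(coc["rules"]) == 4
```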
The following programming languages have pure type systems: [ citation needed ] | https://en.wikipedia.org/wiki/Pure_type_system |
In fire and explosion prevention engineering, purging refers to the introduction of an inert (i.e. non-combustible) purge gas into a closed system (e.g. a container or a process vessel) to prevent the formation of an ignitable atmosphere. Purging relies on the principle that a combustible (or flammable) gas is able to undergo combustion (explode) only if mixed with air in the right proportions. The flammability limits of the gas define those proportions, i.e. the ignitable range.
Assume a closed system (e.g. a container or process vessel), initially containing air, which shall be prepared for safe introduction of a flammable gas, for instance as part of a start-up procedure. The system can be flushed with an inert gas to reduce the concentration of oxygen so that when the flammable gas is admitted, an ignitable mixture cannot form. In NFPA 56, [ 1 ] this is known as purge-into-service . In combustion engineering terms, the admission of inert gas dilutes the oxygen below the limiting oxygen concentration .
Assume a closed system containing a flammable gas, which shall be prepared for safe ingress of air, for instance as part of a shut-down procedure. The system can be flushed with an inert gas to reduce the concentration of the flammable gas so that when air is introduced, an ignitable mixture cannot form. In NFPA 56 [ 1 ] this is known as purge-out-of-service .
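For a rough sense of the quantities involved, the sketch below estimates how many vessel volumes of purge gas a dilution purge requires, under the idealized assumption of a perfectly mixed vessel. The concentration targets are illustrative assumptions only, not design values from NFPA 56 or any other standard.

```python
import math

def purge_volumes(c_initial, c_target):
    """Vessel volumes of purge gas needed for dilution purging.

    Assuming perfect mixing, the diluted species decays exponentially:
    c(n) = c_initial * exp(-n) after n vessel volumes of purge gas,
    so n = ln(c_initial / c_target).
    """
    return math.log(c_initial / c_target)

# Purge-into-service: dilute oxygen (20.9 % in air) below an illustrative
# limiting oxygen concentration, here taken as 8 %.
print(f"into-service:   {purge_volumes(20.9, 8.0):.1f} vessel volumes")   # ~1.0

# Purge-out-of-service: dilute a flammable gas from 100 % down to a small
# fraction of an assumed 5 % lower flammability limit, here 1.25 %.
print(f"out-of-service: {purge_volumes(100.0, 1.25):.1f} vessel volumes") # ~4.4
```

Even under these favourable mixing assumptions, the out-of-service purge consumes several times more inert gas, which is the asymmetry behind the two separate terms discussed next.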
Two separate terms for purging are useful because purge-out-of-service requires much larger quantities of inert agent than purge-into-service. [ 2 ] The terminology of German standards [ 3 ] refers to purge-into-service as partial inerting , and purge-out-of-service as total inerting , [ 2 ] clearly indicating the difference between the two purging practices, although the choice of the term inerting , rather than purging , can be confusing, [ 2 ] see below.
Prevention of accidental fires and explosions can also be achieved by controlling sources of ignition. Purging with an inert gas, however, provides a higher degree of safety, because the practice ensures that an ignitable mixture never forms. Purging can therefore be said to rely on primary prevention, [ 4 ] reducing the possibility of an explosion, whereas control of sources of ignition relies on secondary prevention, [ 4 ] reducing the probability of an explosion. Primary prevention is also known as inherent safety . [ 4 ]
The purge gas is inert, i.e. by definition [ 1 ] non-combustible, or more precisely, non-reactive . The most common purge gases commercially available in large quantities are nitrogen and carbon dioxide . Other inert gases, e.g. argon or helium may be used. Nitrogen and carbon dioxide are unsuitable purge gases in some applications, as these gases may undergo chemical reaction with fine dusts of certain light metals.
Because an inert purge gas is used, the purge procedure may (erroneously) be referred to as inerting in everyday language. This confusion may lead to dangerous situations. Carbon dioxide is a safe inert gas for purging. It is, however, an unsafe inert gas for inerting an already flammable atmosphere, because the discharge of pressurized carbon dioxide can generate electrostatic charges capable of igniting the vapors and causing an explosion. [ 2 ]
In game theory , the purification theorem was contributed by Nobel laureate John Harsanyi in 1973. [ 1 ] The theorem justifies a puzzling aspect of mixed strategy Nash equilibria : each player is wholly indifferent between each of the actions he puts non-zero weight on, yet he mixes them so as to make every other player also indifferent.
The purification theorem shows how such mixed strategy equilibria can emerge even if each player plays a pure strategy, so long as players have incomplete information about the payoffs of their opponents. Such strategies arise as the limit of a series of pure strategy equilibria for a disturbed game of incomplete information , in which the payoffs of each player are known to themselves but not their opponents. The idea is that the predicted mixed strategy of the original game emerges from the ever-improving approximations of a game which is not observed by the theorist who designed the original, idealized game.
The apparently mixed nature of the strategy is actually just the result of each player playing a pure strategy with threshold values that depend on the ex-ante distribution over the continuum of payoffs that a player can have. As that continuum shrinks to zero, the players' strategies converge to the predicted Nash equilibria of the original, unperturbed, complete information game.
The result is also an important aspect of modern-day inquiries in evolutionary game theory where the perturbed values are interpreted as distributions over types of players randomly paired in a population to play games.
Consider the Hawk–Dove game shown here. The game has two pure strategy equilibria (Defect, Cooperate) and (Cooperate, Defect). It also has a mixed equilibrium in which each player plays Cooperate with probability 2/3.
Suppose that each player i bears an extra cost a i from playing Cooperate, which is uniformly distributed on [− A , A ]. Players only know their own value of this cost. So this is a game of incomplete information which we can solve using Bayesian Nash equilibrium . The probability that a i ≤ a* is ( a* + A )/2 A . If player 2 Cooperates when a 2 ≤ a* , then player 1's expected utility from Cooperating is − a 1 + 3( a* + A )/2 A + 2(1 − ( a* + A )/2 A ) ; his expected utility from Defecting is 4( a* + A )/2 A . Player 1 should therefore Cooperate when a 1 ≤ 2 − 3( a* + A )/2 A . Seeking a symmetric equilibrium where both players cooperate if a i ≤ a* , we solve this for a* = 1/(2 + 3/ A ).
Now that we have worked out a* , we can calculate the probability of each player playing Cooperate as ( a* + A )/2 A = (2 + A )/(3 + 2 A ).
As A → 0, this approaches 2/3 – the same probability as in the mixed strategy in the complete information game.
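A quick numerical check of this limit, using the threshold and probability derived above (a minimal sketch; the function names are arbitrary):

```python
def a_star(A):
    """Symmetric equilibrium threshold a* = 1/(2 + 3/A) from the indifference condition."""
    return 1.0 / (2.0 + 3.0 / A)

def p_cooperate(A):
    """Probability that a player Cooperates: P(a_i <= a*) = (a* + A) / (2A)."""
    return (a_star(A) + A) / (2.0 * A)

for A in (1.0, 0.1, 0.01, 0.001):
    print(f"A = {A:<6} P(Cooperate) = {p_cooperate(A):.4f}")
# The printed probabilities approach 2/3, the mixed-equilibrium weight,
# as the payoff perturbation A shrinks to zero.
```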
Thus, we can think of the mixed strategy equilibrium as the outcome of pure strategies followed by players who have a small amount of private information about their payoffs.
Harsanyi's proof involves the strong assumption that the perturbations for each player are independent of the other players. However, further refinements to make the theorem more general have been attempted. [ 2 ] [ 3 ]
The main result of the theorem is that all the mixed strategy equilibria of a given game can be purified using the same sequence of perturbed games. However, in addition to independence of the perturbations, it relies on the set of payoffs for this sequence of games being of full measure. There are games, of a pathological nature, for which this condition fails to hold.
The main problem with these games falls into one of two categories: (1) various mixed strategies of the game are purified by different sequences of perturbed games and (2) some mixed strategies of the game involve weakly dominated strategies. No mixed strategy involving a weakly dominated strategy can be purified using this method because if there is ever any positive probability that the opponent will play a strategy for which the weakly dominated strategy is not a best response, then one will never wish to play the weakly dominated strategy. Hence, the limit fails to hold because it involves a discontinuity. [ 4 ]
Purified water is water that has been mechanically filtered or processed to remove impurities and make it suitable for use. Distilled water was, formerly, the most common form of purified water, but, in recent years, water is more frequently purified by other processes including capacitive deionization , reverse osmosis , carbon filtering , microfiltration , ultrafiltration , ultraviolet oxidation , or electrodeionization . Combinations of a number of these processes have come into use to produce ultrapure water of such high purity that its trace contaminants are measured in parts per billion (ppb) or parts per trillion (ppt).
Purified water has many uses, largely in the production of medications, in science and engineering laboratories and industries, and is produced in a range of purities. It is also used in the commercial beverage industry as the primary ingredient of any given trademarked bottling formula, in order to maintain product consistency. It can be produced on-site for immediate use or purchased in containers. Purified water in colloquial English can also refer to water that has been treated ("rendered potable") to neutralize, but not necessarily remove contaminants considered harmful to humans or animals.
Purified water is usually produced by the purification of drinking water or ground water . The impurities that may need to be removed are:
Distilled water is produced by a process of distillation . [ 1 ] Distillation involves boiling the water and then condensing the vapor into a clean container, leaving solid contaminants behind. Distillation produces very pure water. [ 2 ] A white or yellowish mineral scale is left in the distillation apparatus, which requires regular cleaning. Distilled water, like all purified water, must be stored in a sterilized container to guarantee the absence of bacteria. For many procedures, more economical alternatives are available, such as deionized water, and are used in place of distilled water.
Double-distilled water (abbreviated "ddH 2 O", "Bidest. water" or "DDW") is prepared by slow boiling the uncontaminated condensed water vapor from a prior slow boiling. Historically, it was the de facto standard for highly purified laboratory water for biochemistry and used in laboratory trace analysis until combination purification methods of water purification became widespread. [ citation needed ]
Deionized water ( DI water , DIW or de-ionized water ), often synonymous with demineralized water / DM water , [ 4 ] is water that has had almost all of its mineral ions removed, such as cations like sodium , calcium , iron , and copper , and anions such as chloride and sulfate . Deionization is a chemical process that uses specially manufactured ion-exchange resins , which exchange hydrogen and hydroxide ions for dissolved minerals, and then recombine to form water. Because most non-particulate water impurities are dissolved salts, deionization produces highly pure water that is generally similar to distilled water, with the advantage that the process is quicker and does not build up scale.
However, deionization does not significantly remove uncharged organic molecules, viruses, or bacteria, except by incidental trapping in the resin. Specially made strong base anion resins can remove Gram-negative bacteria. Deionization can be done continuously and inexpensively using electrodeionization .
Three types of deionization exist: co-current, counter-current, and mixed bed.
Co-current deionization refers to the original downflow process where both input water and regeneration chemicals enter at the top of an ion-exchange column and exit at the bottom. Co-current operating costs are comparatively higher than counter-current deionization because of the additional usage of regenerants. Because regenerant chemicals are dilute when they encounter the bottom or finishing resins in an ion-exchange column, the product quality is lower than a similarly sized counter-flow column.
The process is still used, and can be maximized with the fine-tuning of the flow of regenerants within the ion exchange column.
Counter-current deionization comes in two forms, each requiring engineered internals:
In both cases, separate distribution headers (input water, input regenerant, exit water, and exit regenerant) must be tuned to: the input water quality and flow, the time of operation between regenerations, and the desired product water analysis.
Counter-current deionization is the more attractive method of ion exchange. Chemicals (regenerants) flow in the opposite direction to the service flow. Less time for regeneration is required when compared to co-current columns. The finished product can contain as little as 0.5 parts per million of residual dissolved solids. The main advantage of counter-current deionization is the low operating cost, due to the low usage of regenerants during the regeneration process.
Mixed bed deionization is a 40/60 mixture of cation and anion resin combined in a single ion-exchange column. With proper pretreatment, product water purified from a single pass through a mixed bed ion exchange column is the purest that can be made. Most commonly, mixed bed demineralizers are used for final water polishing to clean the last few ions within water prior to use. Small mixed bed deionization units have no regeneration capability. Commercial mixed bed deionization units have elaborate internal water and regenerant distribution systems for regeneration. A control system operates pumps and valves for the regenerants of spent anions and cations resins within the ion exchange column. Each is regenerated separately, then remixed during the regeneration process. Because of the high quality of product water achieved, and because of the expense and difficulty of regeneration, mixed bed demineralizers are used only when the highest purity water is required.
Softening consists of preventing the possible precipitation of poorly soluble minerals from natural water due to changes occurring in the physico-chemical conditions (such as pCO 2 , pH , and E h ). It is applied when poorly soluble ions present in water might precipitate as insoluble salts (e.g., CaCO 3 , CaSO 4 ...), or interact with a chemical process. The water is "softened" by exchanging poorly soluble divalent cations (mainly Ca 2+ , Mg 2+ and Fe 2+ ) for the soluble Na + cation. Softened water therefore has a higher electrical conductivity than deionized water. Softened water cannot be considered truly demineralized water, but it no longer contains the cations responsible for the hardness of water and for the formation of limescale , a hard chalky deposit essentially consisting of CaCO 3 that builds up inside kettles , hot water boilers , and pipework .
In the strict sense, the term demineralization should imply removing all dissolved mineral species from water: not only dissolved salts, as obtained by simple deionization, but also neutral dissolved species such as dissolved iron hydroxides ( Fe(OH) 3 ) or dissolved silica ( Si(OH) 4 ), two solutes often present in water. In this way, demineralized water has the same electrical conductivity as deionized water, but is purer because it does not contain non-ionized substances, i.e. neutral solutes. However, demineralized water is often used interchangeably with deionized water and can also be confused with softened water, depending on the exact definition used: removing only the cations susceptible to precipitate as insoluble minerals (hence "demineralization"), or removing all the "mineral species" present in water, and thus not only dissolved ions but also neutral solute species. The term demineralized water is therefore vague, and deionized water or softened water should often be preferred in its place for clarity.
Other processes are also used to purify water, including reverse osmosis , carbon filtration , microporous filtration, ultrafiltration , ultraviolet oxidation, or electrodialysis . These are used in place of, or in addition to, the processes listed above. Processes rendering water potable but not necessarily closer to being pure H 2 O / hydroxide + hydronium ions include the use of dilute sodium hypochlorite , ozone , mixed-oxidants (electro-catalyzed H 2 O + NaCl), and iodine ; See discussion regarding potable water treatments under "Health effects" below.
Purified water is suitable for many applications, including autoclaves, hand-pieces, laboratory testing, laser cutting, and automotive use. Purification removes contaminants that may interfere with processes, or leave residues on evaporation. Although water is generally considered to be a good electrical conductor (for example, domestic electrical systems are considered particularly hazardous to people if they may be in contact with wet surfaces), pure water is a poor conductor. The conductivity of water is measured in siemens per meter (S/m). Sea water is typically 5 S/m, [ 5 ] drinking water is typically in the range of 5–50 mS/m, while highly purified water can be as low as 5.5 μS/m (0.055 μS/cm), a ratio of about 1,000,000:1,000:1.
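These figures are easier to compare once expressed in a single unit; the following sketch does the conversion (the sample values are the ones quoted in the paragraph above):

```python
# Conductivities from the paragraph above, expressed in S/m.
# Unit note: 1 S/cm = 100 S/m, so 0.055 uS/cm = 5.5e-6 S/m.
conductivity_s_per_m = {
    "sea water": 5.0,                  # ~5 S/m
    "drinking water (upper)": 50e-3,   # 50 mS/m
    "drinking water (lower)": 5e-3,    # 5 mS/m
    "highly purified water": 5.5e-6,   # 5.5 uS/m
}

purified = conductivity_s_per_m["highly purified water"]
for name, sigma in conductivity_s_per_m.items():
    print(f"{name:>24}: {sigma:.2e} S/m (~{sigma / purified:,.0f}x purified water)")
# Sea water comes out roughly a million times more conductive than highly
# purified water, matching the ~1,000,000:1,000:1 ratio quoted above.
```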
Purified water is used in the pharmaceutical industry. Water of this grade is widely used as a raw material, ingredient, and solvent in the processing, formulation, and manufacture of pharmaceutical products, active pharmaceutical ingredients (APIs) and intermediates, compendial articles, and analytical reagents. The microbiological content of the water is of importance and the water must be regularly monitored and tested to show that it remains within microbiological control. [ 6 ]
Purified water is also used in the commercial beverage industry as the primary ingredient of any given trademarked bottling formula, in order to maintain critical consistency of taste, clarity, and color, giving the consumer a reliably safe and consistent product. In the process prior to filling and sealing, individual bottles are always rinsed with deionised water to remove any particles that could cause a change in taste.
Deionised and distilled water are used in lead–acid batteries to prevent erosion of the cells, although deionised water is the better choice as more impurities are removed from the water in the creation process. [ 7 ]
Technical standards on water quality have been established by a number of professional organizations, including the American Chemical Society (ACS), ASTM International , the U.S. National Committee for Clinical Laboratory Standards (NCCLS) which is now CLSI , and the U.S. Pharmacopeia (USP) . The ASTM, NCCLS, and ISO 3696 or the International Organization for Standardization classify purified water into Grade 1–3 or Types I–IV depending on the level of purity. These organizations have similar, although not identical, parameters for highly purified water.
Note that the European Pharmacopeia uses Highly Purified Water (HPW) as a definition for water meeting the quality of Water for Injection without, however, having undergone distillation. In the laboratory context, highly purified water is used to denote various grades of water that have been "highly" purified.
Regardless of which organization's water quality norm is used, even Type I water may require further purification depending on the specific laboratory application. For example, water that is being used for molecular-biology experiments needs to be DNase or RNase -free, which requires special additional treatment or functional testing. Water for microbiology experiments needs to be completely sterile, which is usually accomplished by autoclaving. Water used to analyze trace metals may require the elimination of trace metals to a standard beyond that of the Type I water norm.
A member of the ASTM D19 (Water) Committee, Erich L. Gibbs, criticized ASTM Standard D1193, by saying "Type I water could be almost anything – water that meets some or all of the limits, part or all of the time, at the same or different points in the production process." [ 9 ]
Completely de-gassed ultrapure water has a conductivity of 1.2 × 10 −4 S/m, whereas on equilibration to the atmosphere it is 7.5 × 10 −5 S/m due to dissolved CO 2 in it. [ 10 ] The highest grades of ultrapure water should not be stored in glass or plastic containers because these container materials leach (release) contaminants at very low concentrations. Storage vessels made of silica are used for less-demanding applications and vessels of ultrapure tin are used for the highest-purity applications. It is worth noting that, although electrical conductivity only indicates the presence of ions, the majority of common contaminants found naturally in water ionize to some degree. This ionization is a good measure of the efficacy of a filtration system, and more expensive systems incorporate conductivity-based alarms to indicate when filters should be refreshed or replaced. For comparison, [ 11 ] seawater has a conductivity of perhaps 5 S/m (53 mS/cm is quoted), while normal un-purified tap water may have conductivity of 5 × 10 −3 S/m (50 μS/cm) (to within an order of magnitude), which is still about 2 or 3 orders of magnitude higher than the output from a well-functioning demineralizing or distillation mechanism, so low levels of contamination or declining performance are easily detected. [ citation needed ]
Some industrial processes, notably in the semiconductor and pharmaceutical industries, need large amounts of very pure water. In these situations, feedwater is first processed into purified water and then further processed to produce ultrapure water .
Another class of ultrapure water used in the pharmaceutical industry is called Water for Injection (WFI), typically generated by multiple distillation or by vapor-compression distillation of DI water or RO-DI water. It has a tighter bacterial limit of 10 CFU per 100 mL, instead of the 100 CFU per mL that USP allows for purified water.
Distilled or deionized water is commonly used to top up the lead–acid batteries used in cars and trucks and for other applications. The presence of foreign ions commonly found in tap water will drastically shorten the lifespan of a lead–acid battery.
Distilled or deionized water is preferable to tap water for use in automotive cooling systems.
Using deionised or distilled water in appliances that evaporate water, such as steam irons and humidifiers, can reduce the build-up of mineral scale , which shortens appliance life. Some appliance manufacturers say that deionised water is no longer necessary. [ 12 ] [ 13 ]
Purified water is used in freshwater and marine aquariums . Since it does not contain impurities such as copper and chlorine, it helps to keep fish free from diseases and avoids the build-up of algae on aquarium plants due to its lack of phosphate and silicate. Deionized water should be re-mineralized before use in aquaria since it lacks many macro- and micro-nutrients needed by plants and fish.
Water (sometimes mixed with methanol ) has been used to extend the performance of aircraft engines. In piston engines, it acts to delay the onset of engine knocking . In turbine engines, it allows more fuel flow for a given turbine temperature limit and increases mass flow. As an example, it was used on early Boeing 707 models. [ 14 ] Advanced materials and engineering have since rendered such systems obsolete for new designs; however, spray-cooling of incoming air-charge is still used to a limited extent with off-road turbo-charged engines (road-race track cars).
Deionized water is very often used as an ingredient in many cosmetics and pharmaceuticals. "Aqua" is the standard name for water in the International Nomenclature of Cosmetic Ingredients standard, which is mandatory on product labels in some countries.
Because of its high relative dielectric constant (~80), deionized water is also used (for short durations, when the resistive losses are acceptable) as a high voltage dielectric in many pulsed power applications, such as the Sandia National Laboratories Z Machine .
Distilled water can be used in PC water-cooling systems and Laser Marking Systems. The lack of impurity in the water means that the system stays clean and prevents a buildup of bacteria and algae. Also, the low conductance reduces the risk of electrical damage in the event of a leak. However, deionized water has been known to cause cracks in brass and copper fittings. [ citation needed ]
When used as a rinse after washing cars, windows, and similar applications, purified water dries without leaving spots caused by dissolved solutes.
Deionized water is used in water-fog fire-extinguishing systems used in sensitive environments, such as where high-voltage electrical and sensitive electronic equipment is used. The 'sprinkler' nozzles use much finer spray jets than other systems and operate at up to 35 MPa (350 bar; 5,000 psi) of pressure. The extremely fine mist produced takes the heat out of a fire rapidly, and the fine droplets of water are nonconducting (when deionized) and are less likely to damage sensitive equipment. Deionized water, however, is inherently slightly acidic, and contaminants (such as copper, dust, stainless and carbon steel, and many other common materials) rapidly supply ions, thus re-ionizing the water. It is not generally considered acceptable to spray water on electrical circuits that are powered, and it is generally considered undesirable to use water in electrical contexts. [ 15 ] [ 16 ] [ 17 ]
Distilled or purified water is used in humidors to prevent cigars from collecting bacteria , mold , and contaminants, as well as to prevent residue from forming on the humidifier material.
Window cleaners using water-fed pole systems also use purified water because it enables the windows to dry by themselves, leaving no stains or smears. The use of purified water from water-fed poles also removes the need for ladders and therefore ensures compliance with Work at Height legislation in the UK.
Distillation removes all minerals from water, and the membrane methods of reverse osmosis and nanofiltration remove most, or virtually all, minerals. This results in demineralized water, which has not been proven to be healthier than drinking water . The World Health Organization investigated the health effects of demineralized water in 1980, and found that demineralized water increased diuresis and the elimination of electrolytes , with decreased serum potassium concentration. Magnesium, calcium and other nutrients in water may help to protect against nutritional deficiency. Recommendations for magnesium have been put at a minimum of 10 mg/L with 20–30 mg/L optimum; for calcium a 20 mg/L minimum and a 40–80 mg/L optimum, and a total water hardness (adding magnesium and calcium) of 2–4 mmol/L . For fluoride, the concentration recommended for dental health is 0.5–1.0 mg/L, with a maximum guideline value of 1.5 mg/L to avoid dental fluorosis . [ 18 ]
Municipal water supplies often add or have trace impurities at levels that are regulated to be safe for consumption. Many of these additional impurities, such as volatile organic compounds , fluoride, and an estimated 75,000+ other chemical compounds [ 19 ] [ 20 ] [ 21 ] are not removed through conventional filtration; however, distillation and reverse osmosis eliminate nearly all of these impurities.
Purine analogues are antimetabolites that mimic the structure of metabolic purines .
Purine antimetabolites are commonly used to treat cancer by interfering with DNA replication. [ 1 ]
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Purine_analogue |
Purine metabolism refers to the metabolic pathways to synthesize and break down purines that are present in many organisms.
Purines are biologically synthesized as nucleotides and in particular as ribotides, i.e. bases attached to ribose 5-phosphate . Both adenine and guanine are derived from the nucleotide inosine monophosphate (IMP), which is the first compound in the pathway to have a completely formed purine ring system.
Inosine monophosphate is synthesized on a pre-existing ribose-phosphate through a complex pathway. The five carbon and four nitrogen atoms of the purine ring come from multiple sources. The amino acid glycine contributes all its carbon (2) and nitrogen (1) atoms, with additional nitrogen atoms from glutamine (2) and aspartic acid (1), and additional carbon atoms from formyl groups (2), which are transferred from the coenzyme tetrahydrofolate as 10-formyltetrahydrofolate , and a carbon atom from bicarbonate (1). Formyl groups build carbon-2 and carbon-8 in the purine ring system, which are the ones acting as bridges between two nitrogen atoms. The conventional atom-by-atom assignment is tabulated in the sketch below.
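As a bookkeeping aid, the following sketch tabulates that assignment using the standard purine ring numbering; the mapping reflects textbook biochemistry rather than the cited sources.

```python
# Conventional source of each purine ring atom (standard ring numbering).
ring_atom_sources = {
    "N1": "aspartate",
    "C2": "10-formyltetrahydrofolate (formyl group)",
    "N3": "glutamine (amide nitrogen)",
    "C4": "glycine",
    "C5": "glycine",
    "C6": "bicarbonate",
    "N7": "glycine",
    "C8": "10-formyltetrahydrofolate (formyl group)",
    "N9": "glutamine (amide nitrogen)",
}

# The tally matches the text: 5 carbons and 4 nitrogens in total, with
# glycine supplying 2 C + 1 N, glutamine 2 N, aspartate 1 N, formyl
# groups 2 C (C2 and C8), and bicarbonate 1 C.
carbons = [a for a in ring_atom_sources if a.startswith("C")]
nitrogens = [a for a in ring_atom_sources if a.startswith("N")]
assert len(carbons) == 5 and len(nitrogens) == 4
```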
A key regulatory step is the production of 5-phospho-α- D -ribosyl 1-pyrophosphate ( PRPP ) by ribose-phosphate diphosphokinase , which is activated by inorganic phosphate and inactivated by purine ribonucleotides. It is not the committed step to purine synthesis because PRPP is also used in pyrimidine synthesis and salvage pathways.
The first committed step is the reaction of PRPP, glutamine, and water to give 5'-phosphoribosylamine (PRA), glutamate , and pyrophosphate, catalyzed by amidophosphoribosyltransferase , which is activated by PRPP and inhibited by AMP , GMP and IMP .
In the second step, PRA , glycine, and ATP react to form GAR , ADP, and pyrophosphate, catalyzed by phosphoribosylamine—glycine ligase (GAR synthetase). Due to the chemical lability of PRA, which has a half-life of 38 seconds at pH 7.5 and 37 °C, researchers have suggested that the compound is channeled from amidophosphoribosyltransferase to GAR synthetase in vivo. [ 1 ]
The third is catalyzed by phosphoribosylglycinamide formyltransferase .
The fourth is catalyzed by phosphoribosylformylglycinamidine synthase .
The fifth is catalyzed by AIR synthetase (FGAM cyclase) .
The sixth is catalyzed by phosphoribosylaminoimidazole carboxylase .
The seventh is catalyzed by phosphoribosylaminoimidazolesuccinocarboxamide synthase .
The eighth is catalyzed by adenylosuccinate lyase .
The products AICAR and fumarate move on to two different pathways. AICAR serves as the reactant for the ninth step, while fumarate is transported to the citric acid cycle which can then skip the carbon dioxide evolution steps to produce malate. The conversion of fumarate to malate is catalyzed by fumarase. In this way, fumarate connects purine synthesis to the citric acid cycle. [ 2 ]
The ninth is catalyzed by phosphoribosylaminoimidazolecarboxamide formyltransferase .
The last step is catalyzed by Inosine monophosphate synthase .
In eukaryotes the second, third, and fifth step are catalyzed by trifunctional purine biosynthetic protein adenosine-3 , which is encoded by the GART gene.
Both ninth and tenth step are accomplished by a single protein named Bifunctional purine biosynthesis protein PURH, encoded by the ATIC gene.
Purines are metabolised by several enzymes :
The formation of 5'-phosphoribosylamine from glutamine and PRPP, catalysed by amidophosphoribosyltransferase, is the regulation point for purine synthesis. The enzyme is allosterically regulated: IMP, GMP and AMP at high concentrations bind the enzyme and exert inhibition, while PRPP in large amounts binds the enzyme and causes activation. So IMP, GMP and AMP are inhibitors, while PRPP is an activator. Between the formation of 5'-phosphoribosylamine and IMP, there is no known regulation step.
Purines from turnover of cellular nucleic acids (or from food) can also be salvaged and reused in new nucleotides.
When a defective gene causes gaps to appear in the metabolic recycling process for purines and pyrimidines, these chemicals are not metabolised properly, and adults or children can suffer from any one of twenty-eight hereditary disorders, possibly some more as yet unknown. Symptoms can include gout , anaemia, epilepsy, delayed development, deafness, compulsive self-biting, kidney failure or stones, or loss of immunity.
Imbalances in purine metabolism can arise when harmful nucleotide triphosphates are incorporated into DNA and RNA, leading to genetic disturbances and mutations and, as a result, to several types of diseases. Some of the diseases are:
Modulation of purine metabolism has pharmacotherapeutic value.
Purine synthesis inhibitors inhibit the proliferation of cells, especially leukocytes . These inhibitors include azathioprine , an immunosuppressant used in organ transplantation , autoimmune disease such as rheumatoid arthritis or inflammatory bowel disease such as Crohn's disease and ulcerative colitis .
Mycophenolate mofetil is an immunosuppressant drug used to prevent rejection in organ transplantation; it inhibits purine synthesis by blocking inosine monophosphate dehydrogenase (IMPDH). [ 5 ] Methotrexate also indirectly inhibits purine synthesis by blocking the metabolism of folic acid (it is an inhibitor of the dihydrofolate reductase ).
Allopurinol is a drug that inhibits the enzyme xanthine oxidoreductase and, thus, lowers the level of uric acid in the body. This may be useful in the treatment of gout, which is a disease caused by excess uric acid, forming crystals in joints.
In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions . Nam et al. [ 6 ] demonstrated the direct condensation of purine and pyrimidine nucleobases with ribose to give ribonucleosides in aqueous microdroplets, a key step leading to RNA formation. Also, a plausible prebiotic process for synthesizing purine ribonucleosides was presented by Becker et al. [ 7 ]
Organisms in all three domains of life, eukaryotes , bacteria and archaea , are able to carry out de novo biosynthesis of purines. This ability reflects the essentiality of purines for life. The biochemical pathway of synthesis is very similar in eukaryotes and bacterial species, but is more variable among archaeal species. [ 8 ] A nearly complete, or complete, set of genes required for purine biosynthesis was determined to be present in 58 of the 65 archaeal species studied. [ 8 ] However, seven archaeal species were also identified in which purine-encoding genes were entirely, or nearly entirely, absent. Apparently the archaeal species unable to synthesize purines are able to acquire exogenous purines for growth, [ 8 ] and are thus similar to purine mutants of eukaryotes, e.g. purine mutants of the Ascomycete fungus Neurospora crassa , [ 9 ] that also require exogenous purines for growth.
The purine nucleotide cycle is a metabolic pathway in protein metabolism requiring the amino acids aspartate and glutamate . The cycle is used to regulate the levels of adenine nucleotides , in which ammonia and fumarate are generated. [ 2 ] AMP converts into IMP and the byproduct ammonia. IMP converts to S-AMP ( adenylosuccinate ), which then converts to AMP and the byproduct fumarate. The fumarate goes on to produce ATP (energy) via oxidative phosphorylation as it enters the Krebs cycle and then the electron transport chain . Lowenstein first described this pathway and outlined its importance in processes including amino acid catabolism and regulation of flux through glycolysis and the Krebs cycle . [ 2 ] [ 3 ] [ 4 ]
AMP is produced after strenuous muscle contraction when the ATP reservoir is low (ADP > ATP) by the adenylate kinase (myokinase) reaction. [ 5 ] [ 6 ] AMP is also produced from adenine and adenosine directly; however, AMP can be produced through less direct metabolic pathways, such as de novo synthesis of IMP or through salvage pathways of guanine (a purine) and any of the purine nucleotides and nucleosides . IMP is synthesized de novo from glucose through the pentose phosphate pathway which produces ribose 5-P , which then converts to PRPP that with the amino acids glycine, glutamine, and aspartate ( see Purine metabolism ) can be further converted into IMP. [ 7 ]
The cycle comprises three enzyme-catalysed reactions. The first stage is the deamination of the purine nucleotide adenosine monophosphate (AMP) to form inosine monophosphate (IMP), catalysed by the enzyme AMP deaminase :
AMP + H 2 O → IMP + NH 3
The second stage is the formation of adenylosuccinate from IMP and the amino acid aspartate , which is coupled to the energetically favourable hydrolysis of GTP , and catalysed by the enzyme adenylosuccinate synthetase :
IMP + aspartate + GTP → adenylosuccinate + GDP + P i
Finally, adenylosuccinate is cleaved by the enzyme adenylosuccinate lyase to release fumarate and regenerate the starting material of AMP:
Adenylosuccinate → AMP + fumarate
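Summing the three reactions, AMP, IMP, and adenylosuccinate cancel out, leaving the net turnover of one pass through the cycle. A small sketch can do the bookkeeping (the species names and dictionary encoding are illustrative):

```python
from collections import Counter

# The three reactions of the cycle as (reactants, products) pairs.
reactions = [
    ({"AMP": 1, "H2O": 1}, {"IMP": 1, "NH3": 1}),              # AMP deaminase
    ({"IMP": 1, "aspartate": 1, "GTP": 1},
     {"adenylosuccinate": 1, "GDP": 1, "Pi": 1}),              # adenylosuccinate synthetase
    ({"adenylosuccinate": 1}, {"AMP": 1, "fumarate": 1}),      # adenylosuccinate lyase
]

net = Counter()
for reactants, products in reactions:
    net.subtract(reactants)   # consumed species count negatively
    net.update(products)      # produced species count positively

# Intermediates (AMP, IMP, adenylosuccinate) cancel to zero and drop out:
print({species: n for species, n in net.items() if n != 0})
# {'H2O': -1, 'NH3': 1, 'aspartate': -1, 'GTP': -1, 'GDP': 1, 'Pi': 1, 'fumarate': 1}
# Net: aspartate + GTP + H2O -> fumarate + NH3 + GDP + Pi
```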
A recent study showed that activation of HIF-1α allows cardiomyocytes to sustain mitochondrial membrane potential during anoxic stress by utilizing fumarate produced by adenylosuccinate lyase as an alternate terminal electron acceptor in place of oxygen. This mechanism should help provide protection in the ischemic heart. [ 8 ]
The purine nucleotide cycle occurs in the cytosol (intracellular fluid) of the sarcoplasm of skeletal muscle , and in the myocyte 's cytosolic compartment of the cytoplasm of cardiac and smooth muscle . The cycle occurs when ATP reservoirs run low (ADP > ATP), such as strenuous exercise, fasting or starvation. [ 5 ] [ 9 ]
Proteins catabolize into amino acids, and amino acids are precursors for purines, nucleotides and nucleosides which are used in the purine nucleotide cycle. [ 7 ] The amino acid glutamate is used to neutralize the ammonia produced when AMP is converted into IMP. Another amino acid, aspartate , is used along with IMP to produce S-AMP in the cycle. Skeletal muscle contains amino acids for use in catabolism, known as the free amino acid pool; however, inadequate carbohydrate supply and/or strenuous exercise requires protein catabolism to sustain the free amino acids. [ 9 ]
When the phosphagen system (ATP-PCr) has been depleted of phosphocreatine (creatine phosphate), the purine nucleotide cycle also helps to sustain the myokinase reaction by reducing accumulation of AMP produced after muscle contraction in the below reaction. [ 6 ]
During muscle contraction:
2 ADP → ATP + AMP (adenylate kinase, forward direction)
Muscle at rest:
ATP + AMP → 2 ADP (adenylate kinase, reverse direction)
AMP can dephosphorylate to adenosine and diffuse out of the cell; the purine nucleotide cycle may therefore also reduce the loss of adenosine from the cell since nucleosides permeate cell membranes, whereas nucleotides do not. [ 6 ]
Fumarate , produced from the purine nucleotide cycle, is an intermediate of TCA cycle and enters the mitochondria by converting into malate and utilizing the malate shuttle where it is converted into oxaloacetic acid (OAA). During exercise, OAA either enters into TCA cycle or converts into aspartate in the mitochondria. [ 10 ]
As the purine nucleotide cycle produces ammonia (see below in ammonia synthesis) , skeletal muscle needs to synthesize glutamate in a way that does not further increase ammonia, and as such the use of glutaminase to produce glutamate from glutamine would not be ideal. Also, plasma glutamine (released from the kidneys) requires active transport into the muscle cell (consuming ATP). [ 11 ] Consequently, during exercise when the ATP reservoir is low (ADP>ATP), glutamate is produced from branched-chain amino acids (BCAAs) and α-ketoglutarate, as well as from alanine and α-ketoglutarate. [ 12 ] Glutamate is then used to produce aspartate. The aspartate enters the purine nucleotide cycle, where it is used to convert IMP into S-AMP. [ 10 ] [ 13 ]
When skeletal muscle is at rest (ADP<ATP), the aspartate is no longer needed for the purine nucleotide cycle and can therefore be used with α-ketoglutarate to produce glutamate and oxaloacetic acid (the above reaction reversed).
α-Ketoglutarate + Aspartate ⇌ Oxaloacetic acid + Glutamate (catalyzed by aspartate aminotransferase )
During exercise when the ATP reservoir is low (ADP>ATP), the purine nucleotide cycle produces ammonia ( NH 3 ) when it converts AMP into IMP. (With the exception of AMP deaminase deficiency , where ammonia is produced during exercise when adenosine, from AMP, is converted into inosine). During rest (ADP<ATP), ammonia is produced from the conversion of adenosine into inosine by adenosine deaminase.
Ammonia is toxic, disrupts cell function, and permeates cell membranes. Ammonia becomes ammonium ( NH + 4 ) depending on the pH of the cell or plasma. Ammonium is relatively non-toxic and does not readily permeate cell membranes. [ 14 ]
NH 3 + H + ⇌ NH + 4
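The position of this equilibrium follows the Henderson–Hasselbalch relation; the sketch below assumes the textbook pKa of about 9.25 for ammonium, a value not taken from the cited sources.

```python
def fraction_nh3(pH, pKa=9.25):
    """Fraction of total ammonia present as NH3 (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

for pH in (6.8, 7.4):  # roughly intracellular and plasma pH
    print(f"pH {pH}: {100 * fraction_nh3(pH):.1f}% as NH3")
# At physiological pH well under 2% of the total is the membrane-permeant
# NH3; nearly all of it is the relatively non-toxic ammonium ion.
```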
Ammonia ( NH 3 ) diffuses into the blood, circulating to the liver to be neutralized by the urea cycle . (N.b. urea is not the same as uric acid , though both are end products of the purine nucleotide cycle, from ammonia and nucleotides respectively.) When the skeletal muscles are at rest (ADP<ATP), ammonia ( NH 3 ) combines with glutamate to produce glutamine , which is an energy-consuming step, and the glutamine enters the blood. [ 15 ] [ 11 ]
Glutamate + NH 3 + ATP → Glutamine + ADP + P i (catalyzed by glutamine synthetase in resting skeletal muscle)
Excess glutamine is used by the proximal tubule in the kidneys for ammoniagenesis, which may counteract any metabolic acidosis from anaerobic skeletal muscle activity. [ 15 ] In the kidneys, glutamine is deaminated twice to form glutamate and then α-ketoglutarate . The released NH 3 molecules neutralise the organic acids ( lactic acid and ketone bodies ) produced in the muscles.
Glutamine + H 2 O → Glutamate + NH + 4 (catalyzed by glutaminase in the kidneys)
Some metabolic myopathies involve the under- or over-utilization of the purine nucleotide cycle. Metabolic myopathies cause a low ATP reservoir in muscle cells (ADP > ATP), resulting in exercise-induced excessive AMP buildup in muscle, and subsequent exercise-induced hyperuricemia (myogenic hyperuricemia) through conversion of excessive AMP into uric acid by way of either AMP → adenosine or AMP → IMP.
During strenuous exercise, AMP is created through the use of the adenylate kinase (myokinase) reaction after the phosphagen system has been depleted of creatine phosphate and not enough ATP is yet being produced by other pathways (see above reaction in ' Occurrence ' section) . In those affected by metabolic myopathies, exercise that would not normally be considered strenuous for healthy people is nevertheless strenuous for them, due to their low ATP reservoir in muscle cells. This results in regular use of the myokinase reaction for normal, everyday activities.
Besides the myokinase reaction, a high ATP consumption and low ATP reservoir also increases protein catabolism and salvage of IMP, which results in increased AMP and IMP. These two nucleotides can then enter the purine nucleotide cycle to produce fumarate which will then produce ATP by oxidative phosphorylation. If the purine nucleotide cycle is blocked (such as AMP deaminase deficiency) or if exercise is stopped and increased fumarate production is no longer needed, then the excess nucleotides will be converted into uric acid.
AMP deaminase deficiency (formally known as myoadenylate deaminase deficiency or MADD) is a metabolic myopathy which results in excessive AMP buildup brought on by exercise. AMP deaminase is needed to convert AMP into IMP in the purine nucleotide cycle. Without this enzyme, the excessive AMP buildup is initially due to the adenylate kinase (myokinase) reaction which occurs after a muscle contraction. [ 16 ] However, AMP is also used to allosterically regulate the enzyme myophosphorylase ( see Glycogen phosphorylase § Regulation ), so the initial buildup of AMP triggers the enzyme myophosphorylase to release muscle glycogen into glucose-1-P (glycogen→glucose-1-P), [ 17 ] which eventually depletes the muscle glycogen, which in turn triggers protein metabolism, which then produces even more AMP. In AMP deaminase deficiency, excess adenosine is converted into uric acid via the degradation chain adenosine → inosine → hypoxanthine → xanthine → uric acid.
Myogenic hyperuricemia , as a result of the purine nucleotide cycle running when ATP reservoirs in muscle cells are low (ADP > ATP), is a common pathophysiologic feature of glycogenoses such as GSD-III , GSD-V and GSD-VII , as they are metabolic myopathies which impair the ability of ATP (energy) production within muscle cells. In these metabolic myopathies, myogenic hyperuricemia is exercise-induced; inosine, hypoxanthine and uric acid increase in plasma after exercise and decrease over hours with rest. [ 18 ] Excess AMP (adenosine monophosphate) is converted into uric acid . [ 18 ]
Hyperammonemia is also seen post-exercise in McArdle disease (GSD-V) and phosphoglucomutase deficiency (PGM1-CDG, formerly GSD-XIV), due to the purine nucleotide cycle running when the ATP reservoir is low due to the glycolytic block. [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ]
AMP + H 2 O + H + → IMP + NH + 4 | https://en.wikipedia.org/wiki/Purine_nucleotide_cycle |