The protomap is a primordial molecular map of the functional areas of the mammalian cerebral cortex during early embryonic development, at a stage when neural stem cells are still the dominant cell type. [1] The protomap is a feature of the ventricular zone, which contains the principal cortical progenitor cells, known as radial glial cells. [2] [3] Through a process called 'cortical patterning', the protomap is patterned by a system of signaling centers in the embryo, which provide positional information and cell fate instructions. [4] [5] [6] These early genetic instructions set in motion a development and maturation process that gives rise to the mature functional areas of the cortex, for example the visual, somatosensory, and motor areas.
The term protomap was coined by Pasko Rakic. [1] The protomap hypothesis was opposed by the protocortex hypothesis, which proposes that cortical proto-areas initially have the same potential [7] [8] and that regionalization is in large part controlled by external influences, such as axonal inputs from the thalamus to the cortex. [9] However, a series of papers in 2000 and 2001 provided strong evidence against the protocortex hypothesis, and the protomap hypothesis has been well accepted since then. [5] [10] [11] The protomap hypothesis, together with the related radial unit hypothesis, forms our core understanding of the embryonic development of the cerebral cortex. Once the basic structure is present and cortical neurons have migrated to their final destinations, many other processes contribute to the maturation of functional cortical circuits. [12]
| https://en.wikipedia.org/wiki/Protomap_(neuroscience) |
In structural biology , a protomer is the structural unit of an oligomeric protein . It is the smallest unit composed of at least one protein chain. The protomers associate to form a larger oligomer of two or more copies of this unit. Protomers usually arrange in cyclic symmetry to form closed point group symmetries .
The term was introduced by Chetverin [1] to make nomenclature for the Na/K-ATPase enzyme unambiguous. This enzyme is composed of two subunits: a large catalytic α subunit and a smaller glycoprotein β subunit (plus a proteolipid called the γ-subunit). At the time it was unclear how many of each work together. In addition, when people spoke of a dimer, it was unclear whether they meant αβ or (αβ)2. Chetverin suggested calling αβ a protomer and (αβ)2 a diprotomer. Thus, in Chetverin's work the term protomer was applied only to a hetero-oligomer, and it was subsequently used mainly in the context of hetero-oligomers. Following this usage, a protomer consists of at least two different protein chains. In the current structural biology literature, the term is commonly also applied to the smallest unit of homo-oligomers, avoiding the term "monomer".
In chemistry, a so-called protomer is a molecule that displays tautomerism due to the position of a proton. [2] [3]
Hemoglobin is a heterotetramer consisting of four subunits (two α and two β). Structurally and functionally, however, hemoglobin is better described as (αβ)2, so it is called a dimer of two αβ-protomers, that is, a diprotomer. [4]
Aspartate carbamoyltransferase has an α6β6 subunit composition. The six αβ-protomers are arranged in D3 symmetry.
Viral capsids are usually composed of protomers.
HIV-1 protease forms a homodimer consisting of two protomers.
Examples in chemistry include tyrosine and 4-aminobenzoic acid. The former may be deprotonated to form the carboxylate and phenoxide anions, [5] and the latter may be protonated at the amino or carboxyl group. [6]
| https://en.wikipedia.org/wiki/Protomer |
Proton-coupled electron transfer (PCET) is a chemical reaction that involves the transfer of electrons and protons from one atom to another. The term was originally coined for concerted single-proton, single-electron processes, [1] but the definition has relaxed to include many related processes. Reactions that involve the concerted shift of a single electron and a single proton are often called concerted proton-electron transfer (CPET). [2] [3] [4] [5]
In PCET, the proton and the electron (i) start from different orbitals and (ii) are transferred to different atomic orbitals, in a single concerted elementary step. CPET contrasts with step-wise mechanisms in which the electron and proton are transferred sequentially. [6]
PCET is thought to be pervasive. Important examples include water oxidation in photosynthesis , nitrogen fixation , oxygen reduction reaction , and the function of hydrogenases . These processes are relevant to respiration .
Reactions of relatively simple coordination complexes have been examined as tests of PCET.
Although it is relatively simple to demonstrate that the electron and proton begin and end in different orbitals, it is more difficult to prove that they do not move sequentially. The main evidence that PCET exists is that a number of reactions occur faster than expected for the sequential pathways. In the initial electron transfer (ET) mechanism, the initial redox event has a minimum thermodynamic barrier associated with the first step. Similarly, the initial proton transfer (PT) mechanism has a minimum barrier associated with the proton's initial pKa. Variations on these minimum barriers are also considered. The important finding is that a number of reactions proceed at rates greater than these minimum barriers would permit, which points to a third mechanism lower in energy; concerted PCET has been offered as this third mechanism. This assertion is also supported by the observation of unusually large kinetic isotope effects (KIEs).
A typical method for establishing a PCET pathway is to show that the individual ET and PT pathways operate at higher activation energies than the concerted pathway. [2]
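These competing pathways are often summarized in a "square scheme". The sketch below is a generic illustration, with X–H standing for an arbitrary proton/electron donor rather than a species from any cited study:

$$\begin{array}{ccc} \mathrm{X{-}H} & \xrightarrow{\ -e^-\ (\mathrm{ET})\ } & \mathrm{X{-}H}^{\bullet +} \\ {\scriptstyle -\mathrm{H}^+\ (\mathrm{PT})}\big\downarrow & \overset{\mathrm{CPET}}{\searrow} & \big\downarrow{\scriptstyle -\mathrm{H}^+} \\ \mathrm{X}^- & \xrightarrow{\ -e^-\ } & \mathrm{X}^{\bullet} \end{array}$$

The sequential routes run along the edges of the square through the high-energy intermediates X–H•+ or X−, while CPET crosses the diagonal without forming either intermediate.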
SOD2 uses cyclic proton-coupled electron transfer reactions to convert superoxide (O 2 •- ) into either oxygen (O 2 ) or hydrogen peroxide (H 2 O 2 ), depending on the oxidation state of the manganese metal and the protonation status of the active site.
Mn 3+ + O 2 •- ↔ Mn 2+ + O 2
Mn 2+ + O 2 •- + 2H + ↔ Mn 3+ + H 2 O 2
The protons of the active site have been directly visualized, revealing that SOD2 utilizes proton transfers between a glutamine residue and a Mn-bound solvent molecule in concert with its electron transfers. [8] During the Mn3+ to Mn2+ redox reaction, Gln143 donates an amide proton to the hydroxide bound to the Mn and forms an amide anion. The amide anion is stabilized by short-strong hydrogen bonds (SSHBs) with the Mn-bound solvent and the nearby Trp123 residue. For the Mn2+ to Mn3+ redox reaction, the proton is donated back to the glutamine to reform the neutral amide state. The fast and efficient PCET catalysis of SOD2 is explained by the use of a proton that is always present and never lost to bulk solvent.
Hydrogen atom transfer (HAT) is distinct from PCET. In HAT, the proton and electron start in the same orbitals and move together to the final orbital. HAT is recognized as a radical pathway, although the stoichiometry is similar to that for PCET. | https://en.wikipedia.org/wiki/Proton-coupled_electron_transfer |
A proton-exchange membrane , or polymer-electrolyte membrane ( PEM ), is a semipermeable membrane generally made from ionomers and designed to conduct protons while acting as an electronic insulator and reactant barrier, e.g. to oxygen and hydrogen gas. [ 1 ] This is their essential function when incorporated into a membrane electrode assembly (MEA) of a proton-exchange membrane fuel cell or of a proton-exchange membrane electrolyser : separation of reactants and transport of protons while blocking a direct electronic pathway through the membrane.
PEMs can be made from either pure polymer membranes or from composite membranes, where other materials are embedded in a polymer matrix. One of the most common and commercially available PEM materials is the fluoropolymer (PFSA) [ 2 ] Nafion , a DuPont product. [ 3 ] While Nafion is an ionomer with a perfluorinated backbone like Teflon , [ 4 ] there are many other structural motifs used to make ionomers for proton-exchange membranes. Many use polyaromatic polymers, while others use partially fluorinated polymers.
Proton-exchange membranes are primarily characterized by proton conductivity (σ), methanol permeability ( P ), and thermal stability. [ 5 ]
PEM fuel cells use a solid polymer membrane (a thin plastic film) which is permeable to protons when it is saturated with water but does not conduct electrons.
Early proton-exchange membrane technology was developed in the early 1960s by Leonard Niedrach and Thomas Grubb, chemists working for the General Electric Company . [ 6 ] Significant government resources were devoted to the study and development of these membranes for use in NASA's Project Gemini spaceflight program. [ 7 ] A number of technical problems led NASA to forego the use of proton-exchange membrane fuel cells in favor of batteries as a lower capacity but more reliable alternative for Gemini missions 1–4. [ 8 ] An improved generation of General Electric's PEM fuel cell was used in all subsequent Gemini missions, but was abandoned for the subsequent Apollo missions. [ 9 ] The fluorinated ionomer Nafion , which is today the most widely utilized proton-exchange membrane material, was developed by DuPont plastics chemist Walther Grot. Grot also demonstrated its usefulness as an electrochemical separator membrane. [ 10 ]
In 2014, Andre Geim of the University of Manchester published initial results on atom thick monolayers of graphene and boron nitride which allowed only protons to pass through the material, making them a potential replacement for fluorinated ionomers as a PEM material. [ 11 ] [ 12 ]
PEMFCs have some advantages over other types of fuel cells such as solid oxide fuel cells (SOFC). PEMFCs operate at a lower temperature, are lighter and more compact, which makes them ideal for applications such as cars.
Disadvantages include an operating temperature (~80 °C) that is too low for cogeneration, unlike in SOFCs, and the requirement that the electrolyte be water-saturated. Nevertheless, some fuel-cell cars, including the Toyota Mirai, operate without humidifiers, relying on rapid water generation and the high rate of back-diffusion through thin membranes to keep the membrane, as well as the ionomer in the catalyst layers, hydrated.
High-temperature PEMFCs operate between 100 °C and 200 °C, potentially offering benefits in electrode kinetics and heat management, and better tolerance to fuel impurities, particularly CO in reformate. These improvements potentially could lead to higher overall system efficiencies. However, these gains have yet to be realized, as the gold-standard perfluorinated sulfonic acid (PFSA) membranes lose function rapidly at 100 °C and above if hydration drops below ~100%, and begin to creep in this temperature range, resulting in localized thinning and overall lower system lifetimes. As a result, new anhydrous proton conductors, such as protic organic ionic plastic crystals (POIPCs) and protic ionic liquids , are actively studied for the development of suitable PEMs. [ 13 ] [ 14 ] [ 15 ]
The fuel for the PEMFC is hydrogen, and the charge carrier is the hydrogen ion (proton). At the anode, the hydrogen molecule is split into hydrogen ions (protons) and electrons. The hydrogen ions permeate across the electrolyte to the cathode, while the electrons flow through an external circuit and produce electric power. Oxygen, usually in the form of air, is supplied to the cathode and combines with the electrons and the hydrogen ions to produce water. The reactions at the electrodes are as follows:
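Anode (hydrogen oxidation): H2 → 2 H+ + 2 e−
Cathode (oxygen reduction): 1/2 O2 + 2 H+ + 2 e− → H2O
Overall: H2 + 1/2 O2 → H2O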
The overall reaction is exothermic, with a theoretical cell potential of +1.23 V.
The primary application of proton-exchange membranes is in PEM fuel cells. These fuel cells have a wide variety of commercial and military applications including in the aerospace, automotive, and energy industries. [ 9 ] [ 16 ]
Early PEM fuel cell applications were focused within the aerospace industry. The then-higher capacity of fuel cells compared to batteries made them ideal as NASA's Project Gemini began to target longer duration space missions than had previously been attempted. [ 9 ]
As of 2008 [update] , the automotive industry as well as personal and public power generation are the largest markets for proton-exchange membrane fuel cells. [ 17 ] PEM fuel cells are popular in automotive applications due to their relatively low operating temperature and their ability to start up quickly even in below-freezing conditions. [ 18 ] As of March 2019 there were 6,558 fuel cell vehicles on the road in the United States, with the Toyota Mirai being the most popular model. [ 19 ] PEM fuel cells have seen successful implementation in other forms of heavy machinery as well, with Ballard Power Systems supplying forklifts based on the technology. [ 20 ] The primary challenge facing automotive PEM technology is the safe and efficient storage of hydrogen, currently an area of high research activity. [ 18 ]
Polymer electrolyte membrane electrolysis is a technique by which proton-exchange membranes are used to decompose water into hydrogen and oxygen gas. [ 21 ] The proton-exchange membrane allows for the separation of produced hydrogen from oxygen, allowing either product to be exploited as needed. This process has been used variously to generate hydrogen fuel and oxygen for life-support systems in vessels such as US and Royal Navy submarines. [ 9 ] A recent example is the construction of a 20 MW Air Liquide PEM electrolyzer plant in Québec. [ 22 ] Similar PEM-based devices are available for the industrial production of ozone. [ 23 ] | https://en.wikipedia.org/wiki/Proton-exchange_membrane |
In physics , the proton-to-electron mass ratio (symbol μ or β ) is the rest mass of the proton (a baryon found in atoms ) divided by that of the electron (a lepton found in atoms), a dimensionless quantity , namely:
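μ = m_p/m_e = 1836.152673426(32)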
The number in parentheses is the measurement uncertainty on the last two digits, corresponding to a relative standard uncertainty of 1.7 × 10 −11 . [ 1 ]
μ is an important fundamental physical constant because the proton mass is set largely by the strong force, while the electron mass is set by the electroweak sector; the ratio therefore probes the relative strength of these interactions, and molecular transition frequencies depend sensitively on it, which is what makes the astronomical searches described below possible.
Astrophysicists have tried to find evidence that μ has changed over the history of the universe. (The same question has also been asked of the fine-structure constant .) One interesting cause of such change would be change over time in the strength of the strong force .
Astronomical searches for time-varying μ have typically examined the Lyman series and Werner transitions of molecular hydrogen which, given a sufficiently large redshift , occur in the optical region and so can be observed with ground-based spectrographs .
If μ were to change, then the wavelength λ i of each rest-frame transition would shift; the change can be parameterised as:
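$$\frac{\Delta \lambda_i}{\lambda_i} = K_i \, \frac{\Delta \mu}{\mu}$$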
where Δ μ / μ is the proportional change in μ and K i is a constant which must be calculated within a theoretical (or semi-empirical) framework.
Reinhold et al. (2006) reported a potential 4 standard deviation variation in μ by analysing the molecular hydrogen absorption spectra of the quasars Q0405-443 and Q0347-373. They found that Δ μ / μ = (2.4 ± 0.6) × 10 −5. King et al. (2008) reanalysed the spectral data of Reinhold et al. and collected new data on another quasar, Q0528-250. They estimated that Δ μ / μ = (2.6 ± 3.0) × 10 −6, consistent with no variation and in contrast to the estimate of Reinhold et al. (2006).
Murphy et al. (2008) used the inversion transition of ammonia to conclude that | Δ μ / μ | < 1.8 × 10 −6 at redshift z = 0.68 . Kanekar (2011) used deeper observations of the inversion transitions of ammonia in the same system at z = 0.68 towards 0218+357 to obtain | Δ μ / μ | < 3 × 10 −7 .
Bagdonaite et al. (2013) used methanol transitions in the spiral lensing galaxy PKS 1830-211 to find ∆ μ / μ = (0.0 ± 1.0) × 10 −7 at z = 0.89 . [ 2 ] [ 3 ] Kanekar et al. (2015) used near-simultaneous observations of multiple methanol transitions in the same lens, to find ∆ μ / μ < 1.1 × 10 −7 at z = 0.89 . Using three methanol lines with similar frequencies to reduce systematic effects, Kanekar et al. (2015) obtained ∆ μ / μ < 4 × 10 −7 .
Note that any comparison between values of Δ μ / μ at substantially different redshifts will need a particular model to govern the evolution of Δ μ / μ . That is, results consistent with zero change at lower redshifts do not rule out significant change at higher redshifts. | https://en.wikipedia.org/wiki/Proton-to-electron_mass_ratio |
Proton-transfer-reaction mass spectrometry ( PTR-MS ) is an analytical chemistry technique that uses gas phase hydronium reagent ions which are produced in an ion source . [ 1 ] PTR-MS is used for online monitoring of volatile organic compounds (VOCs) in ambient air and was developed in 1995 by scientists at the Institut für Ionenphysik at the Leopold-Franzens University in Innsbruck , Austria. [ 2 ] A PTR-MS instrument consists of an ion source that is directly connected to a drift tube (in contrast to SIFT-MS no mass filter is interconnected) and an analyzing system ( quadrupole mass analyzer or time-of-flight mass spectrometer ). Commercially available PTR-MS instruments have a response time of about 100 ms and reach a detection limit in the single digit pptv or even ppqv region. Established fields of application are environmental research, food and flavor science, biological research, medicine, security, cleanroom monitoring, etc. [ 1 ]
With H3O+ as the reagent ion, the proton transfer process is (with R being the trace component):
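H3O+ + R → RH+ + H2O (1)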
Reaction (1) is only possible if energetically allowed, i.e. if the proton affinity of R is higher than the proton affinity of H2O (691 kJ/mol [3]). As most components of ambient air possess a lower proton affinity than H2O (e.g. N2, O2, Ar, CO2, etc.), the H3O+ ions react only with VOC trace components, and the air itself acts as a buffer gas. Moreover, due to the low concentrations of trace components, one can assume that the total number of H3O+ ions remains nearly unchanged, which leads to the equation [4]
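[RH+] ≈ [H3O+]0 · [R] · k · t (2)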
In equation (2), [RH+] is the density of product ions, [H3O+]0 is the density of reagent ions in the absence of reactant molecules in the buffer gas, k is the reaction rate constant, and t is the average time the ions need to pass the reaction region. With a PTR-MS instrument the number of product and of reagent ions can be measured, the reaction rate constant can be found in the literature for most substances, [5] and the reaction time can be derived from the set instrument parameters. Therefore, the absolute concentration of a trace constituent [R] can be calculated easily, without the need for calibration or gas standards. Furthermore, it becomes evident that the overall sensitivity of a PTR-MS instrument depends on the reagent ion yield. Fig. 1 gives an overview of several reagent ion yields published in peer-reviewed journals during the last decades and the corresponding sensitivities.
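This calibration-free quantification can be condensed into a few lines of code. The following Python sketch inverts equation (2) for the trace-gas density; all numbers and signal values are illustrative assumptions, not data from a specific instrument:

```python
# Calibration-free PTR-MS quantification based on equation (2):
#     [RH+] ~= [H3O+]_0 * [R] * k * t
# All input values below are illustrative assumptions.

k = 2.0e-9       # reaction rate constant, cm^3 s^-1 (typical literature magnitude)
t = 1.0e-4       # ion residence (drift) time, s, derived from drift-tube settings
n_air = 5.0e16   # air number density in the drift tube, cm^-3 (~2 mbar, ~300 K)

i_h3o = 1.0e7    # measured H3O+ signal, counts per second (assumed)
i_rh = 250.0     # measured RH+ signal, counts per second (assumed)

# Invert equation (2) to obtain the trace-gas number density [R].
n_r = i_rh / (i_h3o * k * t)   # cm^-3

# Express the result as a volume mixing ratio in ppbv.
vmr_ppbv = n_r / n_air * 1e9
print(f"[R] = {n_r:.2e} cm^-3 = {vmr_ppbv:.1f} ppbv")
```

In practice the measured count rates would additionally be corrected for the mass-dependent transmission of the analyzer.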
In commercial PTR-MS instruments water vapor is ionized in a hollow cathode discharge:
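H2O + e− → H2O+ + 2 e−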
After the discharge a short drift tube is used to form very pure (>99.5% [ 4 ] ) H 3 O + via ion-molecule reactions:
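H2O+ + H2O → H3O+ + OH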
Due to the high purity of the reagent ions a mass filter between the ion source and the reaction drift tube is not necessary and H 3 O + can be injected directly. The absence of this mass filter in turn greatly reduces losses of reagent ions and leads eventually to an outstandingly low detection limit of the whole instrument.
In the reaction drift tube, a vacuum pump continuously draws through the air containing the VOCs to be analyzed. At the end of the drift tube the protonated molecules are mass analyzed (quadrupole mass analyzer or time-of-flight mass spectrometer) and detected.
As an alternative to H3O+, the use of NH4+ reagent ions was suggested in early PTR-MS publications. [4] Ammonia has a proton affinity of 853.6 kJ/mol. [6] For compounds that have a higher proton affinity than ammonia, proton transfer can take place similarly to the process described above for hydronium:
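NH4+ + R → RH+ + NH3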
Additionally, for compounds with higher, but also for some with lower, proton affinities than ammonia, a clustering reaction can be observed:
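NH4+ + R → (R)NH4+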
where the cluster needs a third body to get collisionally stabilized.
The main advantage of using NH 4 + reagent ions is that fragmentation of analytes upon chemical ionization is strongly suppressed, leading to straightforward mass spectra even for complex mixtures.
The reason why, during the first 20 years after the invention of PTR-MS, NH4+ reagent ions were used in only a very limited number of studies is most probably that NH4+ production required toxic and corrosive ammonia as a source gas. This led to problems with handling the instrument and its exhaust gas, as well as to increased wear of vacuum components. In 2017 a patent application was submitted in which the inventors introduced a novel method of NH4+ production without the need for any form of ammonia. [7] In this method N2 and water vapor are introduced into the hollow cathode ion source, and by adjusting electric fields and pressures NH4+ can be produced at the same or even higher purity levels than H3O+. It is expected that this invention, which eliminates the problems previously connected with the use of NH4+, will lead to widespread use of NH4+ reagent ions in the near future. [8]
Advantages include:
- Low fragmentation: only a small amount of energy is transferred during the ionization process (compared to e.g. electron ionization), so fragmentation is suppressed and the obtained mass spectra are easily interpretable.
- No sample preparation: VOC-containing air and the headspace of liquids can be analyzed directly.
- Real-time measurements: with a typical response time of 100 ms, VOCs can be monitored on-line.
- Real-time quantification: absolute concentrations are obtained directly, without prior calibration measurements.
- Compact and robust setup: due to the simple design and the low number of parts needed for a PTR-MS instrument, it can be built into space-saving and even mobile housings.
- Easy operation: only electric power and a small amount of distilled water are needed; unlike other techniques, no gas cylinders are required for buffer gas or calibration standards.
One disadvantage is that not all molecules are detectable: because only molecules with a proton affinity higher than that of water can be detected, proton transfer from H3O+ is not suitable for all fields of application. Therefore, in 2009 the first PTR-MS instruments were presented that are capable of switching between H3O+ and O2+ (and NO+) as reagent ions. [9] This extends the range of detectable substances to important compounds such as ethylene, acetylene, and most halocarbons. Furthermore, particularly with NO+, it is possible to separate and independently quantify some isomers. [9] In 2012 a PTR-MS instrument was introduced which extends the selectable reagent ions to Kr+ and Xe+; [10] this should allow for the detection of nearly all possible substances (up to the ionization energy of krypton, 14 eV [11]). Although the ionization method for these additional reagent ions is charge exchange rather than proton transfer, the instruments can still be considered "classic" PTR-MS instruments, i.e. with no mass filter between the ion source and the drift tube and only minor modifications to the ion source and vacuum design.
The maximum measurable concentration is limited. Equation (2) is based on the assumption that the decrease of reagent ions is negligible, therefore the total concentration of VOCs in air must not exceed about 10 ppmv . Otherwise the instrument's response will not be linear anymore and the concentration calculation will be incorrect. This limitation can be overcome easily by diluting the sample with a well-defined amount of pure air.
As is the case for most analytical instruments, in PTR-MS there has always been a quest for sensitivity improvement and for lowering the detection limit. However, until 2012 these improvements were limited to optimizations of the conventional setup, i.e. ion source, DC drift tube, transfer lens system, and mass spectrometer (compare above). The reason for this conservative approach was that the addition of any RF ion-focusing device negatively affects the well-defined PTR-MS ion chemistry, which makes quantification complicated and considerably limits the comparability of measurement results obtained with different instruments. Only in 2016 was a patent application providing a solution to this problem submitted. [12]
Ion funnels are RF devices which have been used for decades to focus ion currents into narrow beams. In PTR-MS they were introduced in 2012 by Barber et al., [13] who presented a PTR-MS setup with a PTR reaction region incorporating an ion funnel. Although the focusing properties of the ion funnel improved the sensitivity of the setup by a factor of >200 for some compounds (compared to operating in DC-only mode, i.e. with the ion funnel turned off), the sensitivities for other compounds were improved only by a factor of <10. [13] Because of this highly compound-dependent instrumental response, one of the main advantages of PTR-MS, namely that concentration values can be calculated directly, is lost, and a calibration measurement is needed for each analyte of interest. Furthermore, unusual fragmentation of analytes has been observed with this approach, [14] which complicates interpretation of measurement results and comparison between different types of instruments even more. A different concept was introduced by the company IONICON Analytik GmbH [15] (Innsbruck, AT), in which the ion funnel is not predominantly part of the reaction region but serves mainly to focus the ions into the transfer region to the TOF mass spectrometer. [16] In combination with the above-mentioned method of controlling the ion chemistry, [12] this enables a considerable increase in sensitivity, and thus an improvement of the detection limit, while keeping the ion chemistry well defined and thus avoiding problems with quantification and interpretation of the results.
Quadrupole , hexapole and other multipole ion guides can be used to transfer ions between different parts of an instrument with high efficiency. In PTR-MS they are particularly suitable for being installed in the differentially pumped interface between the reaction region and the mass spectrometer. In 2014 Sulzer et al. [ 17 ] published an article about a PTR-MS instrument which utilizes a quadrupole ion guide between the drift tube and the TOF mass spectrometer. They reported an increase in sensitivity by a factor of 25 compared to a similar instrument without an ion guide.
Quadrupole ion guides are known to have high focusing power, but also rather narrow m/z transmission bands. [ 18 ] Hexapole ion guides on the other hand have focusing capabilities over a broader m/z band. Additionally, less energy is put into the transmitted ions, i.e. fragmentation and other adverse effects are less likely to occur. Consequently, some latest high-end PTR-MS instruments are equipped with hexapole ion guides for considerably improved performance [ 16 ] or even with a sequential arrangement of an ion funnel followed by a hexapole ion guide for even higher sensitivity and lower detection limit. [ 19 ]
As a real-time trace gas analysis method based on mass spectrometry, PTR-MS has two obvious limitations: Isomers cannot be easily separated (for some it is possible by switching the reagent ions [ 9 ] or by changing the reduced electric field strength in the drift tube) and the sample has to be in the gas phase . Countermeasures against these limitations have been developed in the form of add-ons, which can either be installed into the PTR-MS instrument or operated as external devices.
Gas chromatography (GC) in combination with mass spectrometry ( GC-MS ) is capable of separating isomeric compounds. Although GC has been successfully coupled to PTR-MS in the past, [ 20 ] this approach annihilates the real-time capability of the PTR-MS technology, because a single GC analysis run typically takes between 30 min and 1 h. Thus, state-of-the-art GC add-ons for PTR-MS are based on fastGC technology. Materic et al. [ 21 ] utilized an early version of a commercially available fastGC addon in order to distinguish various monoterpene isomers. Within a fastGC run of about 70 s they were able to separate and identify: alpha -pinene , beta -pinene , camphene , myrcene , 3-carene and limonene in a standard mixture, Norway spruce , Scots pine and black pine samples, respectively. Particularly, if the operation mode of a PTR-MS instrument equipped with fastGC is continuously switched between fastGC and direct injection (dependent on the application, e.g. a loop sequence of one fastGC run followed by 10 min of direct injection measurement), real-time capability is preserved, while at the same time valuable information on substance identification and isomer separation is acquired.
Researchers at the Leopold-Franzens University in Innsbruck invented a dedicated PTR-MS inlet system for the analysis of aerosols and particulate matter , [ 22 ] which they called "CHemical Analysis of aeRosol ON-line (CHARON)". After further development work in collaboration with a PTR-MS manufacturer, CHARON has become readily available as an add-on for PTR-MS instruments in 2017. [ 23 ] The add-on consists of a honeycomb activated charcoal denuder which adsorbs organic gases but transmits particles, an aerodynamic lens system that collimates sub-μm particles, and a thermo-desorber that evaporates non-refractory organic particulate matter at moderate temperatures of 100-160 °C and reduced pressures of a few mbar.
So far, CHARON has predominantly been used within studies in the field of atmospheric chemistry , e.g. for airborne measurements of particulate organic matter [ 24 ] and bulk organic aerosol analysis. [ 25 ]
A now well-established setup for the controlled evaporation and subsequent analysis of liquids with PTR-MS was published in 2013 by Fischer et al. [26] As the authors saw the main application of their setup in the calibration of PTR-MS instruments via aqueous standards, they named it the "Liquid Calibration Unit" (LCU). The LCU sprays a liquid standard into a gas stream at well-defined flow rates via a purpose-built nebulizer (optimized for a reduced probability of clogging and high tolerance to salts in the liquid). The resulting micro-droplets are injected into a heated (>100 °C) evaporation chamber. This concept offers two main advantages: (i) the evaporation of compounds is enhanced by the enlarged surface area of the droplets, and (ii) compounds which are dissociated in water, such as acids (or bases), experience a shift in pH value as the water evaporates from a droplet; this in turn reduces dissociation and supports total evaporation of the compound. [26] The resulting continuous gas flow containing the analytes can be introduced directly into a PTR-MS instrument for analysis.
The most common applications of the PTR-MS technique are environmental research, [27] [28] [29] waste incineration, food science, [30] biological research, [31] process monitoring, indoor air quality, [32] [33] [34] medicine and biotechnology, [35] [36] [37] [38] and homeland security. [39] [40] All of these are forms of trace gas analysis; related techniques include secondary electrospray ionization (SESI), electrospray ionization (ESI), and selected-ion flow-tube mass spectrometry (SIFT-MS).
Fig. 2 shows a typical PTR-MS measurement performed in food and flavor research. The test person swallows a sip of a vanillin-flavored drink and breathes through the nose into a heated inlet device coupled to a PTR-MS instrument. Due to the high time resolution and sensitivity of the instrument used here, the development of vanillin in the person's breath can be monitored in real time (isoprene is shown in this figure because it is a product of human metabolism and therefore acts as an indicator for the breath cycles). The data can be used for food design, i.e. for adjusting the intensity and duration of the vanillin flavor tasted by the consumer.
Another example for the application of PTR-MS in food science was published in 2008 by C. Lindinger et al. [ 42 ] in Analytical Chemistry . This publication found great response even in non-scientific media. [ 43 ] [ 44 ] Lindinger et al. developed a method to convert "dry" data from a PTR-MS instrument that measured headspace air from different coffee samples into expressions of flavor (e.g. "woody", "winey", "flowery", etc.) and showed that the obtained flavor profiles matched nicely to the ones created by a panel of European coffee tasting experts.
In Fig. 3 a mass spectrum of air inside a laboratory, obtained with a time-of-flight (TOF) based PTR-MS instrument, is shown. The peaks at m/z 19, 37 and 55 (and their isotopes) represent the reagent ions (H3O+) and their clusters. At m/z 30 and 32, NO+ and O2+ appear, both impurities originating from the ion source. All other peaks correspond to compounds present in typical laboratory air (e.g. the high intensity of protonated acetone at m/z 59). Taking into account that virtually all peaks visible in Fig. 3 are in fact double, triple or multiple peaks (isobaric compounds), it becomes obvious that for PTR-MS instruments selectivity is at least as important as sensitivity, especially when complex samples or compositions are analyzed. One method for improving the selectivity is high mass resolution: when the PTR source is coupled to a high-resolution mass spectrometer, isobaric compounds can be distinguished and substances can be identified via their exact mass. [45] Some PTR-MS instruments are, despite the lack of a mass filter between the ion source and the drift tube, capable of switching the reagent ions (e.g. to NO+ or O2+). With the additional information obtained by using different reagent ions a much higher level of selectivity can be reached, e.g. some isomeric molecules can be distinguished. [9] | https://en.wikipedia.org/wiki/Proton-transfer-reaction_mass_spectrometry |
The proton affinity (PA, E pa ) of an anion or of a neutral atom or molecule is the negative of the enthalpy change in the reaction between the chemical species concerned and a proton in the gas phase: [ 1 ]
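A− + H+ → AH
B + H+ → BH+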
These reactions are always exothermic in the gas phase, i.e. energy is released ( enthalpy is negative) when the reaction advances in the direction shown above, while the proton affinity is positive. This is the same sign convention used for electron affinity . The property related to the proton affinity is the gas-phase basicity, which is the negative of the Gibbs energy for above reactions, [ 2 ] i.e. the gas-phase basicity includes entropic terms in contrast to the proton affinity.
The higher the proton affinity, the stronger the base and the weaker the conjugate acid in the gas phase . The (reportedly) strongest known base is the ortho-diethynylbenzene dianion ( E pa = 1843 kJ/mol), [ 3 ] followed by the methanide anion ( E pa = 1743 kJ/mol) and the hydride ion ( E pa = 1675 kJ/mol), [ 4 ] making methane the weakest proton acid [ 5 ] in the gas phase, followed by dihydrogen . The weakest known base is the helium atom ( E pa = 177.8 kJ/mol), [ 6 ] making the hydrohelium(1+) ion the strongest known proton acid.
Proton affinities illustrate the role of hydration in aqueous-phase Brønsted acidity . Hydrofluoric acid is a weak acid in aqueous solution (p K a = 3.15) [ 7 ] but a very weak acid in the gas phase ( E pa (F − ) = 1554 kJ/mol): [ 4 ] the fluoride ion is as strong a base as SiH 3 − in the gas phase, but its basicity is reduced in aqueous solution because it is strongly hydrated, and therefore stabilized. The contrast is even more marked for the hydroxide ion ( E pa = 1635 kJ/mol), [ 4 ] one of the strongest known proton acceptors in the gas phase. Suspensions of potassium hydroxide in dimethyl sulfoxide (which does not solvate the hydroxide ion as strongly as water) are markedly more basic than aqueous solutions, and are capable of deprotonating such weak acids as triphenylmethane (p K a = ca. 30). [ 8 ] [ 9 ]
To a first approximation, the proton affinity of a base in the gas phase can be seen as offsetting (usually only partially) the extremely favorable hydration energy of the gaseous proton (Δ E = −1530 kJ/mol), as can be seen in the following estimates of aqueous acidity:
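A minimal reconstruction of such an estimate (neglecting the hydration of the anion and of the undissociated acid, as well as entropic terms) simply balances the proton affinity against the proton's hydration energy:

$$\mathrm{p}K_a \approx \frac{E_{\mathrm{pa}}(\mathrm{A}^{-}) + \Delta E_{\mathrm{hyd}}(\mathrm{H}^{+})}{RT\ln 10}$$

With RT ln 10 ≈ 5.7 kJ/mol at 298 K, this gives pKa ≈ (1554 − 1530)/5.7 ≈ 4 for hydrofluoric acid and pKa ≈ (1675 − 1530)/5.7 ≈ 25 for dihydrogen.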
These estimates suffer from the fact that the free energy change of dissociation is in effect the small difference of two large numbers. However, hydrofluoric acid is correctly predicted to be a weak acid in aqueous solution, and the estimated value for the pKa of dihydrogen is in agreement with the behaviour of saline hydrides (e.g., sodium hydride) when used in organic synthesis.
Both proton affinity and pKa are measures of the acidity of a molecule, and so both reflect the thermodynamic gradient between a molecule and its anionic form upon removal of a proton. Implicit in the definition of pKa, however, is that the acceptor of this proton is water and that an equilibrium is established between the molecule and the bulk solution. More broadly, pKa can be defined with reference to any solvent, and many weak organic acids have measured pKa values in DMSO. The large discrepancies between pKa values in water and in DMSO (e.g., the pKa of water in water is 14, [12] [13] but that of water in DMSO is 32) demonstrate that the solvent is an active partner in the proton equilibrium process, and so pKa does not represent an intrinsic property of the molecule in isolation. In contrast, proton affinity is an intrinsic property of the molecule, without explicit reference to the solvent.
A second difference arises in noting that pKa reflects a thermal free energy for the proton transfer process, in which enthalpic and entropic terms are considered together. Therefore, pKa is influenced both by the stability of the molecular anion and by the entropy associated with forming and mixing new species. Proton affinity, on the other hand, is an enthalpy-based quantity, not a measure of free energy.
Tabulated proton affinities are quoted in kJ/mol and are conventionally listed in increasing order of the gas-phase basicity of the base. | https://en.wikipedia.org/wiki/Proton_affinity |
Proton capture is a nuclear reaction in which an atomic nucleus and one or more protons collide and merge to form a heavier nucleus.
Since protons have positive electric charge, they are repelled electrostatically by the positively charged nucleus. It is therefore more difficult for protons to enter the nucleus than for electrically neutral neutrons.
Proton capture plays an important role in the cosmic nucleosynthesis of proton-rich isotopes. [ 1 ] In stars it can proceed in two ways: as a rapid ( rp-process ) or a slow process ( p-process ). It is also key to CNO fusion reactions in stars.
In main-sequence stars, this process converts lithium into helium.
| https://en.wikipedia.org/wiki/Proton_capture |
A proton conductor is an electrolyte, typically a solid electrolyte, in which protons (H+) are the primary charge carriers. [1]
Acid solutions exhibit proton conductivity, while pure proton conductors are usually dry solids; typical materials are polymers or ceramics. In practical materials the pores are typically small enough that protons dominate the direct current and transport of other cations or of bulk solvent is prevented. Water ice is a common example of a pure proton conductor, albeit a relatively poor one. [2] A special form of water ice, superionic water, has been shown to conduct much more efficiently than normal water ice. [3]
Solid-phase proton conduction was first suggested by Alfred Rene Jean Paul Ubbelohde and S. E. Rogers in 1950, [4] although electrolyte proton currents have been recognized since 1806.
Proton conduction has also been observed in the new type of proton conductors for fuel cells – protic organic ionic plastic crystals (POIPCs), such as 1,2,4-triazolium perfluorobutanesulfonate [ 5 ] and imidazolium methanesulfonate. [ 6 ] In particular, a high ionic conductivity of 10 mS/cm is reached at 185 °C in the plastic phase of imidazolium methanesulfonate.
When in the form of thin membranes, proton conductors are an essential part of small, inexpensive fuel cells. The polymer Nafion is a typical proton conductor in fuel cells. A jelly-like substance similar to Nafion residing in the ampullae of Lorenzini of sharks has a proton conductivity only slightly lower than that of Nafion. [7] [8]
High proton conductivity has been reported among alkaline-earth cerates and zirconate based perovskite materials such as acceptor doped SrCeO 3 , BaCeO 3 and BaZrO 3 . [ 9 ] Relatively high proton conductivity has also been found in rare-earth ortho-niobates and ortho-tantalates as well as rare-earth tungstates. [ 10 ]
| https://en.wikipedia.org/wiki/Proton_conductor |
Proton-coupled amino acid transporters belong to the SLC36 family; they are membrane transport proteins whose main function is the transmembrane movement of amino acids and their derivatives. This family of transporters is most commonly found at the luminal surface of the small intestine as well as in some lysosomes. The solute carrier (SLC) group of genes includes roughly 400 membrane proteins, characterized in 66 families in total. The SLC36 family of genes maps to chromosome 11. The diversity of these transporters is vast, with the ability to transport both charged and uncharged amino acids along with their derivatives. In research and practice, SLC36A1/2 are both targets for drug-based delivery systems for a wide range of disorders.
The human proton-coupled amino acid transporter 1 (hPAT1) transcript is 5585 base pairs long and codes for a protein 476 amino acids long. The transporter has nine transmembrane regions, with the amino terminus facing the cytoplasm. The rat transporter (rPAT1) has been widely studied, and an 85% amino acid sequence identity was found between hPAT1 and rPAT1. The hPAT1 gene is located on chromosome 5q31-33 and has 11 exons that are coding regions. Its translation site begins in exon 2, and exon 11 contains the termination site. [1]
The molecular weight of proton-coupled amino acid transporter 1 is 53.28 kDa; the molecular weight of proton-coupled amino acid transporter 2 is 53.22 kDa. PAT1 has been found in lysosomes in brain neurons but also in the apical membrane of intestinal epithelial cells, where it is associated with the brush border. PAT1 has a higher affinity for proline than for glycine and alanine. PAT2 is found subcellularly in the kidneys, lungs, spinal cord, and brain and likely has a role in myelinating neurons. [2] It has an overall higher affinity for glycine, alanine, and proline than PAT1 but is more specific with respect to what can inhibit it. [3] [4]
Unlike most mammalian amino acid transporters, which function as Na+/amino acid symporters, proton-coupled amino acid transporters function as H+/amino acid symporters. They are located at the luminal surface of the small intestine and within lysosomes, so they serve absorption in the intestine and the efflux pathway after intralysosomal digestion. The activity of transporters such as proton-coupled amino acid transporter 1 and 2 can be measured at the apical membrane of human epithelial cell layers loaded with pH-sensitive dyes; the change in membrane potential can be followed via the dyes' response to the associated influx of H+ ions. The proteins involved in these transporters are considered anion exchangers.
The function of proton-coupled amino acid transporters is the transmembrane movement of amino acids and their derivatives, for absorption at the luminal surface of the small intestine or for digestion by intralysosomal proteins. In Drosophila models, the expression of the SLC family genes that code for proton-coupled amino acid transporters is directly linked to nutrient-dependent growth. In humans, similar expression patterns are observed, and the transporters' function correlates with their anatomical location: being located within the lumen of the small intestine, where the majority of nutrient absorption takes place, allows for functional absorption of transported amino acids and derivatives.
In the hereditary disease iminoglycinuria, there is a defect in the human proton-coupled amino acid transporter 1 and 2 genes, which impairs the absorption of proline and glycine. Iminoglycinuria is an autosomal recessive disorder of the renal tubule. The lack of glycine and proline absorption leads to excess urinary excretion of these amino acids. If the transporters are not working properly, a drug that normally relies on them for entry into the cell may not be absorbed. [3] Their function can also be inhibited by tryptophan derivatives, which allows exploration of the function of hPAT1 and hPAT2. Additionally, mutations that lead to structural changes in the amino acid binding sites play a role in their functional transport. [5]
The DNA sequence of these transporters is transcribed in the nucleus of the cell by RNA polymerase, and the transcript undergoes splicing and capping before it travels to the cytoplasm. In the cytoplasm, translation begins at a site in exon 2 of the mRNA. Subsequently, protein folding and packaging insert the transporter into the membrane; the nascent protein carries a signal sequence that is recognized by the signal recognition particle as it leaves the ribosome. N-glycosylation at various sites on hPAT1 is necessary for its transport function: three of its extracellular residues are glycosylated and determine transport efficacy. [6]
PAT1 mRNA is expressed in the GI tract between the stomach and the descending colon but is generally absent from the esophagus, caecum, and rectum. This allows for different treatments that affect the affinity of the carrier protein for its substrates, giving the potential to treat various amino acid–related diseases. hPAT1 and hPAT2 are important in the absorption of certain drugs, especially pharmaceutically active amino acid derivatives. [3] They have also been targeted with medications used as anticonvulsants and for prostate and bladder cancer. [7] hPAT1 and 2 are integral to the central nervous system because they transport GABA and its analogues, which can induce both inhibitory and excitatory effects in the brain. [1] | https://en.wikipedia.org/wiki/Proton_coupled_amino_acid_transporter |
In particle physics , proton decay is a hypothetical form of particle decay in which the proton decays into lighter subatomic particles , such as a neutral pion and a positron . [ 1 ] The proton decay hypothesis was first formulated by Andrei Sakharov in 1967. Despite significant experimental effort, proton decay has never been observed. If it does decay via a positron, the proton's half-life is constrained to be at least 1.67 × 10 34 years . [ 2 ]
According to the Standard Model , the proton, a type of baryon , is stable because baryon number ( quark number ) is conserved (under normal circumstances; see Chiral anomaly for an exception). Therefore, protons will not decay into other particles on their own, because they are the lightest (and therefore least energetic) baryon. Positron emission and electron capture —forms of radioactive decay in which a proton becomes a neutron—are not proton decay, since the proton interacts with other particles within the atom.
Some beyond-the-Standard-Model grand unified theories (GUTs) explicitly break the baryon number symmetry, allowing protons to decay via the Higgs particle , magnetic monopoles , or new X bosons with a half-life of 10 31 to 10 36 years. For comparison, the universe is roughly 1.38 × 10 10 years old . [ 3 ] To date, all attempts to observe new phenomena predicted by GUTs (like proton decay or the existence of magnetic monopoles ) have failed.
Quantum tunnelling may be one of the mechanisms of proton decay. [ 4 ] [ 5 ] [ 6 ]
Quantum gravity [7] (via virtual black holes and Hawking radiation) may also provide a venue for proton decay at magnitudes or lifetimes well beyond the GUT-scale decay range above, as may extra dimensions in supersymmetry. [8] [9] [10] [11]
There are theoretical methods of baryon violation other than proton decay, including interactions that change baryon and/or lepton number by amounts other than 1 (as required in proton decay). These include B and/or L violations of 2, 3, or other numbers, or B − L violation. Such examples include neutron oscillations and the electroweak sphaleron anomaly at high energies and temperatures, which can convert protons into antileptons or vice versa (a key factor in leptogenesis and non-GUT baryogenesis).
One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universe. The universe, as a whole, seems to have a nonzero positive baryon number density – that is, there is more matter than antimatter. Since it is assumed in cosmology that the particles we see were created using the same physics we measure today, it would normally be expected that the overall baryon number should be zero, as matter and antimatter should have been created in equal amounts. This has led to a number of proposed mechanisms for symmetry breaking that favour the creation of normal matter (as opposed to antimatter) under certain conditions. This imbalance would have been exceptionally small, on the order of 1 in every 10 10 particles, a small fraction of a second after the Big Bang; after most of the matter and antimatter annihilated, what was left over was all the baryonic matter in the current universe, along with a much greater number of bosons.
Most grand unified theories explicitly break the baryon number symmetry, which would account for this discrepancy, typically invoking reactions mediated by very massive X bosons ( X ) or massive Higgs bosons ( H 0 ). The rate at which these events occur is governed largely by the mass of the intermediate X or H 0 particles, so by assuming these reactions are responsible for the majority of the baryon number seen today, a maximum mass can be calculated above which the rate would be too slow to explain the presence of matter today. These estimates predict that a large volume of material will occasionally exhibit a spontaneous proton decay.
Proton decay is one of the key predictions of the various grand unified theories (GUTs) proposed in the 1970s, another major one being the existence of magnetic monopoles . Both concepts have been the focus of major experimental physics efforts since the early 1980s. To date, all attempts to observe these events have failed; however, these experiments have been able to establish lower bounds on the half-life of the proton. Currently, the most precise results come from the Super-Kamiokande water Cherenkov radiation detector in Japan: [ 13 ] a lower bound on the proton's half-life of 2.4 × 10 34 years via positron decay, and similarly, 1.6 × 10 34 years via antimuon decay, close to a supersymmetry (SUSY) prediction of 10 34 –10 36 years. [ 14 ] An upgraded version, Hyper-Kamiokande , probably will have sensitivity 5–10 times better than Super-Kamiokande.
Despite the lack of observational evidence for proton decay, some grand unification theories, such as the SU(5) Georgi–Glashow model and SO(10), along with their supersymmetric variants, require it. According to such theories, the proton has a half-life of about 10 31 to 10 36 years and decays into a positron and a neutral pion that itself immediately decays into two gamma ray photons:
$$p^{+} \to e^{+} + \pi^{0}, \qquad \pi^{0} \to 2\gamma$$
Since a positron is an antilepton, this decay preserves B − L number, which is conserved in most GUTs.
Additional decay modes are available (e.g.: p + → μ + + π 0 ), both directly and when catalyzed via interaction with GUT -predicted magnetic monopoles . [ 15 ] Though this process has not been observed experimentally, it is within the realm of experimental testability for future planned very large-scale detectors on the megaton scale. Such detectors include the Hyper-Kamiokande .
Early grand unification theories (GUTs) such as the Georgi–Glashow model, which were the first consistent theories to suggest proton decay, postulated that the proton's half-life would be at least 10 31 years. As further experiments and calculations were performed in the 1990s, it became clear that the proton half-life could not lie below 10 32 years. Many books from that period refer to this figure for the possible decay time for baryonic matter. More recent findings have pushed the minimum proton half-life to at least 10 34 –10 35 years, ruling out the simpler GUTs (including minimal SU(5)/Georgi–Glashow) and most non-SUSY models. The maximum upper limit on the proton lifetime (if unstable) is calculated at 6 × 10 39 years, a bound applicable to SUSY models, [16] with a maximum for (minimal) non-SUSY GUTs at 1.4 × 10 36 years. [16] (part 5.6)
Although the phenomenon is referred to as "proton decay", the effect would also be seen in neutrons bound inside atomic nuclei. Free neutrons—those not inside an atomic nucleus—are already known to decay into protons (and an electron and an antineutrino) in a process called beta decay . Free neutrons have a half-life of 10 minutes ( 610.2 ± 0.8 s ) [ 17 ] due to the weak interaction . Neutrons bound inside a nucleus have an immensely longer half-life – apparently as great as that of the proton.
The lifetime of the proton in vanilla SU(5) can be naively estimated as $\tau_p \sim M_X^4/m_p^5$. [19] Supersymmetric GUTs with unification scales around µ ~ 2 × 10 16 GeV/c 2 yield a lifetime of around 10 34 yr, roughly the current experimental lower bound.
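Restoring the implicit factors of ħ (1 GeV⁻¹ ≈ 6.6 × 10⁻²⁵ s) turns this into a rough numerical check of the quoted figure:

$$\tau_p \sim \frac{(2\times10^{16}\,\mathrm{GeV})^{4}}{(0.94\,\mathrm{GeV})^{5}} \approx 2\times10^{65}\ \mathrm{GeV}^{-1} \approx 1.5\times10^{41}\ \mathrm{s} \approx 5\times10^{33}\ \mathrm{yr},$$

of the same order as the ~10 34 yr lifetime quoted above.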
The dimension-6 proton decay operators are $qqq\ell/\Lambda^{2}$, $d^{c}u^{c}u^{c}e^{c}/\Lambda^{2}$, $\overline{e^{c}}\,\overline{u^{c}}\,qq/\Lambda^{2}$, and $\overline{d^{c}}\,\overline{u^{c}}\,q\ell/\Lambda^{2}$, where $\Lambda$ is the cutoff scale for the Standard Model. All of these operators violate both baryon number (B) and lepton number (L) conservation but not the combination B − L.
In GUT models, the exchange of an X or Y boson with mass $\Lambda_{\text{GUT}}$ can lead to the last two operators suppressed by $1/\Lambda_{\text{GUT}}^{2}$. The exchange of a triplet Higgs with mass $M$ can lead to all of the operators suppressed by $1/M^{2}$. See Doublet–triplet splitting problem.
In supersymmetric extensions (such as the MSSM), we can also have dimension-5 operators involving two fermions and two sfermions caused by the exchange of a tripletino of mass $M$. The sfermions will then exchange a gaugino, Higgsino, or gravitino, leaving two fermions. The overall Feynman diagram has a loop (and other complications due to strong interaction physics). This decay rate is suppressed by $1/(M M_{\text{SUSY}})$, where $M_{\text{SUSY}}$ is the mass scale of the superpartners.
In the absence of matter parity, supersymmetric extensions of the Standard Model can give rise to the last operator suppressed by the inverse square of the sdown squark mass, via the dimension-4 operators $q\ell {\tilde {d}}^{c}$ and $u^{c}d^{c}{\tilde {d}}^{c}$. The proton decay rate is then only suppressed by $1/M_{\text{SUSY}}^{2}$, which is far too fast unless the couplings are very small. | https://en.wikipedia.org/wiki/Proton_decay |
Proton emission (also known as proton radioactivity) is a rare type of radioactive decay in which a proton is ejected from a nucleus. Proton emission can occur from high-lying excited states in a nucleus following a beta decay, in which case the process is known as beta-delayed proton emission, or from the ground state (or a low-lying isomer) of very proton-rich nuclei, in which case the process is very similar to alpha decay. [ citation needed ] For a proton to escape a nucleus, the proton separation energy must be negative (S_p < 0): the proton is then unbound and tunnels out of the nucleus in a finite time. The rate of proton emission is governed by the nuclear, Coulomb, and centrifugal potentials of the nucleus, with the centrifugal potential having a large effect on the rate. The half-life of a nucleus with respect to proton emission is affected by the proton energy and its orbital angular momentum. [ 1 ] Proton emission is not seen in naturally occurring isotopes; proton emitters can be produced via nuclear reactions, usually using linear particle accelerators.
Although prompt (i.e. not beta-delayed) proton emission was observed from an isomer in cobalt-53 as early as 1969, no other proton-emitting states were found until 1981, when the proton radioactive ground states of lutetium-151 and thulium-147 were observed in experiments at the GSI in West Germany. [ 2 ] Research in the field flourished after this breakthrough, and to date more than 25 isotopes have been found to exhibit proton emission. The study of proton emission has aided the understanding of nuclear deformation, masses, and structure, and it is a pure example of quantum tunneling.
In 2002, the simultaneous emission of two protons was observed from the nucleus iron-45 in experiments at GSI and GANIL ( Grand Accélérateur National d'Ions Lourds at Caen ). [ 3 ] In 2005 it was experimentally determined (at the same facility) that zinc-54 can also undergo double proton decay. [ 4 ] | https://en.wikipedia.org/wiki/Proton_emission |
The proton radius puzzle is an unanswered problem in physics relating to the size of the proton. [ 1 ] Historically the proton charge radius was measured by two independent methods, which converged to a value of about 0.877 femtometres (1 fm = 10⁻¹⁵ m). This value was challenged by a 2010 experiment using a third method, which produced a radius about 4% smaller, at 0.842 femtometres. [ 2 ] New experimental results reported in the autumn of 2019 agree with the smaller measurement, as does a re-analysis of older data published in 2022. While some believe that this difference has been resolved, [ 3 ] [ 4 ] this opinion is not yet universally held. [ 5 ] [ 6 ]
The radius of the proton is defined by a formula that can be calculated using quantum electrodynamics and determined from either atomic spectroscopy or electron–proton scattering. The formula involves a form factor related to the two-dimensional parton diameter of the proton. [ 7 ]
Prior to 2010, the proton charge radius was measured using one of two methods: one relying on spectroscopy, and one relying on nuclear scattering. [ 8 ]
The spectroscopy method compares the energy levels of spherically symmetric 2s orbitals to asymmetric 2p orbitals of hydrogen, a difference known as the Lamb shift . The exact values of the energy levels are sensitive to the distribution of charge in the nucleus since the 2s levels overlap more with the nucleus. [ 9 ] Measurements of hydrogen's energy levels are now so precise that the accuracy of the proton radius is the limiting factor when comparing experimental results to theoretical calculations. This method produces a proton radius of about 0.8768(69) fm , with approximately 1% relative uncertainty. [ 2 ]
Similar to Rutherford's scattering experiments that established the existence of the nucleus, modern electron–proton scattering experiments send beams of high-energy electrons into a 20 cm-long tube of liquid hydrogen. [ 10 ] The resulting angular distributions of the electron and proton are analyzed to produce a value for the proton charge radius.
Consistent with the spectroscopy method, this produces a proton radius of about 0.8775(5) fm . [ 11 ]
In 2010, Pohl et al. published the results of an experiment relying on muonic hydrogen as opposed to normal hydrogen. Conceptually, this is similar to the spectroscopy method. However, the much higher mass of a muon causes it to orbit 207 times closer than an electron to the hydrogen nucleus, where it is consequently much more sensitive to the size of the proton. The resulting radius was recorded as 0.842(1) fm , 5 standard deviations (5 σ ) smaller than the prior measurements. [ 2 ] The newly measured radius is 4% smaller than the prior measurements, which were believed to be accurate within 1%. (The new measurement's uncertainty limit of only 0.1% makes a negligible contribution to the discrepancy.) [ 12 ]
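The scaling argument behind this sensitivity is easy to sketch numerically (the particle masses below are standard values, not data from the cited experiments): the Bohr radius scales inversely with the orbiting particle's reduced mass, so the muon sits far closer to the proton.

```python
# Bohr radius ~ 1/(reduced mass), so a bound muon orbits the proton
# much closer than an electron does. Masses in MeV/c^2.
M_E, M_MU, M_P = 0.511, 105.658, 938.272

def reduced_mass(m_lepton, m_nucleus=M_P):
    return m_lepton * m_nucleus / (m_lepton + m_nucleus)

ratio = reduced_mass(M_MU) / reduced_mass(M_E)
print(f"muonic hydrogen orbit is ~{ratio:.0f} times smaller")   # ~186
# The often-quoted factor of 207 is the bare mass ratio m_mu/m_e;
# the reduced-mass (proton recoil) correction brings it down to ~186.
```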
A follow-up experiment by Pohl et al. in August 2016 used a deuterium atom to create muonic deuterium and measured the deuteron radius. This experiment allowed the measurements to be 2.7 times more accurate, but it also found a value 7.5 standard deviations smaller than the expected one. [ 13 ] [ 14 ]
The anomaly remains unresolved and is an active area of research. There is as yet no conclusive reason to doubt the validity of the old data. [ 8 ] The immediate concern is for other groups to reproduce the anomaly. [ 8 ]
The uncertain nature of the experimental evidence has not stopped theorists from attempting to explain the conflicting results. Among the postulated explanations are the three-body force , [ 15 ] interactions between gravity and the weak force , or a flavour -dependent interaction, [ 16 ] [ 17 ] higher dimension gravity, [ 18 ] a new boson , [ 19 ] and the quasi-free π + hypothesis. [ 20 ]
Randolf Pohl, the original investigator of the puzzle, stated that while it would be "fantastic" if the puzzle led to a discovery, the most likely explanation is not new physics but some measurement artefact. His personal assumption is that past measurements have misgauged the Rydberg constant and that the current official proton size is inaccurate. [ 21 ]
In a paper by Belushkin et al. (2007), [ 22 ] including different constraints and perturbative quantum chromodynamics , a smaller proton radius than the then-accepted 0.877 femtometres was predicted. [ 22 ]
Papers from 2016 suggested that the problem lay with the extrapolations that had typically been used to extract the proton radius from the electron scattering data, [ 23 ] [ 24 ] [ 25 ] though these explanations would require that there also be a problem with the atomic Lamb shift measurements.
In one of the attempts to resolve the puzzle without new physics, Alarcón et al. (2018) [ 26 ] of Jefferson Lab have proposed that a different technique to fit the experimental scattering data, in a theoretically as well as analytically justified manner, produces a proton charge radius from the existing electron scattering data that is consistent with the muonic hydrogen measurement. [ 26 ] Effectively, this approach attributes the cause of the proton radius puzzle to a failure to use a theoretically motivated function for the extraction of the proton charge radius from the experimental data. Another recent paper has pointed out how a simple, yet theory-motivated change to previous fits will also give the smaller radius. [ 27 ]
In 2017 a new approach used cryogenic hydrogen and Doppler-free laser excitation to prepare the source for spectroscopic measurements; this gave results about 5% smaller than the previously accepted spectroscopic values, with much smaller statistical errors. [ 8 ] [ 28 ] [ 9 ] This result was close to the 2010 muon spectroscopy result. The authors suggest that the older spectroscopic analyses did not include quantum interference effects that alter the shape of the hydrogen lines.
In 2019, another experiment measured the spectroscopic Lamb shift using a variation of Ramsey interferometry whose analysis does not require the Rydberg constant. Its result, 0.833 fm, agreed with the smaller 2010 value once more. [ 29 ] [ 9 ]
Also in 2019 W. Xiong et al. reported a similar result using extremely low momentum transfer electron scattering. [ 30 ]
Their results support the smaller proton charge radius, but do not explain why the results before 2010 came out larger. It is likely future experiments will be able to both explain and settle the proton radius puzzle. [ 31 ]
A re-analysis of experimental data, published in February 2022, found a result consistent with the smaller value of approximately 0.84 fm. [ 32 ] [ 33 ] | https://en.wikipedia.org/wiki/Proton_radius_puzzle |
The proton spin crisis (or proton spin puzzle ) is a theoretical crisis precipitated by a 1987 experiment by the European Muon Collaboration (EMC), [ 1 ] which tried to determine the distribution of spin within the proton . [ 2 ]
Physicists expected that quarks carry all of a proton's spin. However, not only was the total proton spin carried by quarks far smaller than 100%, the results were consistent with almost zero (4–24% [ 3 ]) of the proton's spin being carried by quarks. This surprising and puzzling result was termed the "proton spin crisis". [ 4 ] The problem is considered one of the important unsolved problems in physics. [ 5 ]
A key question is how the nucleons' spins are distributed amongst their constituent parts ( "partons" : quarks and gluons ). The components of the proton's spin are the expectation values of the individual sources of angular momentum. These values depend on the renormalization scale, because their operators are not separately conserved. [ 6 ] Physicists originally expected that valence quarks would carry all of the nucleon spin.
A proton is built from three valence quarks (two up quarks and one down quark ), virtual gluons, and virtual (or sea ) quarks and antiquarks (virtual particles do not influence the proton's quantum numbers). The ruling hypothesis was that since the proton is stable , it exists in the lowest possible energy level. Therefore, it was expected that the quark's wave function would be the spherically symmetric s-wave with no spatial contribution to angular momentum. The proton is, like each of its quarks, a spin- 1 / 2 particle (a fermion ). Therefore, it was hypothesized that two of the quarks would have their spins parallel and the third quark would have its spin antiparallel to that of the proton.
In this EMC experiment, a quark of a polarized proton target was hit by a polarized muon beam, and the quark's instantaneous spin was measured. In a polarized proton target, all the protons' spins take the same direction, and therefore it was expected that the spin of two out of the three quarks would cancel out and the spin of the third quark would be polarized in the direction of the proton's spin. Thus, the sum of the quarks' spin was expected to be equal to the proton's spin.
Instead, the experiment found that the number of quarks with spin in the proton's spin direction was almost the same as the number of quarks whose spin was in the opposite direction. This is the proton spin crisis. Similar results have been obtained in later experiments. [ 7 ]
A paper published in 2008 showed that more than half of the spin of the proton comes from the spin of its quarks, and that the missing spin is produced by the quarks' orbital angular momentum . [ 8 ] This work used relativistic effects together with other quantum chromodynamic properties and explained how they boil down to an overall spatial angular momentum that is consistent with the experimental data. A 2013 paper showed how to calculate the gluon helicity contribution using lattice QCD. [ 9 ]
According to physicist Xiangdong Ji in 2017, Lattice QCD shows "the theoretical expectation on the fraction of the nucleon spin carried in quark spin is about 30%. Thus there is no substantial discrepancy between the fundamental theory and data." [ 10 ]
Monte Carlo calculations have shown that 50% of the proton spin comes from gluon polarization. [ 11 ] Results from the RHIC , published in 2016, indicate that gluons may carry even more of protons' spin than quarks do. [ 12 ] However, in 2018 lattice QCD calculations indicated that it is the quark orbital angular momentum that is the dominant contribution to the nucleon spin. [ 13 ]
In a 2022 AAPPS Bulletin article, Keh-Fei Liu calculated that quark spin contributes about 40% of the angular momentum, quark orbital angular momentum contributes about 15%, and gluon orbital angular momentum contributes about 40%. Given various error bars on both theoretical calculations and on experiments, this too is consistent with the observed experimental quark spin contribution of around 30%. [ 14 ] | https://en.wikipedia.org/wiki/Proton_spin_crisis |
Read the Wiktionary entry "proton transfer"
| https://en.wikipedia.org/wiki/Proton_transfer |
Proton tunneling is a type of quantum tunneling involving the instantaneous disappearance of a proton at one site and the appearance of the same proton at an adjacent site separated by a potential barrier. The two available sites are bounded by a double-well potential whose shape, width, and height are determined by a set of boundary conditions. According to the WKB approximation, the probability for a particle to tunnel decreases exponentially with the width of the potential barrier and with the square root of the particle's mass. Electron tunneling is well known. A proton is about 2000 times more massive than an electron, so it has a much lower probability of tunneling; nevertheless, proton tunneling still occurs, especially at low temperatures and at high pressures where the width of the potential barrier is decreased.
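As a minimal sketch of that mass dependence, consider WKB transmission through a rectangular barrier, T ≈ exp(−2κL) with κ = √(2m(V−E))/ħ; the barrier height and width below are illustrative assumptions, not values from the sources cited in this article:

```python
import math

HBAR = 1.0546e-34   # J*s
EV = 1.6022e-19     # joules per electronvolt
M_E = 9.109e-31     # electron mass, kg
M_P = 1.6726e-27    # proton mass, kg

def wkb_transmission(mass_kg, barrier_ev=0.5, width_m=0.5e-10):
    # T ~ exp(-2*kappa*L) for a rectangular barrier of height V - E
    kappa = math.sqrt(2 * mass_kg * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

print(f"electron: T ~ {wkb_transmission(M_E):.1e}")   # ~7e-01
print(f"proton:   T ~ {wkb_transmission(M_P):.1e}")   # ~2e-07
```

Because κ grows with the square root of the mass, the proton's transmission is suppressed by several orders of magnitude relative to the electron's, yet it remains nonzero, and it rises steeply as the barrier narrows (e.g. under high pressure).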
Proton tunneling is usually associated with hydrogen bonds. In many molecules that contain hydrogen, the hydrogen atoms are linked to two non-hydrogen atoms via a hydrogen bond at one end and a covalent bond at the other. A hydrogen atom without its electron is reduced to a proton; since the electron is no longer bound to the hydrogen atom in a hydrogen bond, this is equivalent to a proton resting in one of the wells of a double-well potential as described above. When proton tunneling occurs, the hydrogen bond and the covalent bond are switched. After tunneling, the proton has the same probability of tunneling back to its original site, provided the double-well potential is symmetric.
The base pairs of a DNA strand are connected by hydrogen bonds; in essence, the genetic code is contained in a unique arrangement of hydrogen bonds. It is believed that upon the replication of a DNA strand there is a probability for proton tunneling to occur which changes the hydrogen bond configuration; this leads to a slight alteration of the hereditary code, which is the basis of mutations. [ 1 ] Proton tunneling is likewise believed to contribute to cell dysfunction (tumors and cancer) and to ageing.
Proton tunneling occurs in many hydrogen based molecular crystals such as ice. It is believed that the phase transition between the hexagonal ( ice Ih ) and orthorhombic ( ice XI ) phases of ice is enabled by proton tunneling. [ 2 ] The occurrence of correlated proton tunneling in clusters of ice has also been reported recently. [ 3 ] [ 4 ] [ 5 ] | https://en.wikipedia.org/wiki/Proton_tunneling |
In chemistry, protonation (or hydronation) is the addition of a proton (or hydron, or hydrogen cation), usually denoted by H⁺, to an atom, molecule, or ion, forming a conjugate acid. [ 1 ] (The complementary process, when a proton is removed from a Brønsted–Lowry acid, is deprotonation.) Some examples include
Protonation is a fundamental chemical reaction and is a step in many stoichiometric and catalytic processes. Some ions and molecules can undergo more than one protonation and are labeled polybasic, which is true of many biological macromolecules. Protonation and deprotonation (removal of a proton) occur in most acid–base reactions; they are the core of most acid–base reaction theories. A Brønsted–Lowry acid is defined as a chemical substance that protonates another substance. Upon protonating a substrate, the mass and the charge of the species each increase by one unit, making protonation an essential step in certain analytical procedures such as electrospray mass spectrometry. Protonating or deprotonating a molecule or ion can change many other chemical properties beyond the charge and mass: for example, solubility, hydrophilicity, reduction or oxidation potential, and optical properties can all change.
Protonations are often rapid, partly because of the high mobility of protons in many solvents. The rate of protonation is related to the acidity of the protonating species: protonation by weak acids is slower than protonation of the same base by strong acids . The rates of protonation and deprotonation can be especially slow when protonation induces significant structural changes. [ 2 ]
Enantioselective protonations, which are under kinetic control, are of considerable interest in organic synthesis. They are also relevant to various biological processes. [ 3 ]
Protonation is usually reversible, and the structure and bonding of the conjugate base are normally unchanged on protonation. In some cases, however, protonation induces isomerization , for example cis - alkenes can be converted to trans -alkenes using a catalytic amount of protonating agent. Many enzymes, such as the serine hydrolases , operate by mechanisms that involve reversible protonation of substrates. [ citation needed ] | https://en.wikipedia.org/wiki/Protonation |
Protonolysis is the cleavage of a chemical bond by acids. Many examples are found in organometallic chemistry, since the reaction requires polar $M^{\delta +}$–$R^{\delta -}$ bonds, where δ+ and δ− signify partial positive and negative charges associated with the bonding atoms. When compounds containing these bonds are treated with acid (HX), these bonds cleave:
Hydrolysis (X − = OH − ) is a special case of protonolysis. Compounds susceptible to hydrolysis often undergo protonolysis.
The borohydride anion is susceptible to reaction with even weak acids, resulting in protonolysis of one or more B–H bonds. Protonolysis of sodium borohydride with acetic acid gives triacetoxyborohydride: [ 1 ]
Related reactions occur for hydrides of other electropositive elements, e.g. lithium aluminium hydride .
The alkyl derivatives of many metals undergo protonolysis. For the alkyls of very electropositive metals (zinc, magnesium, and lithium), water is sufficiently acidic, in which case the reaction is called hydrolysis. Protonolysis with mineral acids is sometimes used to remove organic ligands from a metal center. [ 2 ]
Inorganic materials with highly charged anions are often susceptible to protonolysis. Derivatives of nitride (N 3− ), phosphides (P 3− ), and silicides (Si 4− ) hydrolyze to give ammonia , phosphine , and silane . Analogous reactions occur with molecular compounds with M-NR 2 , M-PR 2 , and M-SiR 3 bonds. [ 3 ] | https://en.wikipedia.org/wiki/Protonolysis |
Protoplasm ( / ˈ p r oʊ t ə ˌ p l æ z əm / ; [ 1 ] [ 2 ] pl. protoplasms ) [ 3 ] is the part of a cell that is surrounded by a plasma membrane . It is a mixture of small molecules such as ions, monosaccharides , amino acids, and macromolecules such as proteins, polysaccharides, lipids, etc.
In some definitions, it is a general term for the cytoplasm (e.g., Mohl, 1846), [ 4 ] but for others, it also includes the nucleoplasm (e.g., Strasburger, 1882). For Sharp (1921), "According to the older usage the extra-nuclear portion of the protoplast [ the entire cell, excluding the cell wall ] was called "protoplasm," but the nucleus also is composed of protoplasm, or living substance in its broader sense. The current consensus is to avoid this ambiguity by employing Strasburger 's (1882) terms cytoplasm [ coined by Kölliker (1863), originally as synonym for protoplasm ] and nucleoplasm [ term coined by van Beneden (1875), or karyoplasm , used by Flemming (1878) ]." [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] The cytoplasm definition of Strasburger excluded the plastids ( Chromatoplasm ).
Like the nucleus, whether to include the vacuole in the protoplasm concept is controversial. [ 10 ]
Besides "protoplasm", many other related terms and distinctions were used for the cell contents over time. These were as follows: [ 11 ] [ 12 ]
The word "protoplasm" comes from the Greek protos for first , and plasma for thing formed , and was originally used in religious contexts. [ 39 ] It was used in 1839 by J. E. Purkinje for the material of the animal embryo. [ 15 ] [ 40 ] Later, in 1846 Hugo von Mohl redefined the term (also named as Primordialschlauch , "primordial utricle") to refer to the "tough, slimy, granular, semi-fluid" substance within plant cells, to distinguish this from the cell wall and the cell sap ( Zellsaft ) within the vacuole . [ 16 ] [ 41 ] [ 42 ] Max Schultze in 1861 proposed the "Protoplasm Doctrine" which states that all living cells are made of a living substance called Protoplasm . [ 43 ] Thomas Huxley (1869) later referred to it as the "physical basis of life" and considered that the property of life resulted from the distribution of molecules within this substance. [ 44 ] The protoplasm became an " epistemic thing ". [ 45 ] Its composition, however, was mysterious and there was much controversy over what sort of substance it was. [ 46 ]
In 1872, Beale created the vitalist term "bioplasm" to contrast with the materialism of Huxley. [ 24 ] [ 47 ] In 1880, the term protoplast was proposed by Hanstein for the entire cell, excluding the cell wall, [ 48 ] [ 49 ] and some authors like Julius von Sachs (1882) preferred that name instead of cell. [ 50 ]
In 1965, Lardy introduced the term " cytosol ", later redefined to refer to the liquid inside cells. [ 38 ]
By the time Huxley wrote, a long-standing debate over the fundamental unit of life, whether it was the cell or protoplasm, had largely been settled: by the late 1860s it had been resolved in favor of protoplasm. The cell was a container for protoplasm, the fundamental and universal material substance of life. Huxley's principal contribution was to establish protoplasm as incompatible with a vitalistic theory of life. [ 51 ] Attempts to investigate the origin of life through the creation of synthetic "protoplasm" in the laboratory were not successful. [ 52 ]
The idea that the protoplasm of eukaryotes is simply divisible into a ground substance called "cytoplasm" and a structural body called the cell nucleus reflects the more primitive knowledge of cell structure that preceded the development of electron microscopy, when it seemed that cytoplasm was a homogeneous fluid and the existence of most sub-cellular compartments, and how cells maintain their shape, was unknown. [ 53 ] Today, it is known that the cell contents are structurally very complex and contain multiple organelles, the cytoskeleton, and biomolecular condensates.
Protoplasm is physically translucent, granular, slimy, and semifluid or viscous; granules of different shapes and sizes are suspended in it. It may exist in two interchangeable states: a more liquid-like sol state and a more solid-like, jelly-like gel state. The constituent molecules are free to move in the sol state, while in the gel state they are compactly arranged. Protoplasm becomes opaque and coagulates when it is heated. It occurs everywhere in the cell. [ 43 ] In eukaryotes, the portion of protoplasm surrounding the cell nucleus is known as the cytoplasm and the portion inside the nucleus as the nucleoplasm. In prokaryotes the material inside the plasma membrane is the bacterial cytoplasm, while in Gram-negative bacteria the region outside the plasma membrane but inside the outer membrane is the periplasm. [ 4 ]
About 30 elements, such as carbon, hydrogen, oxygen, phosphorus, sulphur, and calcium, have been identified in the protoplasm of different cells. They form compounds such as water (65–80%), carbohydrates, ions, proteins, lipids, nucleic acids (DNA and RNA), fatty acids, glycerol, nucleotides, nucleosides, and minerals. These compounds are living only as long as they are part of protoplasm; they cannot perform the functions of life independently. The composition of protoplasm is not fixed, and continuous changes take place in it. [ 43 ]
Some functions of protoplasm are: | https://en.wikipedia.org/wiki/Protoplasm |
Protoplast (from Ancient Greek πρωτόπλαστος ( prōtóplastos ) ' first-formed ' ), is a biological term coined by Hanstein in 1880 to refer to the entire cell, excluding the cell wall. [ 1 ] [ 2 ] Protoplasts can be generated by stripping the cell wall from plant , [ 3 ] bacterial , [ 4 ] [ 5 ] or fungal cells [ 5 ] [ 6 ] by mechanical, chemical or enzymatic means.
Protoplasts differ from spheroplasts in that their cell wall has been completely removed. [ 4 ] [ 5 ] Spheroplasts retain part of their cell wall. [ 7 ] In the case of Gram-negative bacterial spheroplasts, for example, the peptidoglycan component of the cell wall has been removed but the outer membrane component has not. [ 4 ] [ 5 ]
Cell walls are made of a variety of polysaccharides . Protoplasts can be made by degrading cell walls with a mixture of the appropriate polysaccharide-degrading enzymes :
During and subsequent to digestion of the cell wall, the protoplast becomes very sensitive to osmotic stress. This means cell wall digestion and protoplast storage must be done in an isotonic solution to prevent rupture of the plasma membrane . [ citation needed ]
Protoplasts can be used to study membrane biology, including the uptake of macromolecules and viruses . These are also used in somaclonal variation .
Protoplasts are widely used for DNA transformation (for making genetically modified organisms ), since the cell wall would otherwise block the passage of DNA into the cell. [ 3 ] In the case of plant cells, protoplasts may be regenerated into whole plants first by growing into a group of plant cells that develops into a callus and then by regeneration of shoots ( caulogenesis ) from the callus using plant tissue culture methods. [ 8 ] Growth of protoplasts into callus and regeneration of shoots requires the proper balance of plant growth regulators in the tissue culture medium that must be customized for each species of plant. [ 9 ] [ 10 ] Unlike protoplasts from vascular plants , protoplasts from mosses , such as Physcomitrella patens , do not need phytohormones for regeneration, nor do they form a callus during regeneration . Instead, they regenerate directly into the filamentous protonema , mimicking a germinating moss spore. [ 11 ]
Protoplasts may also be used for plant breeding , using a technique called protoplast fusion . Protoplasts from different species are induced to fuse by using an electric field or a solution of polyethylene glycol . [ 12 ] This technique may be used to generate somatic hybrids in tissue culture. [ citation needed ]
Additionally, protoplasts of plants expressing fluorescent proteins in certain cells may be used for fluorescence-activated cell sorting (FACS), where only cells fluorescing at a selected wavelength are retained. Among other things, this technique is used to isolate specific cell types (e.g., guard cells from leaves, pericycle cells from roots) for further investigations, such as transcriptomics. [ citation needed ] | https://en.wikipedia.org/wiki/Protoplast |
A protosteroid or primordial fat [ 1 ] is a lipid precursor that can be transformed through subsequent biochemical reactions to finally become a steroid. [ 2 ] Protosteroids are biomarkers produced by ancient eukaryotes belonging to the microorganisms of the protosterol biota; they are intermediate compounds created by these eukaryotes while making crown sterol molecules. [ 3 ]
The German biochemist and Nobel laureate Konrad Emil Bloch was the first to suggest that, instead of today's sterols, earlier life forms could have used chemical intermediates in their cells. He called these intermediates protosteroids. [ 4 ] Later researchers synthesized protosteroids called lanosterol, cycloartenol, and 24-methylene cycloartenol. Researchers from the Australian National University and the University of Bremen [ 5 ] then found protosteroids in rocks that formed 1.6 billion years ago in the Barney Creek Formation in Northern Australia. The researchers also found derivatives matching the pattern produced by 24-methylene cycloartenol in 1.3-billion-year-old rocks. [ 6 ]
This paleontology article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Protosteroid |
In pharmacology and pharmaceutics, a prototype drug is an individual drug that represents a drug class, a group of medications having similar chemical structures, mechanisms of action, and modes of action. Prototypes are the most important, and typically the first-developed, drugs within the class, and are used as a reference to which all other drugs are compared. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Prototype_drug |
In mathematics , the Prouhet–Tarry–Escott problem asks for two disjoint multisets A and B of n integers each, whose first k power sum symmetric polynomials are all equal.
That is, the two multisets should satisfy $\sum _{a\in A}a^{i}=\sum _{b\in B}b^{i}$ for each integer i from 1 to a given k. It has been shown that n must be strictly greater than k. Solutions with $k=n-1$ are called ideal solutions. Ideal solutions are known for $3\leq n\leq 10$ and for $n=12$. No ideal solution is known for $n=11$ or for $n\geq 13$. [ 1 ]
This problem was named after Eugène Prouhet , who studied it in the early 1850s, and Gaston Tarry and Edward B. Escott, who studied it in the early 1910s. The problem originates from letters of Christian Goldbach and Leonhard Euler (1750/1751).
An ideal solution for n = 6 is given by the two sets {0, 5, 6, 16, 17, 22} and {1, 2, 10, 12, 20, 21}, because the sums of their i-th powers agree for i = 1 through 5 (66, 1090, 19998, 385234, and 7632966, respectively).
For n = 12, an ideal solution is given by A = {±22, ±61, ±86, ±127, ±140, ±151} and B = {±35, ±47, ±94, ±121, ±146, ±148}. [ 2 ]
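Both quoted solutions can be confirmed with a short numeric check (this is merely a verification of the sets above, not part of the cited sources):

```python
def equal_power_sums(A, B, k):
    return all(sum(a ** i for a in A) == sum(b ** i for b in B)
               for i in range(1, k + 1))

# n = 6 ideal solution (k = 5)
assert equal_power_sums([0, 5, 6, 16, 17, 22], [1, 2, 10, 12, 20, 21], 5)

# n = 12 ideal solution (k = 11); the +- notation expands to both signs
A = [s * v for v in (22, 61, 86, 127, 140, 151) for s in (1, -1)]
B = [s * v for v in (35, 47, 94, 121, 146, 148) for s in (1, -1)]
assert equal_power_sums(A, B, 11)
print("both solutions are ideal")
```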
Prouhet used the Thue–Morse sequence to construct a solution with $n=2^{k}$ for any $k$. Namely, partition the numbers from 0 to $2^{k+1}-1$ into (a) the numbers each with an even number of ones in its binary expansion and (b) the numbers each with an odd number of ones in its binary expansion; the two sets of the partition then give a solution to the problem. [ 3 ] For instance, for $n=8$ and $k=3$, Prouhet's solution is the pair of sets {0, 3, 5, 6, 9, 10, 12, 15} and {1, 2, 4, 7, 8, 11, 13, 14}, whose sums of i-th powers agree for i = 1, 2, 3 (60, 620, and 7200, respectively).
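Prouhet's construction is straightforward to reproduce; a minimal sketch:

```python
def prouhet_partition(k):
    # Split 0 .. 2^(k+1)-1 by the parity of the number of 1-bits
    # (i.e., by the Thue-Morse sequence).
    evil, odious = [], []
    for m in range(2 ** (k + 1)):
        (evil if bin(m).count("1") % 2 == 0 else odious).append(m)
    return evil, odious

A, B = prouhet_partition(3)   # the n = 8, k = 3 example above
print(A)                      # [0, 3, 5, 6, 9, 10, 12, 15]
print(B)                      # [1, 2, 4, 7, 8, 11, 13, 14]
assert all(sum(a ** i for a in A) == sum(b ** i for b in B)
           for i in range(1, 4))
```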
A higher-dimensional version of the Prouhet–Tarry–Escott problem was introduced and studied by Andreas Alpers and Robert Tijdeman in 2007: Given parameters $n,k\in \mathbb {N}$, find two different multisets $\{(x_{1},y_{1}),\dots ,(x_{n},y_{n})\}$ and $\{(x_{1}',y_{1}'),\dots ,(x_{n}',y_{n}')\}$ of points from $\mathbb {Z} ^{2}$ such that $\sum _{i=1}^{n}x_{i}^{d-j}y_{i}^{j}=\sum _{i=1}^{n}(x_{i}')^{d-j}(y_{i}')^{j}$ for all $d,j\in \{0,\dots ,k\}$ with $j\leq d$. This problem is related to discrete tomography and also leads to special Prouhet–Tarry–Escott solutions over the Gaussian integers (though solutions to the Alpers–Tijdeman problem do not exhaust the Gaussian integer solutions to Prouhet–Tarry–Escott).
A solution for $n=6$ and $k=5$ is given, for instance, by:
No solutions for $n=k+1$ with $k\geq 6$ are known. | https://en.wikipedia.org/wiki/Prouhet–Tarry–Escott_problem |
In mathematics, the Prouhet–Thue–Morse constant, named for Eugène Prouhet, Axel Thue, and Marston Morse, is the number—denoted by τ—whose binary expansion 0.01101001100101101001011001101001... is given by the Prouhet–Thue–Morse sequence. That is, $\tau =\sum _{n=0}^{\infty }{\frac {t_{n}}{2^{n+1}}}=0.412454033640\ldots$ where $t_{n}$ is the $n$th element of the Prouhet–Thue–Morse sequence.
The Prouhet–Thue–Morse constant can also be expressed, without using $t_{n}$, as an infinite product: [ 1 ] $\tau ={\frac {1}{4}}\left[2-\prod _{n=0}^{\infty }\left(1-{\frac {1}{2^{2^{n}}}}\right)\right]$
This formula is obtained by substituting x = 1/2 into the generating series for $t_{n}$.
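Both expressions are easy to evaluate numerically; the sketch below assumes the product form stated above and compares it with the defining bit sum:

```python
def tau_from_bits(n_terms=60):
    # t_n = parity of the number of 1-bits of n (Thue-Morse sequence)
    return sum((bin(n).count("1") % 2) / 2 ** (n + 1)
               for n in range(n_terms))

def tau_from_product(n_factors=6):
    prod = 1.0
    for n in range(n_factors):
        prod *= 1 - 2.0 ** -(2 ** n)   # factors converge doubly fast
    return (2 - prod) / 4

print(tau_from_bits())      # 0.4124540336401...
print(tau_from_product())   # agrees to double precision
```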
The continued fraction expansion of the constant is [0; 2, 2, 2, 1, 4, 3, 5, 2, 1, 4, 2, 1, 5, 44, 1, 4, 1, 2, 4, 1, …] (sequence A014572 in the OEIS )
Yann Bugeaud and Martine Queffélec showed that infinitely many partial quotients of this continued fraction are 4 or 5, and infinitely many partial quotients are greater than or equal to 50. [ 2 ]
The Prouhet–Thue–Morse constant was shown to be transcendental by Kurt Mahler in 1929. [ 3 ]
He also showed that the number $\sum _{n=0}^{\infty }t_{n}\alpha ^{n}$ is also transcendental for any algebraic number α, where 0 < | α | < 1.
Yann Bugeaud proved that the Prouhet–Thue–Morse constant has an irrationality measure of 2. [ 4 ]
The Prouhet–Thue–Morse constant appears in probability. If a language L over {0, 1} is chosen at random, by flipping a fair coin to decide whether each word w is in L, the probability that it contains at least one word for each possible length is [ 5 ] $\prod _{n=0}^{\infty }\left(1-{\frac {1}{2^{2^{n}}}}\right)=2-4\tau \approx 0.35018.$
This number theory -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Prouhet–Thue–Morse_constant |
The Prout is an obsolete unit of energy, whose value is: 1 Prout = 2.9638 × 10⁻¹⁴ J
This is equal to one twelfth of the binding energy of the deuteron, or about 185.5 keV. The unit is named after William Prout. "Proutons" was an early candidate for the name of what are now called protons.
This article about energy , its collection, its distribution, or its uses is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Prout_(unit) |
Provability logic is a modal logic , in which the box (or "necessity") operator is interpreted as 'it is provable that'. The point is to capture the notion of a proof predicate of a reasonably rich formal theory , such as Peano arithmetic .
There are a number of provability logics, some of which are covered in the literature mentioned in § References . The basic system is generally referred to as GL (for Gödel – Löb ) or L or K4W ( W stands for well-foundedness ). It can be obtained by adding the modal version of Löb's theorem to the logic K (or K4 ).
Namely, the axioms of GL are all tautologies of classical propositional logic plus all formulas of one of the following forms: the distribution axiom □(A → B) → (□A → □B), and Löb's axiom □(□A → A) → □A.
And the rules of inference are modus ponens (from A → B and A, conclude B) and necessitation (from A, conclude □A).
The GL model was pioneered by Robert M. Solovay in 1976. Since then, until his death in 1996, the prime inspirer of the field was George Boolos . Significant contributions to the field have been made by Sergei N. Artemov , Lev Beklemishev, Giorgi Japaridze , Dick de Jongh , Franco Montagna, Giovanni Sambin, Vladimir Shavrukov, Albert Visser and others.
Interpretability logics and Japaridze's polymodal logic present natural extensions of provability logic.
This logic -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Provability_logic |
Provirus silencing, or proviral silencing, is the repression of expression of proviral genes in cells.
A provirus is a viral DNA that has been incorporated into the chromosome of a host cell, often by retroviruses such as HIV. [ 1 ] Endogenous retroviruses are always in the provirus state in the host cell and replicate through reverse transcription. By integrating their genome into the host cell genome, they make use of the host cell's transcription and translation machinery to achieve their own propagation, often with harmful effects on the host. However, in recent gene therapy techniques, retroviruses are often used to deliver desired genes instead of their own viral genome into the host genome. Researchers are therefore interested in the host cell's mechanisms for silencing such gene expression, to find out, firstly, how the host cell manages provirus transcription to eliminate the deleterious effects of retroviruses, and secondly, how researchers can ensure stable and long-term expression of retrovirus-mediated gene transfer.
It has been found that the level of transcription of integrated retroviruses depends on both genetic factors and chromatin remodeling at the site of integration. Mechanisms such as DNA methylation and histone modification seem to play important roles in the suppression of provirus transcription, such that proviral activity can be silenced. [ 2 ] The location of integration also plays a crucial role in the level of silencing that is observed, for example whether integration occurs in H3K4me3 regions, areas of the genome wrapped around histone H3 proteins that are tri-methylated at the fourth lysine residue. It has been reported that the manipulation or insertion of CpG dinucleotide islands can lead to the disruption of proviral silencing. [ 3 ] [ 4 ] [ 5 ] Silencing frequently begins with the binding of a zinc finger DNA-binding protein to the primer binding site, targeting the expression of the provirus itself rather than attempting to remove the sequence. [ 6 ] The protein then recruits other enzymes that complete the silencing through DNA or histone methylation.
However, studies within the field do note that the patterns are species-specific with regards to the virus in question, thus caution should be taken when attempting to generalize to all cases. Additionally, many studies focus on proviral silencing within murine embryonic cells as opposed to human cells. Some researchers also posit that proviral silencing may be more complex than a simple question of whether the virus is repressed or not. [ 6 ] They suggest that proviruses played more of a role with transcriptional regulation as they integrated and evolved with the host sequence over time, occasionally serving as promoters or enhancers.
It has been shown that the orientation of proviruses can have dramatic effects on their expression. [ 7 ] With regard to HIV-1, the viral genome is frequently inserted into the introns of active genes. Perhaps unsurprisingly, when the viral genome is oriented in the same direction as the host gene, expression is increased; conversely, proviruses oriented in the opposite direction show reduced expression. This produces challenges for effective therapeutics because it can lead to large variations in detectability, complicating physicians' efforts to manage HIV latency. HIV reservoirs, cells that are infected with HIV but are not actively producing viral particles, contribute further to this problem. CD4+ T cells are considered to be the main reservoir and are reported to have a half-life of over three years. [ 8 ] While these cells temporarily silence the expression of HIV, the result is a condition that is essentially impossible to eradicate.
Additionally, DNA methylation has been linked to aging and geriatric disease. Increases in DNA methylation have been linked to diseases including various types of cancer, Alzheimer's disease, type 2 diabetes, and cardiovascular disease. [ 9 ] From a proviral-silencing standpoint this makes logical sense, as individuals naturally accumulate more proviruses over their lifetimes. It does pose a slight concern: as groups research the utility of DNA methylation clocks to predict age, there is a risk that treatments which alter DNA methylation with the goal of reducing biological age inadvertently increase proviral expression in patients. It must also be emphasized that most of the work in this field is correlational rather than causal.
The expression of transgenes is often hindered by mechanisms associated with proviral silencing. This naturally proves to be an issue when attempting to create longer-lasting gene therapies or transgenic cell lines. Most methods center around choosing a specific locus of integration.
Recently, researchers have demonstrated that targeted integration of a lentiviral payload using homology-directed repair can result in stable integration and expression. In this approach, CRISPR-associated ribonucleoprotein complexes (CRISPR RNP complexes) are used to create double-stranded breaks upstream of an endogenously promoted essential gene. [ 10 ] The payload is designed so that the transgene is flanked by two regions of DNA that are homologous to the regions upstream of the gene, enabling it to integrate in the same reading frame as the gene. This approach is similar to other strategies that seek to integrate in areas that are less susceptible to silencing through more mechanistic methods. | https://en.wikipedia.org/wiki/Provirus_silencing |
A provitamin is a substance that may be converted within the body to a vitamin . [ 1 ] The term previtamin is a synonym. [ 2 ]
The term "provitamin" is used when it is desirable to label a substance with little or no vitamin activity, but which can be converted to an active form by normal metabolic processes. [ citation needed ]
Some provitamins are:
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Provitamin |
A proximate cause is an event which is closest to, or immediately responsible for causing , some observed result. This exists in contrast to a higher-level ultimate cause (or distal cause ) which is usually thought of as the "real" reason something occurred.
The concept is used in many fields of research and analysis, including data science and ethology .
In most situations, an ultimate cause may itself be a proximate cause in comparison to a further ultimate cause. Hence we can continue the above example as follows:
Although the behavior in these two examples is the same, the explanations are based on different sets of factors incorporating evolutionary versus physiological factors.
These can be further divided, for example proximate causes may be given in terms of local muscle movements or in terms of developmental biology (see Tinbergen's four questions ).
In analytic philosophy , notions of cause adequacy are employed in the causal model . In order to explain the genuine cause of an effect, one would have to satisfy adequacy conditions, which include, among others, the ability to distinguish between:
One famous example of the importance of this is the Duhem–Quine thesis , which demonstrates that it is impossible to test a hypothesis in isolation, because an empirical test of the hypothesis requires one or more background assumptions. One way to solve this issue is to employ contrastive explanations. Several philosophers of science, such as Lipton , argue that contrastive explanations are able to detect genuine causes. [ 1 ] An example of a contrastive explanation is a cohort study that includes a control group, where one can determine the cause from observing two otherwise identical samples. This view also circumvents the problem of infinite regression of "why" questions that proximate causes create.
Sociologists use the related pair of terms "proximal causation" and "distal causation".
Proximal causation : explanation of human social behaviour by considering the immediate factors, such as symbolic interaction , understanding (Verstehen) , and individual milieu that influence that behaviour. Most sociologists recognize that proximal causality is the first type of power humans experience; however, while factors such as family relationships may initially be meaningful, they are not as permanent, underlying, or determining as other factors such as institutions and social networks (Naiman 2008: 5).
Distal causation : explanation of human social behaviour by considering the larger context in which individuals carry out their actions. Proponents of the distal view of power argue that power operates at a more abstract level in the society as a whole (e.g. between economic classes) and that "all of us are affected by both types of power throughout our lives" (ibid). Thus, while individuals occupy roles and statuses relative to each other, it is the social structure and institutions in which these exist that are the ultimate cause of behaviour. A human biography can only be told in relation to the social structure, yet it also must be told in relation to unique individual experiences in order to reveal the complete picture (Mills 1959). | https://en.wikipedia.org/wiki/Proximate_and_ultimate_causation |
Proximity communication is a Sun Microsystems technology for wireless chip-to-chip communication, developed in part by Robert Drost and Ivan Sutherland. The research was done as part of the DARPA High Productivity Computing Systems project.
Proximity communication replaces wires with capacitive coupling and promises a significant increase in communication speed between chips in an electronic system, among other benefits. [ 1 ] The work was partially funded by a $50 million award from the Defense Advanced Research Projects Agency.
Compared with traditional area ball bonding, proximity communication operates at a scale one order of magnitude smaller, so it can be two orders of magnitude denser (in terms of number of connections per pin) than ball bonding. The technique requires very good alignment between chips and very small gaps (2–3 micrometers) between the transmitting (Tx) and receiving (Rx) parts, which can be upset by thermal expansion, vibration, dust, etc.
The chip transmitter consists (according to a presentation slide) of a large 32×32 array of very small Tx micropads, a 4×4 array of larger Rx micropads (each four times the size of a Tx micropad), and two linear arrays of 14 X-vernier and 14 Y-vernier pads.
Proximity communication can be used with 3D packing of chips in a Multi-Chip Module (MCM), allowing several MCMs to be connected without sockets or wires.
Speed was up to 1.35 Gbit/s per channel in tests of 16-channel systems, with a bit error rate (BER) below 10⁻¹². Static power is 3.6 mW/channel; dynamic power is 3.9 pJ/bit.
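A small arithmetic sketch of what these figures imply in aggregate, assuming all 16 channels run at the full per-channel rate:

```python
channels = 16
rate_bps = 1.35e9                 # 1.35 Gbit/s per channel
dyn_energy_j_per_bit = 3.9e-12    # 3.9 pJ/bit
static_w_per_channel = 3.6e-3     # 3.6 mW/channel

aggregate_gbps = channels * rate_bps / 1e9
dynamic_w = rate_bps * dyn_energy_j_per_bit

print(f"aggregate bandwidth: {aggregate_gbps:.1f} Gbit/s")   # 21.6
print(f"dynamic power: {dynamic_w * 1e3:.1f} mW/channel")    # ~5.3
print(f"total: {(dynamic_w + static_w_per_channel) * 1e3:.1f} mW/channel")
```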
This article about wireless technology is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Proximity_communication |
Proximity effect or Holm–Meissner effect is a term used in the field of superconductivity to describe phenomena that occur when a superconductor (S) is placed in contact with a "normal" (N) non-superconductor. Typically the critical temperature $T_{c}$ of the superconductor is suppressed, and signs of weak superconductivity are observed in the normal material over mesoscopic distances. The proximity effect has been known since the pioneering work of R. Holm and W. Meissner, [ 1 ] who observed zero resistance in SNS pressed contacts, in which two superconducting metals are separated by a thin film of a non-superconducting (i.e. normal) metal. The discovery of the supercurrent in SNS contacts is sometimes mistakenly attributed to Brian Josephson's 1962 work, yet the effect was known long before his publication and was understood as the proximity effect. [ 2 ]
Electrons in the superconducting state of a superconductor are ordered in a very different way than in a normal metal, i.e. they are paired into Cooper pairs . Furthermore, electrons in a material cannot be said to have a definitive position because of the momentum-position complementarity . In solid state physics one generally chooses a momentum space basis, and all electron states are filled with electrons until the Fermi surface in a metal, or until the gap edge energy in the superconductor.
Because of the nonlocality of the electrons in metals, the properties of those electrons cannot change infinitely quickly. In a superconductor, the electrons are ordered as superconducting Cooper pairs; in a normal metal, the electron order is gapless (single-electron states are filled up to the Fermi surface ). If the superconductor and normal metal are brought together, the electron order in the one system cannot infinitely abruptly change into the other order at the border. Instead, the paired state in the superconducting layer is carried over to the normal metal, where the pairing is destroyed by scattering events, causing the Cooper pairs to lose their coherence. For very clean metals, such as copper , the pairing can persist for hundreds of microns.
Conversely, the (gapless) electron order present in the normal metal is also carried over to the superconductor in that the superconducting gap is lowered near the interface.
The microscopic model describing this behavior in terms of single electron processes is called Andreev reflection . It describes how electrons in one material take on the order of the neighboring layer by taking into account interface transparency and the states (in the other material) from which the electrons can scatter.
As a contact effect, the proximity effect is closely related to thermoelectric phenomena like the Peltier effect or the formation of pn junctions in semiconductors. The proximity-effect enhancement of $T_{c}$ is largest when the normal material is a metal with a large diffusivity rather than an insulator (I). Proximity-effect suppression of $T_{c}$ in a spin-singlet superconductor is largest when the normal material is ferromagnetic, as the presence of the internal magnetic field weakens superconductivity (Cooper pair breaking).
The study of S/N, S/I and S/S' (S' is lower superconductor) bilayers and multilayers has been a particularly active area of superconducting proximity effect research. The behavior of the compound structure in the direction parallel to the interface differs from that perpendicular to the interface. In type II superconductors exposed to a magnetic field parallel to the interface, vortex defects will preferentially nucleate in the N or I layers and a discontinuity in behavior is observed when an increasing field forces them into the S layers. In type I superconductors, flux will similarly first penetrate N layers. Similar qualitative changes in behavior do not occur when a magnetic field is applied perpendicular to the S/I or S/N interface. In S/N and S/I multilayers at low temperatures, the long penetration depths and coherence lengths of the Cooper pairs will allow the S layers to maintain a mutual, three-dimensional quantum state. As temperature is increased, communication between the S layers is destroyed resulting in a crossover to two-dimensional behavior. The anisotropic behavior of S/N, S/I and S/S' bilayers and multilayers has served as a basis for understanding the far more complex critical field phenomena observed in the highly anisotropic cuprate high-temperature superconductors .
Recently the Holm–Meissner proximity effect was observed in graphene by the Morpurgo research group. [ 3 ] The experiments were done on nanometer-scale devices made of single graphene layers with superimposed superconducting electrodes made of 10 nm titanium and 70 nm aluminum films. Aluminum is a superconductor and is responsible for inducing superconductivity into graphene. The distance between the electrodes was in the range between 100 nm and 500 nm. The proximity effect is manifested by observations of a supercurrent, i.e. a current flowing through the graphene junction with zero voltage across the junction. By using the gate electrodes, the researchers showed that the proximity effect occurs both when the carriers in the graphene are electrons and when the carriers are holes. The critical current of the devices was above zero even at the Dirac point.
It has also been shown that a quantum vortex with a well-defined core can exist in a rather thick normal metal that has been proximized by a superconductor. [ 4 ] | https://en.wikipedia.org/wiki/Proximity_effect_(superconductivity) |
Enzyme-catalyzed proximity labeling ( PL ), also known as proximity-based labeling , is a laboratory technique that labels biomolecules , usually proteins or RNA , proximal to a protein of interest. [ 1 ] By creating a gene fusion in a living cell between the protein of interest and an engineered labeling enzyme, biomolecules spatially proximal to the protein of interest can then be selectively marked with biotin for pulldown and analysis. Proximity labeling has been used for identifying the components of novel cellular structures and for determining protein-protein interaction partners, among other applications. [ 2 ]
Before the development of proximity labeling, determination of protein proximity in cells relied on studying protein-protein interactions through methods such as affinity purification-mass spectrometry and proximity ligation assays . [ 3 ]
DamID is a method developed in 2000 by Steven Henikoff for identifying parts of the genome proximal to a chromatin protein of interest. DamID relies on a DNA methyltransferase fusion to the chromatin protein to non-naturally methylate DNA, which can then be sequenced to reveal genome methylation sites near the protein. [ 4 ] Researchers were guided by the fusion-protein strategy of DamID to create a method for site-specific labeling of protein targets, culminating in the creation of the biotin protein labeling-based BioID in 2012. [ 1 ] Alice Ting and the Ting lab at Stanford University have engineered several proteins that demonstrate improvements in biotin-based proximity labeling efficacy and speed. [ 5 ] [ 6 ] [ 7 ] [ 8 ]
Proximity labeling relies on a labeling enzyme that can biotinylate nearby biomolecules promiscuously. Biotin labeling can be achieved through several different methods, depending on the species of labeling enzyme.
To label proteins nearby a protein of interest, a typical proximity labeling experiment begins by cellular expression of an APEX2 fusion to the protein of interest, which localizes to the protein of interest's native environment. Cells are next incubated with biotin-phenol, then briefly with hydrogen peroxide, initiating biotin-phenol free radical generation and labeling. To minimize cellular damage, the reaction is then quenched using an antioxidant buffer. Cells are lysed and the labeled proteins are pulled down with streptavidin beads. The proteins are digested with trypsin , and finally the resulting peptidic fragments are analyzed using shotgun proteomics methods such as LC-MS/MS or SPS-MS 3 . [ 8 ]
If instead a protein fusion is not genetically accessible (such as in human tissue samples) but an antibody for the protein of interest is known, proximity labeling can still be enabled by fusing a labeling enzyme with the antibody, then incubating the fusion with the sample. [ 9 ] [ 10 ]
Proximity labeling methods have been used to study the proteomes of biological structures that are otherwise difficult to isolate purely and completely, such as cilia , [ 11 ] mitochondria , [ 6 ] postsynaptic clefts , [ 2 ] p-bodies , stress granules , [ 12 ] and lipid droplets . [ 13 ]
Fusion of APEX2 with G-protein coupled receptors (GPCRs) allows for both tracking GPCR signaling at a 20-second temporal resolution [ 14 ] and also identification of unknown GPCR-linked proteins. [ 15 ]
Proximity labeling has also been used for transcriptomics and interactomics. In 2019, Alice Ting and the Ting lab used APEX to identify RNA localized to specific cellular compartments. [ 16 ] [ 17 ] Also in 2019, BioID was tethered to the beta-actin mRNA transcript to study its localization dynamics. [ 18 ] Proximity labeling has also been used to find interaction partners of heterodimeric protein phosphatases, of the miRISC (microRNA-induced silencing complex) protein Ago2, and of ribonucleoproteins. [ 3 ]
TurboID-based proximity labeling has been used to identify regulators of a receptor involved in the innate immune response , a NOD-like receptor . [ 19 ] BioID-based proximity labeling has been used to identify the molecular composition of breast cancer cell invadopodia , which are important for metastasis. [ 20 ] Biotin-based proximity labeling studies demonstrate increased protein tagging of intrinsically disordered regions , suggesting that biotin-based proximity labeling can be used to study the roles of IDRs. [ 21 ] A photosensitizer nucleus-targeted small molecule has also been developed for photoactivatable proximity labeling. [ 22 ]
A newer frontier in the field of proximity labeling exploits photocatalysis to achieve high spatial and temporal resolution of proximal protein microenvironments. [23] This photocatalytic technology leverages the photonic energy of iridium-based photocatalysts to activate diazirine probes that tag proximal proteins within a tight radius of about four nanometers. [24] The technology was developed by the Merck Exploratory Science Center in collaboration with researchers at Princeton University [24] and was later spun out into InduPro, a biotech company founded in 2022 by three Merck scientists, including Rob Oslund and Niyi Fadeyi, who co-invented the mapping technology. [25][26] | https://en.wikipedia.org/wiki/Proximity_labeling |
Proximity ligation assay (in situ PLA) is a technology that extends the capabilities of traditional immunoassays to the direct detection of proteins, protein interactions, extracellular vesicles and post-translational modifications with high specificity and sensitivity. [1][2] Protein targets can be readily detected and localized with single-molecule resolution and objectively quantified in unmodified cells and tissues. Using only a few cells, the assay reveals sub-cellular events in situ, even transient or weak interactions, and allows sub-populations of cells to be differentiated. Within hours, results from conventional co-immunoprecipitation and co-localization techniques can be confirmed. [3]
Two primary antibodies raised in different species recognize the target antigen on the proteins of interest (Figure 1). Secondary antibodies (2° Ab) directed against the constant regions of the different primary antibodies, called PLA probes, bind to the primary antibodies (Figure 2).
Each of the PLA probes has a short sequence specific DNA strand attached to it. If the PLA probes are in proximity (that is, if the two original proteins of interest are in proximity, or part of a protein complex, as shown in the figures), the DNA strands can participate in rolling circle DNA synthesis upon addition of two other sequence-specific DNA oligonucleotides together with appropriate substrates and enzymes (Figure 3).
The DNA synthesis reaction results in several-hundredfold amplification of the DNA circle. Next, fluorescent-labeled complementary oligonucleotide probes are added, and they bind to the amplified DNA (Figure 4). The resulting high concentration of fluorescence is easily visible as a distinct bright spot when viewed with a fluorescence microscope . [ 4 ] In the specific case shown (Figure 5), the nucleus is enlarged because this is a B-cell lymphoma cell. The two proteins of interest are a B cell receptor and MYD88 . The finding of interaction in the cytoplasm was interesting because B cell receptors are thought of as being located in the cell membrane. [ 5 ]
PLA as described above has been used to study aspects of animal development [6][7] and breast cancer, [8] among many other topics. In situ proximity ligation assays (isPLA) have been applied to antibody validation in human tissues, with various advantages over IHC, including increased detection specificity, decreased unspecific staining, and better localization. [9] A variation of the technique (rISH-PLA) has been used to study the association of protein and RNA. [10] Another variation of in situ PLA is a multiplex PLA assay that makes it possible to visualize multiple protein complexes in parallel. [11] PLA can also be combined with other readout formats such as ELISA, [12] flow cytometry, [13][14] and Western blotting. [15] | https://en.wikipedia.org/wiki/Proximity_ligation_assay |
Prp24 (precursor RNA processing, gene 24) is a protein that takes part in pre-messenger RNA splicing and aids the binding of U6 snRNA to U4 snRNA during the formation of spliceosomes. Found in eukaryotes from yeast and other fungi to humans, Prp24 was initially discovered to be an important element of RNA splicing in 1989. [1][2] Mutations in Prp24 were later discovered in 1991 to suppress mutations in U4 that resulted in cold-sensitive strains of yeast, indicating its involvement in the reformation of the U4/U6 duplex after the catalytic steps of splicing. [3]
The process of spliceosome formation involves the U4 and U6 snRNPs associating and forming a di-snRNP in the cell nucleus . This di-snRNP then recruits another member ( U5 ) to become a tri-snRNP. U6 must then dissociate from U4 to bond with U2 and become catalytically active. Once splicing has been done, U6 must dissociate from the spliceosome and bond back with U4 to restart the cycle.
Prp24 has been shown to promote the binding of U4 and U6 snRNPs. Removing Prp24 results in the accumulation of free U4 and U6, and the subsequent addition of Prp24 regenerates U4/U6 and reduces the amount of free U4 and U6. [ 4 ] Naked U6 snRNA is very compact and has little room to form base pairs with other RNA. However, when U6 snRNP associates with proteins such as Prp24, the structure is much more open, thus facilitating the binding to U4. [ 5 ] Prp24 is not present in the U6/U4 duplex itself, and it has been suggested that Prp24 must leave the complex in order for proper base pairs to be formed. [ 6 ] [ 7 ] It has also been suggested that Prp24 may play a role in destabilizing U4/U6 in order for U6 to pair bases with U2. [ 8 ]
Prp24 has a molecular weight of 50 kDa and has been shown to contain four RNA recognition motifs (RRMs) and a conserved 12-amino-acid sequence at the C-terminus. [9][10] RRMs 1 and 2 have been shown to be important for high-affinity binding of U6, while RRMs 3 and 4 bind at lower-affinity sites on U6. [11] The first three RRMs interact extensively with each other and adopt canonical folds containing a four-stranded beta-sheet and two alpha-helices. The electropositive surface of RRMs 1 and 2 is an RNA annealing domain, while the cleft between RRMs 1 and 2, including the beta-sheet face of RRM2, is a sequence-specific RNA binding site. [1] The C-terminal motif is required for association with LSm proteins and contributes to substrate (U6) binding rather than to the catalytic rate of splicing. [10]
Prp24 interacts with the U6 snRNA via its RRMs. It has been shown through chemical modification testing that nucleotides 39–57 of U6 (40–43 in particular) [ 5 ] are involved in binding Prp24. [ 12 ]
The LSm proteins are in a consistent configuration on the U6 RNA. [ 9 ] [ clarification needed ] It has been proposed that the LSm proteins and Prp24 interact both physically and functionally [ 6 ] and the C-terminal motif of Prp24 is important for this interaction. [ 10 ] The binding of Prp24 to U6 is enhanced by the binding of Lsm proteins to U6, as is binding of U4 and U6. [ 13 ] It was revealed by electron microscopy that Prp24 may interact with the LSm protein ring at LSm2. [ 9 ]
Prp24 has a human homolog, SART3. SART3 is a tumor rejection antigen (the name stands for "squamous cell carcinoma antigen recognized by T cells 3"). The RRMs 1 and 2 in yeast are similar to RRMs in human SART3. [1][11] The C-terminal domain is also highly conserved from yeast to humans. [14] This protein, like Prp24, interacts with the LSm proteins [9][15] for the recycling of U6 into the U4/U6 snRNP. It has been proposed that SART3 targets U6 to a Cajal body or a nuclear inclusion as the site of assembly of the U4/U6 snRNP. [15] SART3 is located on chromosome 12, and a mutation in it is likely the cause of disseminated superficial actinic porokeratosis. [16] | https://en.wikipedia.org/wiki/Prp24 |
Integration is the basic operation in integral calculus . While differentiation has straightforward rules by which the derivative of a complicated function can be found by differentiating its simpler component functions, integration does not, so tables of known integrals are often useful. This page lists some of the most common antiderivatives .
A compilation of a list of integrals (Integraltafeln) and techniques of integral calculus was published by the German mathematician Meier Hirsch [ de ] (also spelled Meyer Hirsch) in 1810. [ 1 ] These tables were republished in the United Kingdom in 1823. More extensive tables were compiled in 1858 by the Dutch mathematician David Bierens de Haan for his Tables d'intégrales définies , supplemented by Supplément aux tables d'intégrales définies in ca. 1864. A new edition was published in 1867 under the title Nouvelles tables d'intégrales définies .
These tables, which contain mainly integrals of elementary functions, remained in use until the middle of the 20th century. They were then replaced by the much more extensive tables of Gradshteyn and Ryzhik . In Gradshteyn and Ryzhik, integrals originating from the book by Bierens de Haan are denoted by BI.
Not all closed-form expressions have closed-form antiderivatives; this study forms the subject of differential Galois theory, which was initially developed by Joseph Liouville in the 1830s and 1840s, leading to Liouville's theorem which classifies which expressions have closed-form antiderivatives. A simple example of a function without a closed-form antiderivative is {\textstyle e^{-x^{2}}}, whose antiderivative is (up to constants) the error function.
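For concreteness (standard definitions, supplied here rather than taken from this extract): with the usual normalization {\textstyle \operatorname {erf} (x)={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt}, one has {\displaystyle \int e^{-x^{2}}\,dx={\frac {\sqrt {\pi }}{2}}\,\operatorname {erf} (x)+C.}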
Since 1968 there has been the Risch algorithm for determining indefinite integrals that can be expressed in terms of elementary functions, typically using a computer algebra system. Integrals that cannot be expressed using elementary functions can be manipulated symbolically using general functions such as the Meijer G-function.
More detail may be found on the dedicated pages listing integrals of particular classes of functions.
Gradshteyn, Ryzhik, Geronimus, Tseytlin, Jeffrey, Zwillinger, and Moll's (GR) Table of Integrals, Series, and Products contains a large collection of results. An even larger, multivolume table is the Integrals and Series by Prudnikov, Brychkov, and Marichev (volumes 1–3 list integrals and series of elementary and special functions; volumes 4–5 are tables of Laplace transforms). More compact collections can be found in, e.g., Brychkov, Marichev, and Prudnikov's Tables of Indefinite Integrals, or as chapters in Zwillinger's CRC Standard Mathematical Tables and Formulae or Bronshtein and Semendyayev's Guide Book to Mathematics, Handbook of Mathematics or Users' Guide to Mathematics, and other mathematical handbooks.
Other useful resources include Abramowitz and Stegun and the Bateman Manuscript Project . Both works contain many identities concerning specific integrals, which are organized with the most relevant topic instead of being collected into a separate table. Two volumes of the Bateman Manuscript are specific to integral transforms.
There are several web sites which have tables of integrals and integrals on demand. Wolfram Alpha can show results, and for some simpler expressions, also the intermediate steps of the integration. Wolfram Research also operates another online service, the Mathematica Online Integrator.
C is used for an arbitrary constant of integration that can only be determined if something about the value of the integral at some point is known. Thus, each function has an infinite number of antiderivatives .
These formulas only state in another form the assertions in the table of derivatives .
When there is a singularity in the function being integrated such that the antiderivative becomes undefined at some point (the singularity), then C does not need to be the same on both sides of the singularity. The forms below normally assume the Cauchy principal value around a singularity in the value of C, but this is not necessary in general. For instance, in {\displaystyle \int {1 \over x}\,dx=\ln \left|x\right|+C} there is a singularity at 0 and the antiderivative becomes infinite there. If the integral above were to be used to compute a definite integral between −1 and 1, one would get the wrong answer 0. This however is the Cauchy principal value of the integral around the singularity. If the integration is done in the complex plane the result depends on the path around the origin; in this case the singularity contributes −iπ when using a path above the origin and iπ for a path below the origin. A function on the real line could use a completely different value of C on either side of the origin as in: [2] {\displaystyle \int {1 \over x}\,dx=\ln |x|+{\begin{cases}A&{\text{if }}x>0;\\B&{\text{if }}x<0.\end{cases}}}
The power function {\textstyle x^{n}} has a non-integrable singularity at 0 for n ≤ −1.
Let f be a continuous function that has at most one zero. If f has a zero, let g be the unique antiderivative of f that is zero at the root of f; otherwise, let g be any antiderivative of f. Then {\displaystyle \int \left|f(x)\right|\,dx=\operatorname {sgn}(f(x))g(x)+C,} where sgn(x) is the sign function, which takes the values −1, 0, 1 when x is respectively negative, zero or positive.
This can be proved by computing the derivative of the right-hand side of the formula, taking into account that the condition on g is here for insuring the continuity of the integral.
This gives the following formulas (where a ≠ 0 ), which are valid over any interval where f is continuous (over larger intervals, the constant C must be replaced by a piecewise constant function):
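The table of formulas referred to above does not survive in this extract. A representative entry, derivable from the preceding rule by taking g(x) = (ax + b)²/(2a) (which vanishes at the zero of ax + b), is: {\displaystyle \int |ax+b|\,dx=\operatorname {sgn}(ax+b)\,{\frac {(ax+b)^{2}}{2a}}+C={\frac {(ax+b)\,|ax+b|}{2a}}+C.}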
If the function f does not have any continuous antiderivative which takes the value zero at the zeros of f (this is the case for the sine and the cosine functions), then sgn( f ( x )) ∫ f ( x ) dx is an antiderivative of f on every interval on which f is not zero, but may be discontinuous at the points where f ( x ) = 0 . For having a continuous antiderivative, one has thus to add a well chosen step function . If we also use the fact that the absolute values of sine and cosine are periodic with period π , then we get:
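The resulting formulas are likewise missing from this extract; reconstructed under the stated convention (they are straightforward to verify by differentiating on each interval and checking continuity at the endpoints), they read: {\displaystyle \int |\sin x|\,dx=2\left\lfloor {\frac {x}{\pi }}\right\rfloor -\cos \!\left(x-\pi \left\lfloor {\frac {x}{\pi }}\right\rfloor \right)+C,\qquad \int |\cos x|\,dx=2\left\lfloor {\frac {x}{\pi }}+{\frac {1}{2}}\right\rfloor +\sin \!\left(x-\pi \left\lfloor {\frac {x}{\pi }}+{\frac {1}{2}}\right\rfloor \right)+C.}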
Ci , Si : Trigonometric integrals , Ei : Exponential integral , li : Logarithmic integral function , erf : Error function
There are some functions whose antiderivatives cannot be expressed in closed form . However, the values of the definite integrals of some of these functions over some common intervals can be calculated. A few useful integrals are given below.
If the function f has bounded variation on the interval [a, b], then the method of exhaustion provides a formula for the integral: {\displaystyle \int _{a}^{b}f(x)\,dx=(b-a)\sum _{n=1}^{\infty }\sum _{m=1}^{2^{n}-1}(-1)^{m+1}\,2^{-n}\,f\!\left(a+m(b-a)2^{-n}\right).}
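A quick numerical sanity check of this series (our sketch, not from the source; the helper name and the truncation depth max_n are arbitrary choices):

```python
def exhaustion_integral(f, a, b, max_n=16):
    """Approximate the integral of f over [a, b] by truncating the
    method-of-exhaustion double series at max_n outer terms."""
    total = 0.0
    for n in range(1, max_n + 1):
        scale = 2.0 ** -n
        for m in range(1, 2 ** n):
            total += (-1) ** (m + 1) * scale * f(a + m * (b - a) * scale)
    return (b - a) * total

# The integral of x^2 over [0, 1] is 1/3; the truncated series agrees closely.
print(exhaustion_integral(lambda x: x * x, 0.0, 1.0))  # ~0.3333
```

The partial sums telescope to dyadic Riemann sums, so the truncation error shrinks with the grid spacing 2^(-max_n).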
The "sophomore's dream": {\displaystyle \int _{0}^{1}x^{-x}\,dx=\sum _{n=1}^{\infty }n^{-n}\;(=1.29128\,59970\,6266\dots ),\qquad \int _{0}^{1}x^{x}\,dx=-\sum _{n=1}^{\infty }(-n)^{-n}\;(=0.78343\,05107\,1213\dots ),} attributed to Johann Bernoulli. | https://en.wikipedia.org/wiki/Prudnikov,_Brychkov_and_Marichev |
The Prusa Mini , stylized as the Original Prusa MINI , is an open-source fused deposition modeling 3D printer that is manufactured by the Czech company Prusa Research . [ 1 ] [ 2 ] The printer is the lowest cost machine produced by Prusa Research and is designed as a first printer or as part of a 'print farm'. [ 1 ] [ 3 ] [ 4 ]
The Prusa Mini was officially launched in October 2019. [5] The printer is available either assembled or as a kit. The build volume is 180 × 180 × 180 mm, and the print is performed on a spring steel sheet which is meant to be easy to remove. [2] Minimum layer resolution is 50 micrometers, and the maximum travel speed is 200 millimeters per second. The printer has a non-touch color LCD display and can print from USB drives. It has a custom 32-bit mainboard and a built-in online firmware updater. The printer has sensorless homing using Trinamic 2209 drivers and a custom hot end which supports E3D nozzles. [6][7]
It has several safety features including three thermistors to detect thermal runaway.
In November 2020, the Prusa Mini was replaced by the Mini+, which featured a few small updates meant to ease assembly and maintenance. [ 8 ] One of the changes was a new mesh bed levelling sensor called "SuperPINDA" which replaced the previous "MINDA" sensor, and it is claimed by the manufacturer that this should result in a more consistent calibration of the first print layer in particular. [ citation needed ] The Mini+ filament sensor is an optional extra.
In September 2023, Prusa Research announced that upcoming Mini and Mini+ firmware would include network remote management using the PrusaConnect service, and input shaping for faster printing with no physical changes to the printer needed. [ 9 ]
The printer is the first open source hardware product to require a user wishing to run unsigned firmware to physically break off a piece of the PCB, voiding the printer's warranty, before such firmware can be flashed onto the board. [10] This is intended to reduce Prusa's liability should someone create custom firmware with the potential to cause harm (such as disabling thermal runaway protections or other safety features). [10]
Prusa Mini was selected as The Best 3D Printer by The Wirecutter in 2021, and continued to feature until 2023. [ 11 ] | https://en.wikipedia.org/wiki/Prusa_Mini |
The Prusa i3 is a family of fused filament fabrication 3D printers, manufactured by Czech company Prusa Research under the trademarked name Original Prusa i3. Part of the RepRap project, Prusa i3 printers were called the most used 3D printer in the world in 2016. [1] The first Prusa i3 was designed by Josef Průša in 2012, and was released as a commercial kit product in 2015. The latest model (Prusa MK4S, on sale as of August 2024) is available in both kit and factory-assembled versions. The Prusa i3's comparatively low cost and ease of construction and modification made it popular in education and with hobbyists and professionals, with the Prusa i3 model MK2 printer receiving several awards in 2016. [2]
The i3 series is released under an open source license, which has led to many other companies and individuals producing variants and clones of the design. The i3 moniker refers to the printer being the third iteration of the design. [3] It was used up through the Prusa i3 MK3 and its variants but was dropped from the subsequent model, the Prusa MK4.
First conceived in 2009, RepRap Mendel 3D printers were designed to be assembled from 3D printed parts and commonly available off-the-shelf components (referred to as "vitamins," as they cannot be produced by the printer itself). [ 4 ] [ 5 ] These parts include threaded rods , leadscrews , smooth rods and bearings, screws, nuts, stepper motors , control circuit boards , and a "hot end" to melt and place thermoplastic materials. [ 6 ] A Cartesian mechanism permits placement of material anywhere in a cubic volume; this design has continued throughout development of the i3 series. The flat "print bed" (the surface on which parts are printed) is movable in one axis (Y), while two horizontal and two vertical rods permit tool motion in two axes, designated X and Z respectively.
Josef Průša was a core developer of the RepRap project who had previously developed a PCB heated "print bed". He adapted and simplified the RepRap Mendel design, reducing the time to print 3D plastic parts from 20 to 10 hours, changing to the use of two Z-axis motors to simplify the frame, and including 3D printed bushings in place of regular bearings. [ 7 ] [ 8 ] First announced in September 2010, the printer was dubbed Prusa Mendel by Průša himself. [ 9 ] According to the RepRap wiki, "Prusa Mendel is the Ford Model T of 3D printers." [ 10 ] [ 11 ]
Průša streamlined his Mendel design, releasing "Prusa Iteration 2" in November 2011. Parts changes allowed for snap-fit assembly (no glue required); fewer tools were needed to construct and maintain this version. Although not required, fine-pitch manufactured pulleys and LM8UU linear bearings were recommended over printed equivalents for "professional" results. [ 12 ] [ 13 ]
It was clear to the RepRap community that the threaded-rod, triangular Z axis frame construction was limited in strength and stability, and that it would be necessary for the printer's footprint to grow substantially for the maximum printing height to increase. Chris Palmer (posting as "Nophead") created "Mendel90" in December 2011, a printer using a gantry-style MDF frame. [ 14 ] [ 15 ] [ 16 ] It improved printing speed and accuracy by replacing the upper supports on the Mendel frame (which were easily skewed or twisted out of alignment if not properly tightened) with a rigid frame cut from solid sheet material, assembled as one structural and two mechanical planes at 90 degree angles from one another. Prusa's two Z-axis motors were moved from floating mounts at the top to being fixed to the bottom of the vertical frame, and a Dibond composite panel made for a rigid mounting plate for the heated bed.
In May 2012, Průša released a major redesign, focused on ease of construction and use, and no longer structured around the simplest available common hardware as previous RepRap printers were. [ 17 ] The Prusa i3 used a rigid, single-piece water jet cut aluminium vertical frame with a large opening for the printing area and hard mounting points for the Z-axis components, similar to the Mendel90. A second frame piece served as a lightweight mount for the heated bed. Rather than having a baseplate, Prusa retained the M10 threaded rods to support the heated bed Y-axis. It used a single piece, food safe stainless steel hot end called the Prusa Nozzle which printed with 3 mm filament, and used M5 threaded rods as lead screws instead of M8. [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ]
In May 2015, Průša released an i3 full kit under the brand name "Original Prusa i3". [ 1 ] After finding that 1.75 mm filament was far more common than 3 mm, Průša dropped the Prusa Nozzle and redesigned the extruder around a third-party hot end, the E3D V6-Lite. Noting that print quality was much improved, he introduced the new i3 1.75 mm version in August 2015. [ 23 ] [ 24 ] The original model was retrospectively dubbed MK0 or " Mark zero," while the new model came to be known as the "MK1."
Průša released the Prusa i3 MK2 in May 2016. It was the first hobby 3D printer with mesh bed leveling and automatic geometry skew correction for all three axes. Features included a larger build volume, custom stepper motors with integrated lead screws, a non-contact inductive sensor for auto-leveling, and a rewritten version of the Marlin firmware . [ 25 ] [ 26 ] [ 27 ] Other new features include a polyetherimide print surface, Rambo controller board and an E3D V6 Full hotend. [ 28 ] [ 29 ] The Prusa MK2 became the first RepRap printer to be supported by Windows 10 Plug-and-Play USB ID. [ 30 ]
In March 2017, Průša announced on his blog that the revised Prusa i3 MK2S would ship in place of the Prusa i3 MK2. [ 31 ] Enhancements cited include U-bolts to hold the LM8UU bearings where cable ties had been used, higher quality bearings and rods, an improved mount for the inductance sensor, improved cable management, and a new electronics cover. An upgrade kit was offered to owners of the MK2 to add these improvements.
In September 2017, Prusa i3 MK3 was released, marketed as "bloody smart." [ 32 ] Starting with this model, the base and Y axis were assembled with aluminum extrusion, eliminating the last of the structural threaded rods from the Mendel design. Included were a new extruder with dual Bondtech drive-gears, quieter fans with RPM monitoring, faster print speeds, an updated bed leveling sensor, a new electronics board named "Einsy", quieter stepper motors with 128 step microstepping drivers and a magnetic heatbed with interchangeable PEI -coated steel sheets. [ 33 ] Electrical components were updated to work with the new 24 volt power supply . The printer also offers dedicated sockets to connect Raspberry Pi Zero W running a fork of the open source OctoPrint software for wireless printing.
Ease-of-use features included a filament detector, allowing the printer to load filament when it is inserted, and to pause printing if the filament is jammed or runs out; error-correcting stepper motor drivers preventing layer shifts due to skipped steps; and recovery after power outages . The ambient temperature sensor both confirms suitable environment temperature and detects overheated electrical connections on the main board.
Existing MK2 and MK2S users were offered a $199 partial upgrade named MK2.5, limited to features which are cheaper to upgrade. [ 34 ] After negative feedback from the community, Prusa made available a more expensive $500 MK2S to MK3 full upgrade. [ 32 ] [ 35 ]
In February 2019, Prusa i3 MK3S was released, along with the Multi Material Upgrade 2S (MMU2S), which allows selecting any of 5 different materials for printing together automatically. [ 36 ] MK3S changes include a simplified opto-mechanical filament sensor, improved print cooling, and easier access to service the extruder. [ 37 ]
Prusa made a running change to the Prusa i3 MK3S+ starting in November 2020. [38] This model has a revised bed leveling sensor and minor parts changes.
In March 2023 the company announced the Prusa MK4 and the Multi Material Unit version 3 (MMU3). [ 39 ] This model features a new version of their "Nextruder" extruder system first seen on the Prusa XL, no-adjustment load cell bed leveling, a modular replaceable all-metal hot end, a color touchscreen , and die-cast [ 40 ] aluminum frame, Y-carriage (heat bed support), and extruder frame. [ 41 ] The 32-bit main processor board includes additional safety and monitoring circuits, a network connector, a port for the MMU3, and a Wi-fi module. This is Prusa's first Mendel-based design to include support for local and cloud monitoring and support.
The switch to 0.9-degree stepper motors, together with the addition of input shaping and pressure advance, allows the Mendel-style design to print faster while avoiding ringing artifacts and other undesirable patterns imposed on the object being made, even though it does not have the advantages of the box-like structure of CoreXY printers. [42] However, Průša has stated that print quality, not maximum speed, is their design goal. There is a provision for an accelerometer, often used in 3D printing for self-tuning of input shaping, but that component is not included in the final design.
When announced, software for input shaping, and sensor data collection were not finished, and the Multi Material Unit was not ready for release. Upgrade kits for earlier models likewise were not available for shipping. On February 5, 2024 upgrade kits to MK3.5 for the MK3 began shipping. [ 43 ] [ 44 ] Touch screen operation was not formally enabled until April 2024. [ 45 ]
In August 2024 Prusa released the Prusa MK4S along with upgrade kits for owners of previous Prusa i3 printers. [46] The MK4S brought incremental improvements over the MK4, including better part cooling and faster print speeds.
Following the MK3S, Prusa introduced other models such as the Prusa SL1 (an SLA printer), the Prusa Mini (with a cantilever arm), the Prusa XL (using a CoreXY motion system inside a full-frame structure) and the Prusa Core One, which also uses a CoreXY motion system. These printers, with the exception of the Prusa Core One, are not iterations of the Mendel frame design.
With all aspects of the design freely available under open source and open hardware terms, companies and individuals around the world have produced Prusa i3 copies, variants, and upgrades in assembled and kit form, with thousands offered for sale as early as 2015. [ 47 ] [ 48 ] [ 49 ] Rather than compete directly with these, Prusa Research's strategy is to pursue continual refinement of its designs. [ 50 ]
All Prusa i3 models use 3D printing filament as feedstock to make parts.
Like other RepRap printers the Prusa i3 is capable of creating many of its own parts, with the designs freely available for repairs, replication, and redesign. Formerly these were printed in ABS plastic; Prusa Research later switched to mostly PETG printed parts, with ASA used for higher temperatures near the nozzle. [ 64 ] As of 2024 Prusa uses PETG with PCCF, a high-temperature polycarbonate blend with carbon fibers , on several printers including the MK4/MK4S. [ 65 ] Prusa maintains a "print farm" of 600 3D printers (as of October 2021) to manufacture the plastic parts for Original Prusa branded products, [ 66 ] [ 67 ] with select injection molded parts added to speed production.
When the Prusa i3 design was first introduced in 2012, RepRap printers frequently used Open Hardware controllers such as an Arduino Mega combined with an Arduino shield providing the remaining circuitry, such as the RAMPS board. [ 68 ] All-in-one versions such as the RAMBo board were becoming available. [ 69 ] As a commercial product, Original Prusa i3 up to MK2 used Mini-Rambo. MK3 versions switched to Einsy Rambo boards to provide desired features such as quieter operation. [ 70 ] The MK4 uses xBuddy, the first 32-bit board used in the i3 series. [ 71 ]
All Original Prusa products use Marlin 3D printing firmware. [ 72 ] [ 73 ] [ 74 ]
When extruding the first layer, the print head must be a precise distance away from the print bed for proper adhesion. Many 3D printers rely on the user to complete this process by adjusting the height of the bed at several locations ("bed leveling"). To automate this process, Prusa i3 models from the MK2 (2016) onward have a sensor called PINDA (Prusa INDuction Autoleveling [75]) to detect the height of the print bed at different locations and then adjust for it when printing ("auto-leveling"). [76]
The PINDA series requires an electronic Z-height adjustment that may vary for different heat bed surfaces or different nozzles. The load cell sensor introduced with the MK4 automatically compensates for variations in nozzle size and in the thickness and expansion of the heated bed surface, eliminating stored settings for this purpose.
The distinguishing feature of the i3 from its predecessors is the vertical frame, which can take many forms. These include single sheet frames cut from steel or acrylic, box frames from plywood or medium-density fibreboard , and Lego . [ 78 ] [ 79 ] [ 80 ] [ 81 ] Inexpensive aluminum extrusion is commonly used, both by printer enthusiasts and by manufacturers of "clone" i3 printers. [ 82 ] [ 83 ] Some mass market i3 derivatives, such as the Creality Ender 3, use rollers against the extruded frame itself instead of precision rods and bearings to reduce cost and complexity.
Beyond the standard Prusa i3 filament extruders, others have created aftermarket extruders and enthusiast tool heads, including a MIG welder and a laser cutter. [ 84 ] [ 85 ] [ 86 ] Prusa offered a collection of functional cooking tools and programs under the name "MK3 Master Chef Upgrade" as an April Fools' Day gag in 2018. [ 87 ] | https://en.wikipedia.org/wiki/Prusa_i3 |
The now-defunct Prydniprovsky Chemical Plant ( Ukrainian : Придніпровський хімічний завод, ПХЗ ; Prydniprovsky khimichnyi zavod , PHZ , also PChP ) in the city of Kamianske , Ukraine , processed uranium ore for the Soviet nuclear program from 1948 through 1991, preparing yellowcake .
Its processing wastes are now stored in nine open-air dumping grounds containing about 36 million tonnes of sand-like low-radioactive residue, occupying an area of 2.5 million square meters. The sites, improperly constructed from the very beginning, were abandoned by the industry long ago and remain in very poor condition. The top concern is the dumps' proximity to both the large Dnieper River and the city's residential areas. According to government experts, the dams separating the grounds from soil water are already leaking, polluting the Dnieper basin. It is believed that further deterioration of the dams, irrespective of any outside accidents, may cause a devastating radioactive mudslide. The Ukrainian government is now tightening control over the grounds and seeking international aid for projects aimed at securing and gradually reprocessing the PHZ wastes. Recently, the International Atomic Energy Agency has evaluated the condition of the sites and is considering dispatching a major observation and aid mission to Kamianske. [1]
From 1946 to 1972, the company was engaged in uranium ore processing (production of uranium oxide concentrate); the plant processed 65% of the uranium ores in the Soviet Union. Attempts to recycle fuel elements began in 1974, but due to the growing number of oncological diseases in the city, this idea was abandoned. [2]
The isolated dump grounds (about nine altogether, at a depth of 3 m) of the former plant are now located in different parts of the city and operated by the specially created "Barrier" State Enterprise, a new name that is not yet widely known. The sites, the company, and the whole problem are therefore still commonly referred to as the "Prydniprovsky Chemical Plant (PHZ) wastes".
In 1964, the first treatment facilities appeared at the enterprise. In 2003, the Cabinet of Ministers approved an 11-year program on "bringing hazardous facilities of the Prydniprovsky Chemical Plant to an environmentally safe state and ensuring protection of the population from the harmful effects of ionizing radiation". [ 3 ]
| https://en.wikipedia.org/wiki/Prydniprovsky_Chemical_Plant_radioactive_dumps |
Przybylski's Star (pronounced /pʃɪˈbɪlskiːz/ or /ʃɪˈbɪlskiːz/), or HD 101065, is a rapidly oscillating Ap star roughly 356 light-years (109 parsecs) from the Sun in the southern constellation of Centaurus. It has a unique spectrum showing over-abundances of exotic elements, including most rare-earth elements, but under-abundances of more common elements such as iron.
This star was possibly first seen by Benjamin Apthorp Gould on April 29, 1873, and catalogued as the 10th star of Zone 257 with right ascension of 11 h 31 m 32.89 s and declination of −46° 01′ 08″ (at epoch 1875.0) and apparent magnitude of 8.5. [ 15 ]
In 1961, the Polish-Australian astronomer Antoni Przybylski discovered that this star had a peculiar spectrum that would not fit into the standard framework for stellar classification . [ 16 ] [ 17 ] Przybylski's observations indicated unusually low amounts of iron and nickel in the star's spectrum , but higher amounts of unusual elements such as strontium , holmium , niobium , scandium , yttrium , caesium , neodymium , praseodymium , thorium , ytterbium , and uranium . In fact, at first Przybylski doubted that iron was present in the spectrum at all. Modern work shows that the iron group elements are somewhat below normal in abundance, but it is clear that the lanthanides and other exotic elements are highly over-abundant. [ 4 ]
There have been many attempts to assign a conventional spectral class to this star. The Henry Draper Catalogue gives a class of B5. More detailed analysis when the unusual nature of the star was discovered estimated a class of F8 or G0. Later studies gave classes of F0 or F5 to G0. [ 5 ] It is considered likely to be a main sequence star with a temperature somewhat hotter than the Sun , but with its spectral lines strongly blanketed by the extreme abundances of certain metals. [ 18 ] A catalogue of chemically peculiar stars gives the type F3 Ho, indicating an Ap star with an approximate spectral class of F3 and strong holmium lines. [ 6 ] Compared to neighboring stars, HD 101065 has a high peculiar velocity of 23.8 ± 1.9 km/s . [ 19 ]
With a mass of about 1.5 M ☉ and an age of around 1.5 billion years, HD 101065 is calculated to be right at the end of its main sequence life. It shines with a bolometric luminosity of about 5.6 L ☉ at an effective temperature of 6,131 K. It has a very slow projected rotational velocity for a hot main sequence star of just 3.5 km/s. Observations of its magnetic field suggest a possible rotation period of about 188 years, although this is considered a minimum possible value. [4] A metallicity index ([Fe/H]) of −2.40 has been published, corresponding to overall metal levels of only about 0.4% of the Sun's, but this single value does not adequately represent the chemical makeup shown in the star's unique spectrum. Levels of some other metals as derived from the spectrum are thousands of times higher than in the Sun. [11] Also, because the chemical peculiarities of Ap stars in general are largely due to stratification of elements allowed by very slow rotation, the published metallicity also might not represent the proportion of heavy elements in the whole star. [4]
HD 101065 is the prototype star of the rapidly oscillating Ap star (roAP) variable star class. In 1978, it was discovered to pulsate photometrically with a period of 12.15 min . [ 20 ]
A potential companion had also been detected, a 14th-magnitude star (in infrared) 8 arc seconds away. This could have meant a separation of just 1,000 AU (0.02 light-years); [ 21 ] however, Gaia Data Release 2 suggests that while those two stars appear as separated by a very close angle, the actual distance from this second star to Earth is 890 ± 90 light-years , which is more than twice the distance to Przybylski's Star. [ 22 ]
Przybylski's star has occasionally attracted attention as a SETI candidate [ 23 ] insofar as it aligns with speculation that a technological species may salt the photosphere of its star with unusual elements, either to signal its presence to other civilizations [ 24 ] [ 25 ] or to dispose of nuclear waste . [ 26 ] | https://en.wikipedia.org/wiki/Przybylski's_Star |
In mathematics , the Prékopa–Leindler inequality is an integral inequality closely related to the reverse Young's inequality , the Brunn–Minkowski inequality and a number of other important and classical inequalities in analysis . The result is named after the Hungarian mathematicians András Prékopa and László Leindler . [ 1 ] [ 2 ]
Let 0 < λ < 1 and let f , g , h : R n → [0, +∞) be non-negative real-valued measurable functions defined on n-dimensional Euclidean space R n . Suppose that these functions satisfy {\displaystyle h{\bigl (}(1-\lambda )x+\lambda y{\bigr )}\geq f(x)^{1-\lambda }\,g(y)^{\lambda }} for all x and y in R n . Then {\displaystyle \int _{\mathbb {R} ^{n}}h(x)\,dx\geq \left(\int _{\mathbb {R} ^{n}}f(x)\,dx\right)^{1-\lambda }\left(\int _{\mathbb {R} ^{n}}g(x)\,dx\right)^{\lambda }.}
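As a simple worked instance (our illustration, not part of the source): take n = 1, any λ in (0, 1), and f = g = h the Gaussian {\textstyle e^{-x^{2}/2}}. The hypothesis holds because squaring is convex, {\displaystyle {\bigl (}(1-\lambda )x+\lambda y{\bigr )}^{2}\leq (1-\lambda )x^{2}+\lambda y^{2}\quad \Longrightarrow \quad h{\bigl (}(1-\lambda )x+\lambda y{\bigr )}\geq f(x)^{1-\lambda }g(y)^{\lambda },} and the conclusion then holds with equality, both sides being {\textstyle {\sqrt {2\pi }}}.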
Recall that the essential supremum of a measurable function f : R n → R is defined by {\displaystyle \mathop {\mathrm {ess\,sup} } _{x\in \mathbb {R} ^{n}}f(x)=\inf \left\{t\in [-\infty ,+\infty ]:f(x)\leq t{\text{ for almost all }}x\in \mathbb {R} ^{n}\right\}.}
This notation allows the following essential form of the Prékopa–Leindler inequality: let 0 < λ < 1 and let f , g ∈ L 1 ( R n ; [0, +∞)) be non-negative absolutely integrable functions. Let {\displaystyle s(x)=\mathop {\mathrm {ess\,sup} } _{y\in \mathbb {R} ^{n}}f\left({\frac {x-y}{1-\lambda }}\right)^{1-\lambda }g\left({\frac {y}{\lambda }}\right)^{\lambda }.} Then s is measurable and {\displaystyle \|s\|_{1}\geq \|f\|_{1}^{1-\lambda }\,\|g\|_{1}^{\lambda }.}
The essential supremum form was given by Herm Brascamp and Elliott Lieb . [ 3 ] Its use can change the left side of the inequality. For example, a function g that takes the value 1 at exactly one point will not usually yield a zero left side in the "non-essential sup" form but it will always yield a zero left side in the "essential sup" form.
It can be shown that the usual Prékopa–Leindler inequality implies the Brunn–Minkowski inequality in the following form: if 0 < λ < 1 and A and B are bounded, measurable subsets of R n such that the Minkowski sum (1 − λ)A + λB is also measurable, then {\displaystyle \mu {\bigl (}(1-\lambda )A+\lambda B{\bigr )}\geq \mu (A)^{1-\lambda }\,\mu (B)^{\lambda },}
where μ denotes n-dimensional Lebesgue measure. Hence, the Prékopa–Leindler inequality can also be used [4] to prove the Brunn–Minkowski inequality in its more familiar form: if 0 < λ < 1 and A and B are non-empty, bounded, measurable subsets of R n such that (1 − λ)A + λB is also measurable, then {\displaystyle \mu {\bigl (}(1-\lambda )A+\lambda B{\bigr )}^{1/n}\geq (1-\lambda )\,\mu (A)^{1/n}+\lambda \,\mu (B)^{1/n}.}
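A quick one-dimensional sanity check of both forms (our example, not from the source): take A = [0, 1], B = [0, 2] and λ = 1/2, so that (1 − λ)A + λB = [0, 3/2]. Then {\displaystyle \mu \left({\tfrac {1}{2}}A+{\tfrac {1}{2}}B\right)={\tfrac {3}{2}}\geq \mu (A)^{1/2}\mu (B)^{1/2}={\sqrt {2}},\qquad {\tfrac {3}{2}}\geq {\tfrac {1}{2}}\mu (A)+{\tfrac {1}{2}}\mu (B)={\tfrac {3}{2}},} with the second (more familiar, n = 1) form holding with equality, as expected for intervals.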
The Prékopa–Leindler inequality is useful in the theory of log-concave distributions, as it can be used to show that log-concavity is preserved by marginalization and by independent summation of log-concave distributed random variables. Since, if X and Y are independent with probability density functions f and g respectively, f ⋆ g is the density of X + Y, it also follows that the convolution of two log-concave functions is log-concave.
Suppose that H(x, y) is a log-concave distribution for (x, y) ∈ R m × R n , so that by definition we have {\displaystyle H{\bigl (}(1-\lambda )(x_{1},y_{1})+\lambda (x_{2},y_{2}){\bigr )}\geq H(x_{1},y_{1})^{1-\lambda }\,H(x_{2},y_{2})^{\lambda },\qquad (1)}
and let M(y) denote the marginal distribution obtained by integrating over x: {\displaystyle M(y)=\int _{\mathbb {R} ^{m}}H(x,y)\,dx.\qquad (2)}
Let y 1 , y 2 ∈ R n and 0 < λ < 1 be given. Then equation (2) satisfies condition (1) with h(x) = H(x, (1 − λ)y 1 + λy 2 ), f(x) = H(x, y 1 ) and g(x) = H(x, y 2 ), so the Prékopa–Leindler inequality applies. It can be written in terms of M as {\displaystyle M{\bigl (}(1-\lambda )y_{1}+\lambda y_{2}{\bigr )}\geq M(y_{1})^{1-\lambda }\,M(y_{2})^{\lambda },}
which is the definition of log-concavity for M .
To see how this implies the preservation of log-concavity by independent sums, suppose that X and Y are independent random variables with log-concave distributions. Since the product of two log-concave functions is log-concave, the joint distribution of (X, Y) is also log-concave. Log-concavity is preserved by affine changes of coordinates, so the distribution of (X + Y, X − Y) is log-concave as well. Since the distribution of X + Y is a marginal over the joint distribution of (X + Y, X − Y), we conclude that X + Y has a log-concave distribution.
The Prékopa–Leindler inequality can be used to prove results about concentration of measure.
Theorem [citation needed] Let {\textstyle A\subseteq \mathbb {R} ^{n}}, and set {\textstyle A_{\epsilon }=\{x:d(x,A)<\epsilon \}}. Let {\textstyle \gamma (x)} denote the standard Gaussian pdf, and {\textstyle \mu } its associated measure. Then {\textstyle \mu (A_{\epsilon })\geq 1-{\frac {e^{-\epsilon ^{2}/4}}{\mu (A)}}}.
The proof of this theorem goes by way of the following lemma:
Lemma In the notation of the theorem, {\textstyle \int _{\mathbb {R} ^{n}}\exp(d(x,A)^{2}/4)\,d\mu \leq 1/\mu (A)}.
This lemma can be proven from Prékopa–Leindler by taking {\textstyle h(x)=\gamma (x)}, {\textstyle f(x)=e^{\frac {d(x,A)^{2}}{4}}\gamma (x)}, {\textstyle g(x)=1_{A}(x)\gamma (x)} and {\textstyle \lambda =1/2}. To verify the hypothesis of the inequality, {\textstyle h({\frac {x+y}{2}})\geq {\sqrt {f(x)g(y)}}}, note that we only need to consider {\textstyle y\in A}, in which case {\textstyle d(x,A)\leq \|x-y\|}. This allows us to calculate, for {\textstyle y\in A}: {\displaystyle f(x)g(y)=e^{d(x,A)^{2}/4}\gamma (x)\gamma (y)\leq e^{\|x-y\|^{2}/4}\,(2\pi )^{-n}e^{-(\|x\|^{2}+\|y\|^{2})/2}=(2\pi )^{-n}e^{-\|x+y\|^{2}/4}=\gamma \!\left({\tfrac {x+y}{2}}\right)^{2},} so that {\textstyle {\sqrt {f(x)g(y)}}\leq h({\tfrac {x+y}{2}})}, as required.
Since {\textstyle \int h(x)\,dx=1}, the PL-inequality immediately gives the lemma.
To conclude the concentration inequality from the lemma, note that on {\textstyle \mathbb {R} ^{n}\setminus A_{\epsilon }}, {\textstyle d(x,A)>\epsilon }, so we have {\textstyle \int _{\mathbb {R} ^{n}}\exp(d(x,A)^{2}/4)\,d\mu \geq (1-\mu (A_{\epsilon }))\exp(\epsilon ^{2}/4)}. Applying the lemma and rearranging proves the result. | https://en.wikipedia.org/wiki/Prékopa–Leindler_inequality |
The Prévost reaction is a chemical reaction in which an alkene is converted by iodine and the silver salt of benzoic acid to a vicinal diol with anti stereochemistry. [ 1 ] [ 2 ] [ 3 ] The reaction was discovered by the French chemist Charles Prévost (1899–1983).
The reaction between silver benzoate (1) and iodine is very fast and produces a very reactive iodonium benzoate intermediate (2). The reaction of the iodonium species (2) with an alkene gives another short-lived iodonium salt (3). Nucleophilic substitution (S N 2) by the benzoate anion gives the ester (4). Another silver ion causes the neighboring-group substitution of the benzoate ester to give the oxonium salt (5). A second S N 2 substitution by the benzoate anion gives the desired diester (6).
In the final step hydrolysis of the ester groups gives the anti-diol. This outcome is the opposite of that of the related Woodward cis-hydroxylation which gives syn addition . | https://en.wikipedia.org/wiki/Prévost_reaction |
In mathematics, the Prüfer manifold or Prüfer surface is a 2-dimensional Hausdorff real analytic manifold that is not paracompact . It was introduced by Radó (1925) and named after Heinz Prüfer .
The Prüfer manifold can be constructed as follows ( Spivak 1979 , appendix A). Take an uncountable number of copies X a of the plane, one for each real number a , and take a copy H of the upper half plane (of pairs ( x , y ) with y > 0). Then glue the open upper half of each plane X a to the upper half plane H by identifying ( x , y )∈ X a for y > 0 with the point ( a + yx , y ) in H . The resulting quotient space Q is the Prüfer manifold. The images in Q of the points (0,0) of the spaces X a under identification form an uncountable discrete subset. | https://en.wikipedia.org/wiki/Prüfer_manifold |
The moss millipede ( Psammodesmus bryophorus ) is a keeled millipede of the family Platyrhacidae native to Colombia . It was described in 2011, and with several species of symbiotic moss found growing on its dorsal surface, it is the first millipede known with epizoic plants . [ 1 ] [ 2 ] [ 3 ]
At least 10 species of bryophytes belonging to families Pilotrichaceae , Lejeuneaceae , Fissidentaceae , Metzgeriaceae and Leucomiaceae have been found to grow on the millipede's dorsum; [ 2 ] these plants are believed to camouflage the millipede as its cuticle provides a stable substrate.
Adult moss millipedes have 19 body segments, each with a pair of wide keels; the coloration of their dorsum ranges from dark brown to black, having two light-colored stripes on the prozonites and metatergites of segments 2-19. [ 4 ] The edges of the paranota are white and the legs, antennae and ventral surface of the trunk are reddish brown.
P. bryophorus is found in the Río Ñambí Natural Reserve, a transitional Andean-Pacific forest in southwest Colombia, [4] preferring tree trunks and leaves about 1 m above the ground; however, it can also be found between the leaf litter and the soil surface. [2]
| https://en.wikipedia.org/wiki/Psammodesmus_bryophorus |
Psammon (from Greek "psammos", "sand" [1]) is the community of organisms inhabiting moist coastal sands, buried in the sediment. Psammon is a part of the aquatic biota, along with periphyton, plankton, nekton, and benthos. [2] Psammon is also sometimes considered a part of benthos due to its near-bottom distribution. [3] The term psammon is commonly used with reference to freshwater bodies such as lakes. [2][4] | https://en.wikipedia.org/wiki/Psammon |
Psammophory is a strategy by which certain plants armor themselves with sand on their body parts, lowering the chance of being eaten by animals. Psammophory occurs in plants of the genus Psammophora, which have a viscous mucus on the surface of their leaves, to which sand particles stick. Over 200 species of plants from 88 genera in 34 families have been identified as psammophorous. [1] This adaptive mechanism is used not only by plants but also by some insects.
In plants, psammophory allows the formation of a protective layer of sand on stems and leaves. This layer may reduce the likelihood of damage to the plant by herbivores and insects. [2] In insects, this mechanism also has a defensive function: some insects, such as certain species of beetles, can actively coat their bodies with sand or dust, which makes them less visible to predators and provides an additional layer of protection. Among plants, the phenomenon is also characteristic of Crassula alpestris.
The term was first proposed in 1989 by scientists studying the habits of the beetle Georissus, which actively covers its elytra with sand or mud particles. It was further documented in various studies from the University of California, Davis. [3]
This phenomenon is often associated with the beetle family Georissidae as a whole. However, so far the ability has been documented only in the following species: Georissus crenulatus, Georissus canalifer, Georissus californicus and Georissus pusillus.
A similar term, “ psammophore ,” refers to a formation of bristles and hairs on the underside of the head of some ants and wasps, which serves to carry small particles of soil, sand, small seeds, and eggs. [ 4 ] | https://en.wikipedia.org/wiki/Psammophory |
A psammophyte is a plant that grows in sandy and often unstable soils. Psammophytes are commonly found growing on beaches , deserts , and sand dunes . Because they thrive in these challenging or inhospitable habitats , psammophytes are considered extremophiles , and are further classified as a type of psammophile .
The word "psammophyte" consists of two Greek roots , psamm- , meaning "sand", and -phyte , meaning "plant". [ 1 ] [ 2 ] [ 3 ] The term "psammophyte" first entered English in the early twentieth century via German botanical terminology. [ 4 ]
Psammophytes are found in many different plant families, so may not share specific morphological or phytochemical traits. They also come in a variety of plant life-forms, including annual ephemerals , perennials , subshrubs , hemicryptophytes , and many others. [ 5 ] [ 6 ] What the many diverse psammophytes have in common is a resilience to harsh or rapidly fluctuating environmental factors, such as shifting soils, strong winds, intense sunlight exposure, or saltwater exposure, depending on the habitat. [ 6 ] [ 7 ] Psammophytes often have specialized traits, such as unusually tenacious or resilient roots that enable them to anchor and thrive despite various environmental stressors. [ 8 ] Those growing in arid regions have evolved highly efficient physiological mechanisms that enable them to survive despite limited water availability. [ 9 ] [ 10 ]
Psammophytes grow in regions all over the world and can be found on sandy, unstable soils of beaches , deserts , and sand dunes . [ 5 ] [ 6 ] [ 7 ] [ 11 ] [ 12 ] In China's autonomous Inner Mongolia region, psammophytic woodlands are found in steppe habitats. [ 13 ]
Psammophytes often play an important ecological role by contributing some degree of soil stabilization in their sandy habitats. [ 14 ] They can also play an important role in soil nutrient dynamics . [ 15 ] Depending on the factors at play at a given site, psammophyte communities exhibit varying degrees of species diversity . [ 16 ] [ 17 ] [ 5 ] [ 12 ] For example, in the dunes of the Sahara Desert , psammophyte communities exhibit limited diversity and are predominantly made up of plants from the grass and mustard families. [ 5 ]
Like many other types of plants, psammophytes can have symbiotic relationships with microorganisms called endophytes that live inside of their tissues, which can impart enhanced growth or other benefits. [ 18 ]
A major threat to psammophytes in many regions is dune destabilization, which is exacerbated by human development projects and factors associated with climate change , such as drought and temperature increases. [ 11 ] Encroachment of non-psammophytic plants and invasive species poses another threat to psammophyte species in some areas. [ 12 ] [ 16 ] [ 19 ] Ecological restoration efforts in psammophyte habitats often aim to utilize the natural soil stabilizing and nutrient enhancement abilities of psammophytes as part of restoration strategies. [ 16 ] [ 15 ] Another important strategy is restoring and protecting the requisite soil microbiome some psammophytes require to thrive. [ 19 ]
China's Minqin Garden of Desert Plants is one organization that is actively working on efforts to conserve both wild and horticultural psammophyte species. [ 20 ]
| https://en.wikipedia.org/wiki/Psammophyte |
The Pschorr cyclization is a name reaction in organic chemistry , which was named after its discoverer, the German chemist Robert Pschorr (1868-1930). It describes the intramolecular substitution of aromatic compounds via aryldiazonium salts as intermediates and is catalyzed by copper . The reaction is a variant of the Gomberg-Bachmann reaction . [ 1 ] The following reaction scheme shows the Pschorr cyclization for the example of phenanthrene : [ 2 ]
In the course of the Pschorr cyclization, a diazotization of the starting compound occurs, so that an aryldiazonium salt is formed as an intermediate. For this, sodium nitrite is added to hydrochloric acid to obtain nitrous acid. The nitrous acid is protonated and reacts with another equivalent of nitrous acid to give the intermediate 1, which is later used for the diazotization of the aromatic amine:
The intermediate 1 reacts in the following way with the starting compound: [ 3 ]
Intermediate 1 replaces a hydrogen atom of the amino group of the starting compound. A nitroso group is introduced as the new substituent, producing intermediate 2 with release of nitrous acid. Intermediate 2 then reacts via tautomerization and dehydration to give the aryldiazonium cation 3.
Nitrogen is then cleaved from the aryldiazonium cation 3 with the aid of the copper catalyst. The aryl radical thus formed reacts via ring closure to give the intermediate 4. Finally, rearomatization takes place, again mediated by the copper catalyst, and phenanthrene is formed.
The Pschorr cyclization has a relatively good atom economy, since essentially the only waste product is nitrogen. For the diazotization, two equivalents of nitrous acid are used, of which one equivalent is re-formed in the course of the reaction. The copper is used in catalytic amounts only and therefore does not affect the atom efficiency of the reaction. However, when considering the atom economy it has to be mentioned that the Pschorr cyclization often gives only low yields. [2][3] | https://en.wikipedia.org/wiki/Pschorr_cyclization |
Pseudin is a peptide derived from Pseudis paradoxa . [ 1 ] Pseudins have some antimicrobial function. [ 2 ] [ 3 ]
There are several different forms, numbered pseudin-1 through pseudin-4.
Pseudin-2 is the most abundant version of the pseudins found on the skin of the paradoxical frog. [ 8 ] The primary sequence reads as GLNALKKVFQGIHEAIKLINNHVQ. Its secondary/tertiary structure consists of one cationic amphipathic α-helix . [ 8 ] [ 9 ]
Pseudin-2 was shown to have potent antibacterial activity with comparatively low cytotoxicity. [8] The cytotoxicity of a peptide can be measured by its effect on human erythrocytes. [9] It takes a lower concentration of Pseudin-2 to kill bacteria or fungi such as E. coli, S. aureus, and C. albicans than to kill human erythrocytes. [8] It is hypothesized that binding of Pseudin-2 to the bacterial cell membrane results in a conformational change in which the peptide forms an α-helical shape, allowing it to lyse the cell by inserting itself into the hydrophobic portion of the membrane. [8][9] This mechanism is applicable to similar amphipathic α-helical peptides produced by many frog species, although most of these peptides are not very potent against bacteria. [10] By increasing the cationicity and amphipathic nature of the molecule, it is possible to create analogues of Pseudin-2 that are even more selective towards bacteria. This is done by substituting leucine residues with lysine residues and glycine residues with proline residues, which results in two shorter α-helices (linked by the substituted proline) that are more attuned to penetrating bacterial cell membranes. [9]
| https://en.wikipedia.org/wiki/Pseudin |
In mathematics , specifically in topology , a pseudo-Anosov map is a type of a diffeomorphism or homeomorphism of a surface . It is a generalization of a linear Anosov diffeomorphism of the torus . Its definition relies on the notion of a measured foliation introduced by William Thurston , who also coined the term "pseudo-Anosov diffeomorphism" when he proved his classification of diffeomorphisms of a surface .
A measured foliation F on a closed surface S is a geometric structure on S which consists of a singular foliation and a measure in the transverse direction. In some neighborhood of a regular point of F, there is a "flow box" φ : U → R 2 which sends the leaves of F to the horizontal lines in R 2 . If two such neighborhoods U i and U j overlap then there is a transition function φ ij defined on φ j ( U j ), with the standard property {\displaystyle \varphi _{i}=\varphi _{ij}\circ \varphi _{j},} which must have the form {\displaystyle \varphi _{ij}(x,y)={\bigl (}f(x,y),\,c\pm y{\bigr )}} for some constant c. This assures that along a simple curve, the variation in y-coordinate, measured locally in every chart, is a geometric quantity (i.e. independent of the chart) and permits the definition of a total variation along a simple closed curve on S. A finite number of singularities of F of the type of "p-pronged saddle", p ≥ 3, are allowed. At such a singular point, the differentiable structure of the surface is modified to make the point into a conical point with the total angle πp. The notion of a diffeomorphism of S is redefined with respect to this modified differentiable structure. With some technical modifications, these definitions extend to the case of a surface with boundary.
A homeomorphism

\[
f : S \to S
\]

of a closed surface S is called pseudo-Anosov if there exists a transverse pair of measured foliations on S , F s (stable) and F u (unstable), and a real number λ > 1 such that the foliations are preserved by f and their transverse measures are multiplied by 1/ λ and λ . The number λ is called the stretch factor or dilatation of f .
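For comparison, the linear model that pseudo-Anosov maps generalize can be made completely explicit (a standard textbook example, not specific to the sources cited here): the torus automorphism induced by the matrix below, whose eigendirections span a transverse pair of linear foliations with transverse measures multiplied by λ and 1/λ.

```latex
\[
A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} ,
\qquad
\lambda = \frac{3 + \sqrt{5}}{2} \approx 2.618 ,
\qquad
\lambda \cdot \lambda^{-1} = \det A = 1 .
\]
```

Here λ is the stretch factor. On surfaces of genus at least two, the corresponding foliations cannot be globally linear and must carry p-pronged singularities, which is exactly what the pseudo-Anosov definition permits.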
Thurston constructed a compactification of the Teichmüller space T ( S ) of a surface S such that the action induced on T ( S ) by any diffeomorphism f of S extends to a homeomorphism of the Thurston compactification. The dynamics of this homeomorphism is the simplest when f is a pseudo-Anosov map: in this case, there are two fixed points on the Thurston boundary, one attracting and one repelling, and the homeomorphism behaves similarly to a hyperbolic automorphism of the Poincaré half-plane . A "generic" diffeomorphism of a surface of genus at least two is isotopic to a pseudo-Anosov diffeomorphism.
Using the theory of train tracks , the notion of a pseudo-Anosov map has been extended to self-maps of graphs (on the topological side) and outer automorphisms of free groups (on the algebraic side). This leads to an analogue of Thurston classification for the case of automorphisms of free groups, developed by Bestvina and Handel. | https://en.wikipedia.org/wiki/Pseudo-Anosov_map |
Pseudo-modal energies are used for estimating the energy content of a mechanical system near its resonance frequencies. They are defined as the integral of the frequency response function within a certain bandwidth around a resonance. [ 1 ]
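One common way to make this definition concrete (the notation here is an assumed formalization, not quoted from the cited source) is, for the i-th natural frequency ω_i and frequency response function H(ω):

```latex
\[
E_i \;=\; \int_{\omega_i - \delta\omega}^{\,\omega_i + \delta\omega} H(\omega)\, d\omega ,
\]
```

where 2 δω is the chosen bandwidth around the resonance; in practice the real and imaginary parts of H(ω) may be integrated separately, giving one pair of pseudo-modal energies per mode.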
This physics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pseudo-modal_energies |
In constructive mathematics , pseudo-order is a name given to certain binary relations appropriate for modeling continuous orderings.
In classical mathematics, its axioms constitute a formulation of a strict total order (also called linear order), which in that context can also be defined in other, equivalent ways.
The constructive theory of the real numbers is the prototypical example where the pseudo-order formulation becomes crucial. A real number is less than another if there exists (one can construct) a rational number greater than the former and less than the latter. In other words, here x < y holds if there exists a rational number z such that x < z < y .
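To make the real-number example tangible, here is a small sketch under explicit assumptions (a constructive real is modeled as a rational-valued function x with |x(n) − x| ≤ 1/n; names and tolerances are illustrative): x < y is witnessed by an index at which the two approximation intervals separate, and the witness yields the intermediate rational z.

```python
# Sketch: semi-deciding x < y for constructive reals given as 1/n-accurate
# rational approximations. A found witness also produces a rational z with
# x < z < y, matching the definition in the text.
from fractions import Fraction

def lt_witness(x, y, max_n=10**6):
    """Return a rational z with x < z < y, or None if no witness is found."""
    n = 1
    while n <= max_n:
        if x(n) + Fraction(2, n) < y(n):   # intervals around x(n), y(n) separate
            return (x(n) + y(n)) / 2       # midpoint lies strictly between x and y
        n *= 2
    return None                            # undecided up to max_n (not refuted!)

sqrt2 = lambda n: Fraction(round(2**0.5 * n), n)    # crude 1/n-approximation of sqrt(2)
print(lt_witness(sqrt2, lambda n: Fraction(3, 2)))  # a rational between sqrt(2) and 3/2
```

The asymmetric return value mirrors the constructive situation: a witness proves x < y, while exhausting the search proves nothing, which is precisely why trichotomy is not available.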
Notably, for the continuum in a constructive context, the usual trichotomy law does not hold, i.e. it is not automatically provable. The axioms in the characterization of orders like this are thus weaker (when working using just constructive logic) than alternative axioms of a strict total order, which are often employed in the classical context.
A pseudo-order is a binary relation < satisfying the three conditions:

\[
\neg\,(x < y \,\land\, y < x) \quad\text{for all } x, y ;
\]
\[
\neg\,(x < y \,\lor\, y < x) \to x = y \quad\text{for all } x, y ;
\]
\[
x < y \to (x < z \,\lor\, z < y) \quad\text{for all } x, y, z .
\]
There are common constructive reformulations making use of contrapositions and the valid equivalences ¬(φ ∧ ψ) ↔ (φ → ¬ψ) as well as ¬(φ ∨ ψ) ↔ (¬φ ∧ ¬ψ). The negation of the pseudo-order x < y of two elements defines a reflexive partial order y ≤ x . In these terms, the first condition reads

\[
x < y \,\to\, x \leq y ,
\]

and it really just expresses the asymmetry of x < y . It implies irreflexivity , as familiar from the classical theory.
The second condition exactly expresses the anti-symmetry of the associated partial order,

\[
(x \leq y \,\land\, y \leq x) \to x = y .
\]
With the above two reformulations, the negation signs may be hidden in the definition of a pseudo-order.
A natural apartness relation on a pseudo-ordered set is given by x # y := (x < y ∨ y < x). With it, the second condition exactly states that this relation is tight,

\[
\neg\,(x \# y) \to x = y .
\]
Together with the first axiom, this means equality can be expressed as negation of apartness. Note that the negation of equality is in general merely the double-negation of apartness.
Now the disjunctive syllogism may be expressed as (φ ∨ ψ) → (¬φ → ψ). Such a logical implication can classically be reversed, and then this condition exactly expresses trichotomy. As such, it is also a formulation of connectedness .
The non-contradiction principle for the partial order states that ¬(x ≤ y ∧ ¬(x ≤ y)), or equivalently ¬¬(x ≤ y ∨ y < x), for all elements. Constructively, the validity of the double-negation exactly means that there cannot be a refutation of any of the disjunctions in the classical claim ∀x. ∀y. ¬(y < x) ∨ (y < x), whether or not this proposition represents a decidable problem .
Using the asymmetry condition, the above also implies ¬¬(x ≤ y ∨ y ≤ x), the double-negated strong connectedness . In a classical logic context, " ≤ " thus constitutes a (non-strict) total order .
The contrapositive of the third condition exactly expresses that the associated relation x ≤ y (the partial order) is transitive. So that property is called co-transitivity . Using the asymmetry condition, one quickly derives the theorem that a pseudo-order is actually transitive as well. Transitivity is a common axiom in the classical definition of a linear order.
The third condition is also called comparison (as well as weak linearity ): for any nontrivial interval given by some x and some y above it, any third element z is either above the lower bound or below the upper bound. Since this is an implication of a disjunction, it ties to the trichotomy law as well. And indeed, having a pseudo-order on a Dedekind-MacNeille-complete poset implies the principle of excluded middle . This impacts the discussion of completeness in the constructive theory of the real numbers.
This section assumes classical logic. At least then, the following properties can be proven:
If R is a co-transitive relation, then
Sufficient conditions for a co-transitive relation R to also be transitive are:
A semi-connex relation R is also co-transitive if it is symmetric , left or right Euclidean, transitive, or quasitransitive. If incomparability w.r.t. R is a transitive relation, then R is co-transitive if it is symmetric, left or right Euclidean, or transitive. | https://en.wikipedia.org/wiki/Pseudo-order |
Pseudo-panspermia (sometimes called soft panspermia , molecular panspermia or quasi-panspermia ) is a well-supported hypothesis for a stage in the origin of life . The theory first asserts that many of the small organic molecules used for life originated in space (for example, being incorporated in the solar nebula , from which the planets condensed). It continues that these organic molecules were distributed to planetary surfaces, where life then emerged on Earth and perhaps on other planets . Pseudo-panspermia differs from the fringe theory of panspermia , which asserts that life arrived on Earth from distant planets. [ 1 ]
Theories of the origin of life have been recorded since the 5th century BC, when the Greek philosopher Anaxagoras proposed an initial version of panspermia: life arrived on earth from the heavens. [ 3 ] In modern times, full panspermia has little support amongst mainstream scientists . [ 1 ] Pseudo-panspermia, in which molecules are formed and transported through space is, however, well-supported. [ 2 ]
Interstellar molecules are formed by chemical reactions within very sparse interstellar or circumstellar clouds of dust and gas. Usually this occurs when a molecule becomes ionised , often as the result of an interaction with cosmic rays . This positively charged molecule then draws in a nearby reactant by electrostatic attraction of the neutral molecule's electrons. Molecules can also be generated by reactions between neutral atoms and molecules, although this process is generally slower. [ 4 ] The dust plays a critical role in shielding the molecules from the ionizing effect of ultraviolet radiation emitted by stars. [ 5 ] The Murchison meteorite contains the organic molecules uracil and xanthine , [ 6 ] [ 7 ] which must therefore already have been present in the early Solar System, where they could have played a role in the origin of life. [ 8 ]
Nitriles , key molecular precursors of the RNA World scenario, are among the most abundant chemical families in the universe and have been found in molecular clouds in the center of the Milky Way, protostars of different masses, meteorites and comets, and also in the atmosphere of Titan, the largest moon of Saturn. [ 9 ] [ 10 ]
Evidence for the extraterrestrial creation of organic molecules includes both their discovery in various contexts in space, and their laboratory synthesis under extraterrestrial conditions:
Organic molecules can then be distributed to planets including Earth both when the planets formed and later. If the materials from which planets formed contained organic molecules, and were not destroyed by heat or other processes, then these would be available for abiogenesis on those planets.
Later distribution is by means of bodies such as comets and asteroids . These may fall to the planetary surface as meteorites , releasing any molecules they are carrying as they vaporise on impact or later as they erode.
Studies of rock and dust from asteroid Bennu delivered to Earth by NASA’s OSIRIS-REx have revealed molecules that, on Earth, are key to life, as well as a history of saltwater. [ 27 ]
Findings of organic molecules in meteorites include: | https://en.wikipedia.org/wiki/Pseudo-panspermia |
Pseudo-range multilateration , often simply multilateration ( MLAT ) when in context, is a technique for determining the position of an unknown point, such as a vehicle, based on measurement of biased times of flight (TOFs) of energy waves traveling between the vehicle and multiple stations at known locations.
TOFs are biased by synchronization errors in the difference between times of arrival (TOA) and times of transmission (TOT): TOF=TOA-TOT . Pseudo-ranges (PRs) are TOFs multiplied by the wave propagation speed: PR=TOF ⋅ s . In general, the stations' clocks are assumed synchronized but the vehicle's clock is desynchronized.
In MLAT for surveillance , the waves are transmitted by the vehicle and received by the stations; the TOT is unique and unknown, while the TOAs are multiple and known. When MLAT is used for navigation (as in hyperbolic navigation ), the waves are transmitted by the stations and received by the vehicle; in this case, the TOTs are multiple but known, while the TOA is unique and unknown. In navigation applications, the vehicle is often termed the "user"; in surveillance applications, the vehicle may be termed the "target".
The vehicle's clock is considered an additional unknown, to be estimated along with the vehicle's position coordinates.
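In symbols (an illustrative formalization of the preceding definitions; the notation is assumed rather than quoted from the references): with vehicle position p, station positions p_i, propagation speed s, vehicle clock/TOT bias b, and measurement error ε_i,

```latex
\[
PR_i \;=\; \lVert \mathbf{p} - \mathbf{p}_i \rVert \;+\; s\,b \;+\; \varepsilon_i ,
\qquad i = 1, \dots, m ,
\]
```

so the d position coordinates together with b make up d + 1 unknowns, which is why at least d + 1 received signals are required, as stated next.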
If d is the number of physical dimensions being considered (e.g., 2 for a plane) and m is the number of signals received (thus, TOFs measured), it is required that m ≥ d + 1.
Processing is usually required to extract the TOAs or their differences from the received signals, and an algorithm is usually required to solve this set of equations. An algorithm either: (a) determines numerical values for the TOT (for the receiver(s) clock) and d vehicle coordinates; or (b) ignores the TOT and forms m − 1 (at least d ) time differences of arrival (TDOAs), which are used to find the d vehicle coordinates. Almost always, d = 2 (e.g., a plane or the surface of a sphere) or d = 3 (e.g., the real physical world). Systems that form TDOAs are also called hyperbolic systems, [ 1 ] for reasons discussed below.
A multilateration navigation system provides vehicle position information to an entity "on" the vehicle (e.g., aircraft pilot or GPS receiver operator). A multilateration surveillance system provides vehicle position to an entity "not on" the vehicle (e.g., air traffic controller or cell phone provider). By the reciprocity principle, any method that can be used for navigation can also be used for surveillance, and vice versa (the same information is involved).
Systems have been developed for both TOT and TDOA (which ignore TOT) algorithms. In this article, TDOA algorithms are addressed first, as they were implemented first. Due to the technology available at the time, TDOA systems often determined a vehicle location in two dimensions. TOT systems are addressed second. They were implemented, roughly, post-1975 and usually involve satellites. Due to technology advances, TOT algorithms generally determine a user/vehicle location in three dimensions. However, conceptually, TDOA or TOT algorithms are not linked to the number of dimensions involved.
Prior to deployment of GPS and other global navigation satellite systems (GNSSs), pseudo-range multilateration systems were often defined as (synonymous with) TDOA systems – i.e., systems that measured TDOAs or formed TDOAs as the first step in processing a set of measured TOAs. However, as result of deployment of GNSSs (which must determine TOT), two issues arose: (a) What system type are GNSSs (pseudo-range multilateration, true-range multilateration, or another system type)? (b) What are the defining characteristic(s) of a pseudo-range multilateration system? (There are no deployed multilateration surveillance systems that determine TOT, but they have been analyzed. [ 2 ] )
Pseudo-range multilateration navigation systems have been developed utilizing a variety of radio frequencies and waveforms — low-frequency pulses (e.g., Loran-C); low-frequency continuous sinusoids (e.g., Decca); high-frequency continuous wide-band (e.g., GPS). Pseudo-range multilateration surveillance systems often use existing pulsed transmitters (if suitable) — e.g., Shot-Spotter, ASDE-X and WAM.
Virtually always, the coordinate frame is selected based on the wave trajectories. Thus, two- or three-dimensional Cartesian frames are selected most often, based on straight-line (line-of-sight) wave propagation. However, polar (also termed circular/spherical) frames are sometimes used, to agree with curved earth-surface wave propagation paths. Given the frame type, the origin and axes orientation can be selected, e.g., based on the station locations. Standard coordinate frame transformations may be used to place results in any desired frame. For example, GPS receivers generally compute their position using rectangular coordinates, then transform the result to latitude, longitude and altitude.
Given m received signals, TDOA systems form m − 1 differences of TOA pairs (see "Calculating TDOAs or TOAs from received signals" below). All received signals must be a member of at least one TDOA pair, but otherwise the differences used are arbitrary (any two of the several sets of TDOAs can be related by an invertible linear transformation). Thus, when forming a TDOA, the order of the two TOAs involved is not important.
Some operational TDOA systems (e.g., Loran-C) designate one station as the "master" and form their TDOAs as the difference of the master's TOA and the m − 1 "secondary" stations' TOAs. A usable set of m − 1 TDOAs must link all m stations (in graph terms, form a spanning tree), so by Cayley's formula there are m^(m−2) possible TDOA sets; this can be checked by enumeration, as in the sketch below. When m = 3, there are 3 possible TDOA sets, each corresponding to a station being the de facto master. When m = 4, there are 16 possible TDOA sets, 12 of which do not have a de facto master. When m = 5, there are 125 possible TDOA sets, 120 of which do not have a de facto master.
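The counts above can be verified by brute force under the stated assumption that a usable TDOA set must connect all stations. The helper below is illustrative (not from the cited sources); it enumerates all sets of m − 1 station pairs and keeps those forming a connected, hence spanning-tree, graph:

```python
# Count TDOA sets: subsets of m-1 station pairs that link all m stations.
from itertools import combinations

def count_tdoa_sets(m):
    def spans(edges):                       # union-find connectivity test
        parent = list(range(m))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]   # path compression
                a = parent[a]
            return a
        for a, b in edges:
            parent[find(a)] = find(b)
        return len({find(v) for v in range(m)}) == 1

    pairs = list(combinations(range(m), 2))
    return sum(spans(s) for s in combinations(pairs, m - 1))

print([count_tdoa_sets(m) for m in (3, 4, 5)])   # [3, 16, 125] = m**(m-2) (Cayley)
```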
If a pulse is emitted from a vehicle, it will generally arrive at slightly different times at spatially separated receiver sites, the different TOAs being due to the different distances of each receiver from the vehicle. However, for given locations of any two receivers, a set of emitter locations would give the same time difference (TDOA). Given two receiver locations and a known TDOA, the locus of possible emitter locations is one half of a two-sheeted hyperboloid .
In simple terms, with two receivers at known locations, an emitter can be located onto one hyperboloid (see Figure 1). [ 5 ] Note that the receivers do not need to know the absolute time at which the pulse was transmitted – only the time difference is needed. However, to form a useful TDOA from two measured TOAs, the receiver clocks must be synchronized with each other.
Consider now a third receiver at a third location which also has a synchronized clock. This would provide a third independent TOA measurement and a second TDOA (there is a third TDOA, but this is dependent on the first two TDOAs and does not provide additional information). The emitter is located on the curve determined by the two intersecting hyperboloids. A fourth receiver is needed for another independent TOA and TDOA. This will give an additional hyperboloid, the intersection of the curve with this hyperboloid gives one or two solutions, the emitter is then located at one of the two solutions.
With four synchronized receivers there are 3 independent TDOAs, and three independent parameters are needed for a point in three dimensional space. (And for most constellations, three independent TDOAs will still give two points in 3D space).
With additional receivers, enhanced accuracy can be obtained. (Specifically, for GPS and other GNSSs, the atmosphere influences the signal's travel time, and more satellites give a more accurate location.)
For an over-determined constellation (more than 4 satellites/TOAs) a least squares method can be used for 'reducing' the errors. Averaging over longer times can also improve accuracy.
The accuracy also improves if the receivers are placed in a configuration that minimizes the error of the estimate of the position. [ 6 ]
The emitter may, or may not, cooperate in the multilateration surveillance process. Thus, multilateration surveillance is used with non-cooperating "users" for military and scientific purposes as well as with cooperating users (e.g., in civil transportation).
Multilateration can also be used by a single receiver to locate itself, by measuring signals emitted from synchronized transmitters at known locations (stations). At least three emitters are needed for two-dimensional navigation (e.g., the Earth's surface); at least four emitters are needed for three-dimensional navigation. Although not true for real systems, for expository purposes, the emitters may be regarded as each broadcasting narrow pulses (ideally, impulses) at exactly the same time on separate frequencies (to avoid interference). In this situation, the receiver measures the TOAs of the pulses. In actual TDOA systems, the received signals are cross-correlated with an undelayed replica to extract the pseudo delay, then differenced with the same calculation for another station and multiplied by the speed of propagation to create range differences.
Several methods have been implemented to avoid self-interference. A historic example is the British Decca system, developed during World War II. Decca used the phase -difference of three transmitters. Later, Omega elaborated on this principle. For Loran-C , introduced in the late 1950s, all transmitters broadcast pulses on the same frequency with different, small time delays. GNSSs transmit continuously on the same carrier frequency, modulated by different pseudo-random codes (GPS, Galileo, revised GLONASS).
The TOT concept is illustrated in Figure 2 for the surveillance function and a planar scenario ( d = 2). Aircraft A, at coordinates ( x A , y A ), broadcasts a pulse sequence at time t A . The broadcast is received at stations S 1 , S 2 and S 3 at times t 1 , t 2 and t 3 , respectively. Based on the three measured TOAs, the processing algorithm computes an estimate of the TOT t A , from which the range between the aircraft and the stations can be calculated. The aircraft coordinates ( x A , y A ) are then found.
When the algorithm computes the correct TOT, the three computed ranges have a common point of intersection which is the aircraft location (the solid-line circles in Figure 2). If the computed TOT is after the actual TOT, the computed ranges do not have a common point of intersection (dashed-line circles in Figure 2). It is clear that an iterative TOT algorithm can be found. In fact, GPS was developed using iterative TOT algorithms. Closed-form TOT algorithms were developed later.
TOT algorithms became important with the development of GPS. GLONASS and Galileo employ similar concepts. The primary complicating factor for all GNSSs is that the stations (transmitters on satellites) move continuously relative to the Earth. Thus, in order to compute its own position, a user's navigation receiver must know the satellites' locations at the time the information is broadcast in the receiver's time scale (which is used to measure the TOAs). To accomplish this: (1) satellite trajectories and TOTs in the satellites' time scales are included in broadcast messages; and (2) user receivers find the difference between their TOT and the satellite broadcast TOT (termed the clock bias or offset). GPS satellite clocks are synchronized to UTC (to within a published offset of a few seconds), as well as with each other. This enables GPS receivers to provide UTC time in addition to their position.
Consider an emitter (E in Figure 3) at an unknown location vector

\[
\vec{E} = (x, y, z) ,
\]

which we wish to locate (surveillance problem). The source is within range of m = n + 1 receivers at known locations

\[
\vec{P}_0, \vec{P}_1, \dots, \vec{P}_i, \dots, \vec{P}_n .
\]

The subscript i refers to any one of the receivers (0 ≤ i ≤ n), whose coordinates are

\[
\vec{P}_i = (x_i, y_i, z_i) .
\]

The distance ( R_i ) from the emitter to one of the receivers in terms of the coordinates is

\[
R_i = \left|\vec{P}_i - \vec{E}\right| = \sqrt{(x_i - x)^2 + (y_i - y)^2 + (z_i - z)^2} .
\tag{1}
\]

For some solution algorithms, the math is made easier by placing the origin at one of the receivers ( P_0 ), which makes its distance to the emitter

\[
R_0 = \sqrt{(0 - x)^2 + (0 - y)^2 + (0 - z)^2} = \sqrt{x^2 + y^2 + z^2} .
\tag{2}
\]

The measured TDOA τ_i between receiver i and the reference receiver 0, multiplied by the propagation speed c, then satisfies

\[
c\,\tau_i = R_i - R_0 ,
\tag{3}
\]

which is the hyperboloid equation referenced below.
Low-frequency radio waves follow the curvature of the Earth ( great-circle paths) rather than straight lines. In this situation, equation 1 is not valid. Loran-C [ 7 ] and Omega [ 8 ] are examples of systems that use spherical ranges. When a spherical model for the Earth is satisfactory, the simplest expression for the central angle (sometimes termed the geocentric angle ) θ_vi between vehicle v and station i is

\[
\cos\theta_{vi} = \sin\varphi_v \sin\varphi_i + \cos\varphi_v \cos\varphi_i \cos(\lambda_v - \lambda_i) ,
\]

where latitudes are denoted by φ and longitudes are denoted by λ. Alternative, better numerically behaved equivalent expressions can be found in great-circle navigation .
The distance R_i from the vehicle to station i along a great circle is then

\[
R_i = R_E \,\theta_{vi} ,
\]

where R_E is the assumed radius of the Earth and θ_vi is expressed in radians.
Prior to GNSSs, there was little value to determining the TOT (as known to the receiver) or its equivalent in the navigation context, the offset between the receiver and transmitter clocks. Moreover, when those systems were developed, computing resources were quite limited. Consequently, in those systems (e.g., Loran-C, Omega, Decca), receivers treated the TOT as a nuisance parameter and eliminated it by forming TDOA differences (hence were termed TDOA or range-difference systems). This simplified solution algorithms. Even if the TOT (in receiver time) was needed (e.g., to calculate vehicle velocity), TOT could be found from one TOA, the location of the associated station, and the computed vehicle location.
With the advent of GPS and subsequently other satellite navigation systems: (1) TOT as known to the user receiver provides necessary and useful information; and (2) computing power had increased significantly. GPS satellite clocks are synchronized not only with each other but also with Coordinated Universal Time (UTC) (with a published offset) and their locations are known relative to UTC. Thus, algorithms used for satellite navigation solve for the receiver position and its clock offset (equivalent to TOT) simultaneously. The receiver clock is then adjusted so its TOT matches the satellite TOT (which is known by the GPS message). By finding the clock offset, GNSS receivers are a source of time as well as position information. Computing the TOT is a practical difference between GNSSs and earlier TDOA multilateration systems, but is not a fundamental difference. To first order, the user position estimation errors are identical. [ 9 ]
Multilateration system governing equations – which are based on "distance" equals "propagation speed" times "time of flight" – assume that the energy wave propagation speed is constant and equal along all signal paths. This is equivalent to assuming that the propagation medium is homogeneous. However, that is not always sufficiently accurate; some paths may involve additional propagation delays due to inhomogeneities in the medium. Accordingly, to improve solution accuracy, some systems adjust measured TOAs to account for such propagation delays. Thus, space-based GNSS augmentation systems – e.g., Wide Area Augmentation System (WAAS) and European Geostationary Navigation Overlay Service (EGNOS) – provide TOA adjustments in real time to account for the ionosphere. Similarly, U.S. Government agencies used to provide adjustments to Loran-C measurements to account for soil conductivity variations.
Generally, using a direct (non-iterative) algorithm, m = d + 1 measurement equations can be reduced to a single scalar nonlinear "solution equation" having one unknown variable (somewhat analogous to Gauss–Jordan elimination for linear equations) – e.g., a quadratic polynomial in one vehicle Cartesian coordinate. [ 10 ] The vehicle position and TOT then readily follow in sequence. When m = d + 1, the measurement equations generally have two solution sets (but sometimes four), only one of which is "correct" (yields the true TOT and vehicle position in the absence of measurement errors). The "incorrect" solution(s) to the solution equation do not correspond to the vehicle position and TOT and are either ambiguous (yield other vehicle positions which have the same measurements) or extraneous (do not provide vehicle positions which have the same measurements, but are the result of mathematical manipulations).
Without redundant measurements (i.e., m = d + 1), all valid algorithms yield the same "correct" solution set (but perhaps one or more different sets of "incorrect" solutions). Of course, statistically larger measurement errors result in statistically larger errors in the correct computed vehicle coordinates and TOT. With redundant measurements (i.e., m > d + 1), a loss function or cost function (also called an error function) is minimized (a quadratic loss function is common). With redundant measurements in the absence of measurement errors, the measurement equations usually have a unique solution. If measurement errors are present, different algorithms yield different "correct" solutions; some are statistically better than others.
There are multiple categories of multilateration algorithms, and some categories have multiple members. Perhaps the first factor that governs algorithm selection: Is an initial estimate of the user's position required (as iterative algorithms require) or is it not? Direct (closed-form) algorithms estimate the user's position using only the measured TOAs and do not require an initial position estimate. A related factor governing algorithm selection: Is the algorithm readily automated, or conversely, is human interaction needed/expected? Most direct (closed form) algorithms have multiple solutions, which is detrimental to their automation. A third factor is: Does the algorithm function well with both the minimum number ( d + 1) of TOA measurements and with additional (redundant) measurements?
Direct algorithms can be further categorized based on energy wave propagation path—either straight-line or curved. The latter is applicable to low-frequency radio waves, which follow the earth's surface; the former applies to higher frequency (say, greater than one megahertz) and to shorter ranges (hundreds of miles).
This taxonomy has five categories: four for direct algorithms and one for iterative algorithms (which can be used with either d + 1 or more measurements and either propagation path type). However, it appears that algorithms in only three of these categories have been implemented. When redundant measurements are available for either wave propagation path, iterative algorithms have been strongly favored over closed-form algorithms. [ 11 ] Often, real-time systems employ iterative algorithms while off-line studies utilize closed-form algorithms.
All multilateration algorithms assume that the station locations are known at the time each wave is transmitted. For TDOA systems, the stations are fixed to the earth and their locations are surveyed. For TOA systems, the satellites follow well-defined orbits and broadcast orbital information. (For navigation, the user receiver's clock must be synchronized with the transmitter clocks; this requires that the TOT be found.) Equation 3 is the hyperboloid described in the previous section, where 4 receivers (0 ≤ i ≤ 3) lead to 3 non-linear equations in 3 unknown Cartesian coordinates (x,y,z). The system must then solve for the unknown user (often, vehicle) location in real time. (A variation: air traffic control multilateration systems use the Mode C SSR transponder message to find an aircraft's altitude. Three or more receivers at known locations are used to find the other two dimensions — either (x,y) for an airport application, or latitude/longitude for off-airport applications.)
Steven Bancroft was apparently the first to publish a closed-form solution to the problem of locating a user (e.g., vehicle) in three dimensions and the common TOT using four or more TOA measurements. [ 12 ] Bancroft's algorithm, as do many, reduces the problem to the solution of a quadratic algebraic equation; its solution yields the three Cartesian coordinates of the receiver as well as the common signal TOT. Other, comparable solutions were subsequently developed. [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] Notably, all closed-form solutions were found a decade or more after the GPS program was initiated using iterative methods.
Closed-form solutions often involve squaring the distance or pseudo-range to avoid local linearization of a square root operation. However, this squaring alters noise statistics and can lead to suboptimal solutions. Typically, a two-step simplification is employed: first, solving a linear least squares problem neglecting spherical constraints (squared distance), and then finding the intersection with the constraint. This approach may suffer performance degradation in the presence of noise.
A more refined technique involves directly solving a "constrained least squares" problem, while also addressing modified noise statistics. While this method may not yield a closed-form solution and often necessitates iterative approaches, it offers significant advantages. By bypassing local linearization, it facilitates convergence to a global minimum without requiring an initial guess. Additionally, it tends to encounter fewer local minima and demonstrates increased accuracy, particularly in noisy environments.
The constrained least squares solution for TDOA systems was apparently first proposed by Huang et al. [ 18 ] and further explored by subsequent researchers. [ 19 ] [ 20 ] [ 21 ] Similar methodologies were introduced for TOT systems, [ 22 ] also illustrating how to convert a problem from TDOA to TOT by incorporating an additional equation and an unknown clock bias. The TOT solution outperforms the TDOA solution due to the latter's susceptibility to noise coloring, caused by the subtraction of the reference station's TOA. A robust version, the "constrained least absolute deviations", is also discussed and shows superior performance to least squares in scenarios involving non-Gaussian noise and contamination from outlier measurements.
The solution for the position of an aircraft having a known altitude using 3 TOA measurements requires solving a quartic (fourth-order) polynomial. [ 9 ] [ 23 ]
Multilateration systems and studies employing spherical-range measurements (e.g., Loran-C, Decca, Omega) utilized a variety of solution algorithms based on either iterative methods or spherical trigonometry. [ 24 ]
For Cartesian coordinates, when four TOAs are available and the TOT is needed, Bancroft's [ 12 ] or another closed-form (direct) algorithm are options, even if the stations are moving. When the four stations are stationary and the TOT is not needed, extension of Fang's algorithm (based on DTOAs) to three dimensions is an option. [ 9 ] Another option, and likely the most utilized in practice, is the iterative Gauss–Newton Nonlinear Least-Squares method. [ 11 ] [ 9 ]
Most closed-form algorithms reduce finding the user vehicle location from measured TOAs to the solution of a quadratic equation. One solution of the quadratic yields the user's location. The other solution is either ambiguous or extraneous – both can occur (which one depends upon the dimensions and the user location). Generally, eliminating the incorrect solution is not difficult for a human, but may require vehicle motion and/or information from another system. An alternative method used in some multilateration systems is to employ the Gauss–Newton NLLS method and require a redundant TOA when first establishing surveillance of a vehicle. Thereafter, only the minimum number of TOAs is required.
Satellite navigation systems such as GPS are the most prominent examples of 3-D multilateration. [ 3 ] [ 4 ] Wide Area Multilateration (WAM), a 3-D aircraft surveillance system, employs a combination of three or more TOA measurements and an aircraft altitude report.
For finding a user's location in a two-dimensional (2-D) Cartesian geometry, one can adapt one of the many methods developed for 3-D geometry, most motivated by GPS—for example, Bancroft's [ 25 ] or Krause's. [ 14 ] Additionally, there are specialized TDOA algorithms for two dimensions and stations at fixed locations — notable is Fang's method. [ 10 ]
A comparison of 2-D Cartesian algorithms for airport surface surveillance has been performed. [ 26 ] However, as in the 3-D situation, it is likely the most utilized algorithms are based on Gauss–Newton NLLS. [ 11 ] [ 9 ]
Examples of 2-D Cartesian multilateration systems are those used at major airports in many nations to surveil aircraft on the surface or at very low altitudes.
Razin [ 24 ] developed a closed-form algorithm for a spherical Earth. Williams and Last [ 27 ] extended Razin's solution to an osculating sphere Earth model.
When necessitated by the combination of vehicle-station distance (e.g., hundreds of miles or more) and required solution accuracy, the ellipsoidal shape of the Earth must be considered. This has been accomplished using the Gauss–Newton NLLS [ 28 ] method in conjunction with ellipsoid algorithms by Andoyer, [ 29 ] Vincenty [ 30 ] and Sodano. [ 31 ]
Examples of 2-D 'spherical' multilateration navigation systems that accounted for the ellipsoidal shape of the Earth are the Loran-C and Omega radionavigation systems, both of which were operated by groups of nations. Their Russian counterparts, CHAYKA and Alpha (respectively), are understood to operate similarly.
Consider a three-dimensional Cartesian scenario. Improving accuracy with a large number of receivers (say, n + 1, numbered 0, 1, 2, ..., n) can be a problem for devices with small embedded processors, because of the time required to solve several simultaneous, non-linear equations ( 1 , 2 , 3 ). The TDOA problem can be turned into a system of linear equations when there are three or more receivers, which can reduce the computation time. Starting with equation 3 , solve for R_i , square both sides, collect terms and divide all terms by c τ_i = R_i − R_0 :

\[
R_i^2 = (c\tau_i + R_0)^2 ,
\qquad
0 = c\tau_i + 2R_0 + \frac{R_0^2 - R_i^2}{c\tau_i} .
\tag{4}
\]

Removing the 2 R_0 term will eliminate all the square-root terms. That is done by subtracting the TDOA equation of receiver i = 1 from each of the others (2 ≤ i ≤ n):

\[
0 = c\tau_i - c\tau_1 + \frac{R_0^2 - R_i^2}{c\tau_i} - \frac{R_0^2 - R_1^2}{c\tau_1} .
\tag{5}
\]

Focus for a moment on equation 1 . Square R_i , group similar terms and use equation 2 to replace some of the terms with R_0 :

\[
R_0^2 - R_i^2 = -\left(x_i^2 + y_i^2 + z_i^2\right) + 2x\,x_i + 2y\,y_i + 2z\,z_i .
\tag{6}
\]

Combine equations 5 and 6 , and write as a set of linear equations (for 2 ≤ i ≤ n) in the unknown emitter location (x, y, z):

\[
\begin{aligned}
0 &= x A_i + y B_i + z C_i + D_i , \\
A_i &= \frac{2x_i}{c\tau_i} - \frac{2x_1}{c\tau_1} , \qquad
B_i = \frac{2y_i}{c\tau_i} - \frac{2y_1}{c\tau_1} , \qquad
C_i = \frac{2z_i}{c\tau_i} - \frac{2z_1}{c\tau_1} , \\
D_i &= c\tau_i - c\tau_1 - \frac{x_i^2 + y_i^2 + z_i^2}{c\tau_i} + \frac{x_1^2 + y_1^2 + z_1^2}{c\tau_1} .
\end{aligned}
\tag{7}
\]

Use equation 7 to generate the four constants A_i, B_i, C_i, D_i from the known receiver coordinates and the measured TDOAs, for each receiver 2 ≤ i ≤ n. This will be a set of n − 1 inhomogeneous linear equations.
There are many robust linear algebra methods that can solve for ( x , y , z ) {\displaystyle (x,y,z)} , such as Gaussian elimination . Chapter 15 in Numerical Recipes [ 32 ] describes several methods to solve linear equations and estimate the uncertainty of the resulting values.
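A minimal numerical sketch of this linearization follows (the function name and the synthetic data are illustrative; NumPy's least-squares routine stands in for the robust methods mentioned above):

```python
# Sketch: solve the linearized TDOA system (equations 5-7) for the emitter
# location. Receiver 0 is the coordinate origin; tau[i] is the TDOA of
# receiver i relative to receiver 0, so c*tau[i] = R_i - R_0.
import numpy as np

def tdoa_solve(receivers, tau, c=299_792_458.0):
    """receivers: (n+1, 3) array with receivers[0] = (0, 0, 0); tau: (n+1,)."""
    P = np.asarray(receivers, dtype=float)
    ct = c * np.asarray(tau, dtype=float)              # c*tau_i
    rows, rhs = [], []
    for i in range(2, len(P)):                         # one equation per 2 <= i <= n
        rows.append(2 * P[i] / ct[i] - 2 * P[1] / ct[1])                          # (A_i, B_i, C_i)
        rhs.append(-(ct[i] - ct[1] - P[i] @ P[i] / ct[i] + P[1] @ P[1] / ct[1]))  # -D_i
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol                                         # estimated (x, y, z)

# Synthetic check: generate exact TDOAs for a known emitter, then recover it.
E = np.array([400.0, -250.0, 120.0])
P = np.array([[0, 0, 0], [1000, 0, 0], [0, 1000, 0],
              [0, 0, 1000], [800, 900, 100]], dtype=float)
R = np.linalg.norm(P - E, axis=1)
tau = (R - R[0]) / 299_792_458.0
print(tdoa_solve(P, tau))                              # ~ [ 400. -250.  120.]
```

With noise-free data the five receivers give an exactly determined 3×3 system; with more receivers the same code performs the least-squares reduction described above.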
The defining characteristic and major disadvantage of iterative methods is that a 'reasonably accurate' initial estimate of the 'vehicle's' location is required. If the initial estimate is not sufficiently close to the solution, the method may not converge or may converge to an ambiguous or extraneous solution. However, iterative methods have several advantages: [ 11 ]
Many real-time multilateration systems provide a rapid sequence of user position solutions — e.g., GPS receivers typically provide solutions at 1-second intervals. Almost always, such systems implement: (a) a transient 'acquisition' (surveillance) or 'cold start' (navigation) mode, whereby the user's location is found from the current measurements only; and (b) a steady-state 'track' (surveillance) or 'warm start' (navigation) mode, whereby the user's previously computed location is updated based on current measurements (rendering moot the major disadvantage of iterative methods). Often the two modes employ different algorithms and/or have different measurement requirements, with (a) being more demanding. The iterative Gauss-Newton algorithm is often used for (b) and may be used for both modes.
When there are more TOA measurements than the d + 1 unknown quantities – e.g., 5 or more GPS satellite TOAs – the iterative Gauss–Newton algorithm for solving non-linear least squares (NLLS) problems is often preferred. Except for pathological station locations, an over-determined situation eliminates possible ambiguous and/or extraneous solutions that can occur when only the minimum number of TOA measurements are available. Another important advantage of the Gauss–Newton method over some closed-form algorithms is that it treats measurement errors linearly, which is often their nature, thereby reducing the effect of measurement errors by averaging. The Gauss–Newton method may also be used with the minimum number of measurements.
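A compact sketch of the Gauss–Newton iteration for this problem (an illustrative implementation under assumed names, not the algorithm of any particular cited system; the clock bias is carried as a range, i.e., speed times time):

```python
# Gauss-Newton NLLS for pseudo-ranges pr_i = |p - s_i| + b, state = [x, y, z, b].
import numpy as np

def gauss_newton(stations, pr, x0, iters=10):
    S = np.asarray(stations, dtype=float)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(x[:3] - S, axis=1)        # predicted ranges
        r = pr - (d + x[3])                          # measurement residuals
        J = np.hstack([(x[:3] - S) / d[:, None],     # d(pr_i)/d(position)
                       np.ones((len(S), 1))])        # d(pr_i)/d(bias) = 1
        x = x + np.linalg.lstsq(J, r, rcond=None)[0] # Gauss-Newton update
    return x

# Synthetic check: five stations, a 100 m clock bias, noise-free measurements.
S = np.array([[0, 0, 0], [5e3, 0, 1e2], [0, 5e3, 2e2],
              [5e3, 5e3, 0], [2e3, 4e3, 3e3]], dtype=float)
p_true, b_true = np.array([1e3, 2e3, 5e2]), 100.0
pr = np.linalg.norm(p_true - S, axis=1) + b_true
print(gauss_newton(S, pr, x0=[2.5e3, 2.5e3, 1e3, 0.0]))  # ~ [1000. 2000. 500. 100.]
```

Note the dependence on the initial estimate x0, which is exactly the 'cold start' issue discussed above; with redundant measurements the same update performs the averaging that reduces the effect of measurement errors.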
While the Gauss-Newton NLLS iterative algorithm is widely used in operational systems (e.g., ASDE-X), the Nelder-Mead iterative method is also available. Example code for the latter, for both TOA and TDOA systems, is available. [ 33 ]
Multilateration is often more accurate for locating an object than true-range multilateration or multiangulation, as (a) it is inherently difficult and/or expensive to accurately measure the true range (distance) between a moving vehicle and a station, particularly over large distances, and (b) accurate angle measurements require large antennas which are costly and difficult to site.
Accuracy of a multilateration system is a function of several factors, including:
The accuracy can be calculated by using the Cramér–Rao bound and taking account of the above factors in its formulation. Additionally, a configuration of the sensors that minimizes a metric obtained from the Cramér–Rao bound can be chosen so as to optimize the actual position estimation of the target in a region of interest. [ 6 ]
Concerning the first issue (user-station geometry), planning a multilateration system often involves a dilution of precision (DOP) analysis to inform decisions on the number and location of the stations and the system's service area (two dimensions) or volume (three dimensions). In a DOP analysis, the TOA measurement errors are assumed to be statistically independent and identically distributed. This reasonable assumption separates the effects of user-station geometry and TOA measurement errors on the error in the calculated user position. [ 2 ] [ 34 ]
Multilateration requires that spatially separated stations – either transmitters (navigation) or receivers (surveillance) – have synchronized 'clocks'. There are two distinct synchronization requirements: (1) maintain synchronization accuracy continuously over the life expectancy of the system equipment involved (e.g., 25 years); and (2) for surveillance, accurately measure the time interval between TOAs for each 'vehicle' transmission. Requirement (1) is transparent to the user, but is an important system design consideration. To maintain synchronization, station clocks must be synchronized or reset regularly (e.g., every half-day for GPS, every few minutes for ASDE-X). Often the system accuracy is monitored continuously by "users" at known locations - e.g., GPS has five monitor sites.
Multiple methods have been used for station synchronization. Typically, the method is selected based on the distance between stations. In approximate order of increasing distance, methods have included:
While the performance of all navigation and surveillance systems depends upon the user's location relative to the stations, multilateration systems are more sensitive to the user-station geometry than are most systems. To illustrate, consider a hypothetical two-station surveillance system that monitors the location of a railroad locomotive along a straight stretch of track—a one-dimensional situation ( d = 1). The locomotive carries a transmitter and the track is straight in both directions beyond the stretch that is monitored. For convenience, let the system origin be mid-way between the stations; then TDOA = 0 occurs at the origin.
Such a system would work well when a locomotive is between the two stations. When in motion, a locomotive moves directly toward one station and directly away from the other. If a locomotive is distance Δ away from the origin, in the absence of measurement errors, the TDOA would be TDOA = ±2Δ/c (where c is the known wave propagation speed). Thus (ignoring the scale factor c) the amount of displacement is doubled in the TDOA. If true ranges were measured instead of pseudo-ranges, the measurement difference would be identical.
However, this one-dimensional pseudo-range system would not work at all when a locomotive is not between the two stations. In either extension region, if a locomotive moves between two transmissions, necessarily away from both stations, the TDOA would not change. In the absence of errors, the changes in the two TOAs would perfectly cancel in forming the TDOA. In the extension regions, the system would always indicate that a locomotive was at the nearer station, regardless of its actual position. In contrast, a system that measures true ranges would function in the extension regions exactly as it does when the locomotive is between the stations. This one-dimensional system provides an extreme example of a multilateration system's service area.
In a multi-dimensional (i.e., d = 2 or d = 3) situation, the measurement extremes of a one-dimensional scenario rarely occur. When it is within the perimeter enclosing the stations, a vehicle usually moves partially away from some stations and partially toward other stations. It is highly unlikely to move directly toward any one station and simultaneously directly away from another; moreover, it cannot move directly toward or away from all stations at the same time. Simply put, inside the stations' perimeter, consecutive TDOAs will typically amplify but not double vehicle movement Δ which occurred during that interval—i.e., Δ/c < |TDOA_{i+1} − TDOA_i| < 2Δ/c. Conversely, outside the perimeter, consecutive TDOAs will typically attenuate but not cancel associated vehicle movement—i.e., 0 < |TDOA_{i+1} − TDOA_i| < Δ/c. The amount of amplification or attenuation will depend upon the vehicle's location. The system's performance, averaged over all directions, varies continuously as a function of user location.
When analyzing a 2D or 3D multilateration system, dilution of precision (DOP) is usually employed to quantify the effect of user-station geometry on position-determination accuracy. [ 36 ] The basic DOP metric is

\[
\text{?DOP} = \frac{\text{standard deviation of the error in the solution quantity XXX}}{\text{standard deviation of the pseudo-range measurement error}} .
\]

The symbol ? conveys the notion that there are multiple "flavors" of DOP – the choice depends upon the number of spatial dimensions involved and whether the error for the TOT solution is included in the metric. The same distance units must be used in the numerator and denominator of this fraction – e.g., meters. ?DOP is a dimensionless factor that is usually greater than one, but is independent of the pseudo-range (PR) measurement error. (When redundant stations are involved, it is possible to have 0 < ?DOP < 1.) HDOP is usually employed (? = H, and XXX = horizontal position) when interest is focused on a vehicle position on a plane.
Pseudo-range errors are assumed to add to the measured TOAs, be Gaussian-distributed, have zero mean (average value) and have the same standard deviation σ_PR regardless of vehicle location or the station involved. Labeling the orthogonal axes in the plane as x and y, the horizontal position error is characterized statistically as

\[
\sigma_H = \sqrt{\sigma_x^2 + \sigma_y^2} = \text{HDOP} \cdot \sigma_{PR} .
\]
Mathematically, each DOP "flavor" is a different sensitivity ("derivative") of a solution quantity (e.g., horizontal position) standard deviation with respect to the pseudo-range error standard deviation. (Roughly, DOP corresponds to the condition Δ → 0.) That is, ?DOP is the rate of change of the standard deviation of a solution quantity from its correct value due to measurement errors – assuming that a linearized least squares algorithm is used. (It is also the smallest variance for any algorithm. [ 37 ] ) Specifically, HDOP is the sensitivity ("derivative") of the user's horizontal position standard deviation to the pseudo-range error standard deviation.
For three stations, multilateration accuracy is quite good within almost the entire triangle enclosing the stations—say, 1 < HDOP < 1.5—and is close to the HDOP for true-ranging measurements using the same stations. However, a multilateration system's HDOP degrades rapidly for locations outside the station perimeter. Figure 5 illustrates the approximate service area of a two-dimensional multilateration system having three stations forming an equilateral triangle. The stations are M – U – V . BLU denotes baseline unit (station separation B ). The inner circle is more "conservative" and corresponds to a "cold start" (no knowledge of the vehicle's initial position). The outer circle is more typical, and corresponds to starting from a known location. The axes are normalized by the separation between stations.
Figure 6 shows the HDOP contours for the same multilateration system. The minimum HDOP, 1.155, occurs at the center of the triangle formed by the stations (and would be the same value for true-range measurements). Beginning with HDOP = 1.25, the contours shown follow a factor-of-2 progression. Their roughly equal spacing (outside of the three V-shaped areas between the baseline extensions) is consistent with the rapid growth of the horizontal position error with distance from the stations. The system's HDOP behavior is qualitatively different in the three V-shaped areas between the baseline extensions. HDOP is infinite along the baseline extensions, and is significantly larger in these areas. (HDOP is mathematically undefined at the stations; hence multiple DOP contours can terminate on a station.) A three-station system should not be used between the baseline extensions.
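The 1.155 figure can be reproduced with a few lines of linear algebra (a standard DOP computation, sketched here under assumed names; the geometry matrix carries one row per station, with a unit line-of-sight vector and a clock-bias column):

```python
# HDOP from user-station geometry for a 2-D solution with unknowns (x, y, TOT).
import numpy as np

def hdop(user, stations):
    u = np.asarray(user, dtype=float)
    S = np.asarray(stations, dtype=float)
    los = (u - S) / np.linalg.norm(u - S, axis=1)[:, None]  # unit line-of-sight rows
    G = np.hstack([los, np.ones((len(S), 1))])              # geometry matrix [ex, ey, 1]
    Q = np.linalg.inv(G.T @ G)             # solution covariance / sigma_PR^2
    return np.sqrt(Q[0, 0] + Q[1, 1])      # horizontal components only

# Equilateral triangle of stations, user at the centroid:
stations = [(0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2)]
print(hdop((0.5, np.sqrt(3) / 6), stations))   # 1.1547... = 2/sqrt(3)
```

Moving the user point outside the triangle, and especially toward a baseline extension, makes G nearly singular and the returned HDOP grows without bound, matching the contour behavior described above.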
For locations outside the stations' perimeter, a multilateration system should typically be used only near the center of the closest baseline connecting two stations (two dimensional planar situation) or near the center of the closest plane containing three stations (three dimensional situation). Additionally, a multilateration system should only be employed for user locations that are a fraction of an average baseline length (e.g., less than 25%) from the closest baseline or plane. For example:
When more than the required minimum number of stations are available (often the case for a GPS user), HDOP can be improved (reduced). However, limitations on use of the system outside the polygonal station perimeter largely remain. Of course, the processing system (e.g., GPS receiver) must be able to utilize the additional TOAs. This is not an issue today, but has been a limitation in the past.
Pseudo-range multilateration systems have been developed for waves that follow straight-line and curved-earth trajectories, and for virtually every wave phenomenon—electromagnetic (various frequencies and waveforms), acoustic (audible or ultrasound, in water or air), seismic, etc. The multilateration technique was apparently first used during World War I to locate the source of artillery fire using audible sound waves (TDOA surveillance). Multilateration surveillance is related to passive towed-array sonar target localization (but not identification), which was also first used during World War I.
Longer distance radio-based navigation systems became viable during World War II , with the advancement of radio technologies. For about 1950–2000, TDOA multilateration was a common technique in Earth-fixed radio navigation systems, where it was known as hyperbolic navigation . These systems are relatively undemanding of the user receiver, as its "clock" can have low performance/cost and is usually unsynchronized with station time. [ 38 ] The difference in received signal timing can even be measured visibly using an oscilloscope . The introduction of the microprocessor greatly simplified operation, increasing popularity during the 1980s. The most popular TDOA hyperbolic navigation system was Loran-C , which was used around the world until the system was largely shut down.
The development of atomic clocks for synchronizing widely separated stations was instrumental in the development of GPS and other GNSSs. The widespread use of satellite navigation systems like the Global Positioning System (GPS) has made Earth-fixed TDOA navigation systems largely redundant, and most have been decommissioned. Owing to its high accuracy at low cost of user equipage, today multilateration is the concept most often selected for new navigation and surveillance systems—e.g., surveillance of flying (alternative to radar) and taxiing (alternative to visual) aircraft. [ 39 ] [ 40 ] [ 41 ]
Multilateration is commonly used in civil and military applications to either (a) locate a vehicle (aircraft, ship, car/truck/bus or wireless phone carrier) by measuring the TOAs of a signal from the vehicle at multiple stations having known coordinates and synchronized "clocks" (surveillance application) or (b) enable the vehicle to locate itself relative to multiple transmitters (stations) at known locations and having synchronized clocks based on measurements of signal TOAs (navigation application). When the stations are fixed to the earth and do not provide time, the measured TOAs are almost always used to form TDOAs (one fewer TDOA than the number of TOAs).
For vehicles, surveillance or navigation stations (including required associated infrastructure) are often provided by government agencies. However, privately funded entities have also been (and are) station/system providers – e.g., wireless phone providers. [ 42 ] Multilateration is also used by the scientific and military communities for non-cooperative surveillance.
The following table summarizes the advantages and disadvantages of pseudo-range multilateration, particularly relative to true-range measurements.
The advantages of systems employing pseudo-ranges largely benefit the vehicle/user/target. The disadvantages largely burden the system provider.
The following is a list of example applications: | https://en.wikipedia.org/wiki/Pseudo-range_multilateration |
Pseudo-response regulator ( PRR ) refers to a group of genes that regulate the circadian oscillator in plants. There are four primary PRR proteins (PRR9, PRR7, PRR5 and TOC1 /PRR1) that perform the majority of interactions with other proteins within the circadian oscillator, and another (PRR3) that has limited function. These genes are all paralogs of each other, and all repress the transcription of Circadian Clock Associated 1 ( CCA1 ) and Late Elongated Hypocotyl (LHY) at various times throughout the day. The expression of PRR9, PRR7, PRR5 and TOC1 /PRR1 peak around morning, mid-day, afternoon and evening, respectively. As a group, these genes are one part of the three-part repressilator system that governs the biological clock in plants.
Multiple labs identified the PRR genes as parts of the circadian clock in the 1990s. In 2000, Akinori Matsushika, Seiya Makino, Masaya Kojima, and Takeshi Mizuno were the first to recognize PRR genes as pseudo-response regulator genes rather than as response regulator (ARR) genes . [ 1 ] [ 2 ] The feature that distinguishes PRR from ARR genes is the lack of the phospho-accepting aspartate site that characterizes ARR proteins. Though the research that discovered the PRR genes was hailed during the early 2000s primarily for informing the scientific community about the function of TOC1 (named APRR1 by the Mizuno lab), an additional pseudo-response regulator in the Arabidopsis thaliana biological clock, [ 3 ] the information about PRR genes that Matsushika and his team found deepened scientific understanding of circadian clocks in plants and led other researchers to hypothesize about the purpose of the PRR genes. [ 1 ] Though current research has identified TOC1, PRR3, PRR5, PRR7, and PRR9 as important to the A. thaliana circadian clock mechanism, Matsushika et al. first categorized PRR genes into two subgroups (APRR1 and APRR2, where the A stands for Arabidopsis) due to two differing amino acid structures. [ 4 ] The negative feedback loops including PRR genes, proposed by Mizuno, were incorporated into a complex repressilator circuit by Andrew Millar 's lab in 2012. [ 5 ] The conception of the plant biological clock as made up of interacting negative feedback loops is unique in comparison to mammalian and fungal circadian clocks, which contain autoregulatory negative feedback loops with positive and negative elements [ 6 ] (see "Transcriptional and non-transcriptional control" on the Circadian clock page).
PRR3, PRR5, PRR7 and PRR9 participate in the repressilator, a negative autoregulatory feedback loop that synchronizes to environmental inputs. The repressilator has a morning, an evening, and a night loop, regulated in part by the pseudo-response regulator proteins' interactions with CCA1 and LHY. CCA1 and LHY exhibit peak binding to PRR9, PRR7, and PRR5 in the morning, evening, and night, respectively. [ 7 ]
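For orientation, the oscillatory behavior of a repressilator of the kind invoked here can be simulated directly. The model below is a generic three-node ring in which each component represses the next; its equations and parameter values are textbook-style assumptions for illustration, not measured values for the PRR/CCA1/LHY circuit:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic three-node repressilator: node 1 -| node 2 -| node 3 -| node 1.
# Parameters are illustrative only (Hill repression, first-order decay).
alpha, n, delta = 10.0, 3.0, 1.0   # max synthesis, Hill coefficient, decay

def repressilator(t, x):
    x1, x2, x3 = x
    return [alpha / (1 + x3**n) - delta * x1,   # node 1 repressed by node 3
            alpha / (1 + x1**n) - delta * x2,   # node 2 repressed by node 1
            alpha / (1 + x2**n) - delta * x3]   # node 3 repressed by node 2

sol = solve_ivp(repressilator, (0, 100), [1.0, 0.5, 0.1], dense_output=True)
t = np.linspace(0, 100, 1000)
x1 = sol.sol(t)[0]
# Sustained oscillations: successive peaks of node 1 give the free-running period.
peaks = t[1:-1][(x1[1:-1] > x1[:-2]) & (x1[1:-1] > x1[2:])]
print("approximate period:", np.diff(peaks).mean())
```

With these parameters the ring of repressors produces a stable limit cycle, the same qualitative behavior the plant clock achieves with its staggered waves of PRR expression.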
When phosphorylated by an unknown kinase , PRR5 and PRR3 proteins demonstrate increased binding to TIMING OF CAB2 EXPRESSION 1 ( TOC1 ). This interaction stabilizes both TOC1 and PRR5 and prevents their degradation by the F-box protein ZEITLUPE (ZTL). [ 7 ] Through this mechanism, PRR5 is indirectly activated by light, as ZTL is inhibited by light. Additionally, PRR5 contributes to the transcriptional repression of the genes encoding the single MYB transcription factors CCA1 and LHY. [ 7 ]
Two single MYB transcription factors, CCA1 and LHY, activate expression of PRR7 and PRR9 . In turn, PRR7 and PRR9 repress CCA1 and LHY through the binding of their promoters. This interaction forms the morning loop of the repressilator of the biological clock in A. thaliana . [ 7 ] Chromatin immunoprecipitation demonstrates that LUX binds to the PRR9 promoter to repress it. Additionally, ELF3 has been shown to activate PRR9 and repress CCA1 and LHY . [ 7 ] PRR9 is also activated by alternative RNA splicing . When PRMT5 (a methylation factor) is prevented from methylating intron 2 of PRR9, a frameshift resulting in premature truncation occurs. [ 7 ]
PRR7 and PRR9 also play a role in the entrainment of A. thaliana to a temperature cycle. Double-mutant plants with inactivated PRR7 and PRR9 exhibit extreme period lengthening at high temperatures but show no change in period at low temperatures. However, the inactivation of CCA1 and LHY in the PRR7/PRR9 loss-of-function mutants shows no change in period at high temperatures—this suggests that PRR7 and PRR9 are acting by overcompensation. [ 7 ]
In A. thaliana , the main feedback loop is proposed to involve transcriptional regulation among several proteins. The three main components of this loop are TOC1 (also known as PRR1), CCA1 and LHY. [ 8 ] Each individual component peaks in transcription at a different time of day. [ 9 ] PRR 9, 7 and 5 each significantly reduce the transcription levels of CCA1 and LHY. [ 9 ] In the opposite manner, PRR 9 and 7 slightly increase the transcription levels of TOC1. [ 9 ] Constans (CO) is also indirectly regulated by the PRR proteins, which set up the molecular mechanism that dictates the photosensitive period in the afternoon. [ 10 ] PRRs are also known to stabilize CO at certain times of day to mediate its accumulation. [ 11 ] This results in the regulation of early flowering in shorter photoperiods , making light sensitivity and control of flowering time important functions of the PRR class. [ 10 ]
PRR3, PRR5, PRR7, and PRR9 are all paralogs of each other. They have similar structures, and all repress the transcription of CCA1 and LHY. Additionally, they are all characterized by their lack of a phospho-accepting aspartate site. These genes are also paralogs of TOC1, which is alternatively called PRR1. [ 7 ]
Several pseudo-response regulators have been found in Selaginella, but their function has not yet been explored. [ 12 ]
As PRR is a family of genes, several rounds of mutant screening have been performed to identify each possible phenotype.
With regard to rhythmicity of the clock in a free-running setting, PRR9 and PRR5 mutants are associated with longer and shorter periods, respectively. [ 9 ] For each gene, the double mutant with PRR7 exacerbates the observed trends in rhythmicity. [ 9 ] The triple mutant renders the plant arrhythmic. [ 9 ]
In terms of flowering time under long-day conditions, all mutants delayed flowering, with the PRR7 mutant significantly later than the others. [ 9 ] All double mutants involving PRR7 flowered much later than the PRR5/PRR9 double mutant. [ 9 ]
With regard to light sensitivity, particularly in red light, which is associated with hypocotyl lengthening, all PRR mutants were observed to be hyposensitive, with PRR9 the least sensitive. [ 9 ] The double mutants were as hyposensitive as the PRR5 or PRR7 single mutants; the triple mutant is extremely hyposensitive. [ 9 ]
Recent research has shown that expression of clock genes shows tissue-specificity. [ 13 ] Learning about how, when, and why specific tissues show certain peaks in clock genes like PRR can reveal more about the subtle nuances of each gene within the repressilator.
Few investigations into the circadian oscillator mechanisms in species other than A. thaliana have taken place; learning which genes are responsible for clock functions in other species will give more insight into the similarities and differences in clocks across plant species. [ 14 ]
The mechanistic details of each step in the plant biological clock repressilator system have yet to be fully understood. An understanding of these will give knowledge of clock function and, across species, increase understanding of the ecological and evolutionary functions of circadian oscillators. [ 7 ]
Additionally, identifying direct targets of PRR5, PRR7 and PRR9 other than CCA1 and LHY will provide information about the molecular links from the PRRs to output genes, like the flowering pathway and mitochondrial metabolism, which are CCA1-independent. [ 9 ] | https://en.wikipedia.org/wiki/Pseudo-response_regulator |
In mathematics , and more specifically in abstract algebra , a pseudo-ring is one of the following variants of a ring :
None of these definitions are equivalent, so it is best to avoid the term "pseudo-ring" or to clarify which meaning is intended.
This algebra -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pseudo-ring |
The pseudo Jahn–Teller effect (PJTE), occasionally also known as second-order JTE , is a direct extension of the Jahn–Teller effect (JTE) where spontaneous symmetry breaking in polyatomic systems ( molecules and solids ) occurs even when the relevant electronic states are not degenerate .
The PJTE can occur under the influence of sufficiently low-lying electronic excited states of appropriate symmetry.
"The pseudo Jahn–Teller effect is the only source of instability and distortions of high-symmetry configurations of polyatomic systems in nondegenerate states, and it contributes significantly to the instability in degenerate states". [ 1 ]
In their early 1957 paper on what is now called the pseudo Jahn–Teller effect (PJTE), Öpik and Pryce [ 2 ] showed that a small splitting of the degenerate electronic term does not necessarily remove the instability and distortion of a polyatomic system induced by the Jahn–Teller effect (JTE), provided that the splitting is sufficiently small (the two split states remain "pseudo degenerate") and the vibronic coupling between them is strong enough. From another perspective, the idea of a "mix" of different electronic states induced by low-symmetry vibrations was introduced in 1933 by Herzberg and Teller [ 3 ] to explore forbidden electronic transitions, and extended in the late 1950s by Murrell and Pople [ 4 ] and by Liehr. [ 5 ] The role of excited states in softening the ground state with respect to distortions in benzene was demonstrated qualitatively by Longuet-Higgins and Salem [ 6 ] by analyzing the π electron levels in the Hückel approximation , while a general second-order perturbation formula for such vibronic softening was derived by Bader in 1960. [ 7 ] In 1961 Fulton and Gouterman [ 8 ] presented a symmetry analysis of the two-level case in dimers and introduced the term "pseudo Jahn–Teller effect". The first application of the PJTE to solving a major solid-state structural problem, regarding the origin of ferroelectricity , was published in 1966 by Isaac Bersuker , [ 9 ] and the first book on the JTE covering the PJTE was published in 1972 by Englman. [ 10 ] The second-order perturbation approach was employed by Pearson in 1975 to predict instabilities and distortions in molecular systems; [ 11 ] he called it "second-order JTE" (SOJTE). The first explanation of the PJT origin of puckering distortion, as due to the vibronic coupling to the excited state, was given for the N 3 H 3 2+ radical by Borden, Davidson, and Feller in 1980 [ 12 ] (they called it "pyramidalization"). Methods of numerical calculation of the PJT vibronic coupling effect, with applications to spectroscopic problems, were developed in the early 1980s. [ 13 ] A significant step forward in this field was achieved in 1984, when it was shown by numerical calculations [ 14 ] that the energy gap to the active excited state may not be the ultimate limiting factor in the PJTE, as there are two other compensating parameters in the condition of instability. It was also shown that, in extension of the initial definition, [ 2 ] the PJT interacting electronic states are not necessarily components emerging from the same symmetry type (as in the split degenerate term). As a result, the applicability of the PJTE became a priori unlimited. Moreover, it was shown by Bersuker that the PJTE is the only source of instability of high-symmetry configurations of polyatomic systems in nondegenerate states (works cited in Refs. [ 1 ] [ 15 ] [ 16 ] ), and that degeneracy and pseudo degeneracy are the only sources of spontaneous symmetry breaking in matter in all its forms. [ 17 ] The many applications of the PJTE to the study of a variety of properties of molecular systems and solids are reflected in a number of reviews and books, [ 1 ] [ 10 ] [ 11 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] as well as in proceedings of conferences on the JTE.
The equilibrium geometry of any polyatomic system in nondegenerate states is defined as corresponding to the point of the minimum of the adiabatic potential energy surface (APES), where its first derivatives are zero and the second derivatives are positive. If we denote the energy of the system as a function of normal displacements $Q_\alpha$ as $E(Q_\alpha)$, then at the minimum point of the APES ($Q_\alpha = 0$) the curvature $K$ of $E(Q_\alpha)$ in direction $Q$,
$$K = \left( \frac{d^2 E}{d Q_\alpha^2} \right)_0 \qquad (1)$$
is positive, i.e., $K > 0$. Very often the geometry of the system at this point of equilibrium on the APES does not coincide with the highest possible (or even with any high) symmetry expected from general symmetry considerations. For instance, linear molecules are bent at equilibrium, planar molecules are puckered, octahedral complexes are elongated, or compressed, or tilted, cubic crystals are tetragonally polarized (or have several structural phases), etc. The PJTE is the general driving force of all these distortions if they occur in the nondegenerate electronic states of the high-symmetry (reference) geometry. If at the reference configuration the system is structurally unstable with respect to some nuclear displacements $Q_\alpha$, then $K < 0$ in this direction. The general formula for the energy is $E = \langle \psi_0 | H | \psi_0 \rangle$, where $H$ is the Hamiltonian and $\psi_0$ is the wavefunction of the nondegenerate ground state. Substituting $E$ in Eq. (1), we get (omitting the $\alpha$ index for simplicity) [ 1 ]
$$K = K_0 + K_v \qquad (2)$$
$$K_0 = \left\langle \psi_0 \left| \left( \frac{d^2 H}{d Q^2} \right)_0 \right| \psi_0 \right\rangle \qquad (3)$$
$$K_v = -\sum_n \frac{\left| \langle \psi_0 | (dH/dQ)_0 | \psi_n \rangle \right|^2}{E_n - E_0} \qquad (4)$$
where $\psi_n$ are the wavefunctions of the excited states, and the $K_v$ expression, obtained as a second-order perturbation correction, is always negative, $K_v < 0$. Therefore, if $K_0 > 0$, the $K_v$ contribution is the only source of instability. The matrix elements in Eq. (4) are off-diagonal vibronic coupling constants, [ 10 ] [ 15 ] [ 16 ]
$$F_{0n} = \langle \psi_0 | (dH/dQ)_0 | \psi_n \rangle \qquad (5)$$
These measure the mixing of the ground and excited states under the nuclear displacements $Q$, and therefore $K_v$ is termed the vibronic contribution. Together with the $K_0$ value and the energy gap $2\Delta_{0n} = E_n - E_0$ between the mixing states, $F_{0n}$ are the main parameters of the PJTE (see below).
In a series of papers beginning in 1980 (see references in [ 1 ] [ 15 ] [ 16 ] ) it was proved that for any polyatomic system in the high-symmetry configuration
$$K_0 > 0 \qquad (6)$$
and hence the vibronic contribution is the only source of instability of any polyatomic system in nondegenerate states. If $K_0 > 0$ for the high-symmetry configuration of any polyatomic system, then a negative curvature, $K = (K_0 + K_v) < 0$, can be achieved only due to the negative vibronic coupling component $K_v$, and only if $|K_v| > K_0$. It follows that any distortion of the high-symmetry configuration is due to, and only to, the mixing of its ground state with excited electronic states by the distortive nuclear displacements realized via the vibronic coupling in Eq. (5). The latter softens the system with respect to certain nuclear displacements ($K_v < 0$), and if this softening is larger than the original (nonvibronic) hardness $K_0$ in this direction, the system becomes unstable with respect to the distortions under consideration, leading to an equilibrium geometry of lower symmetry, or to dissociation.
There are many cases when neither the ground state is degenerate, nor is there a significant vibronic coupling to the lowest excited states to realize PJTE instability of the high-symmetry configuration of the system, and still there is a ground state equilibrium configuration with lower symmetry. In such cases the symmetry breaking is produced by a hidden PJTE (similar to a hidden JTE); it takes place due to a strong PJTE mixing of two excited states, one of which crosses the ground state to create a new (lower) minimum of the APES with a distorted configuration. [ 1 ]
The use of the second-order perturbation correction, Eq. (4), for the calculation of the $K_v$ value in the case of PJTE instability is incorrect, because in this case $|K_v| > K_0$, meaning the first perturbation correction is larger than the main term, and hence the criterion of applicability of perturbation theory in its simplest form does not hold. In this case, we should consider the contribution of the lowest excited states (those that make the total curvature negative) in a pseudo degenerate problem of perturbation theory. For the simplest case, when only one excited state creates the main instability of the ground state, we can treat the problem via a pseudo degenerate two-level problem, including the contribution of the higher, weaker-influencing states as a second-order correction. [ 1 ] In the PJTE two-level problem we have two electronic states of the high-symmetry configuration, ground $\beta$ and excited $\gamma$, separated by an energy interval of $2\Delta$, that become mixed under nuclear displacements of certain symmetry $Q = Q_\alpha$; the denotations $\alpha$, $\beta$, and $\gamma$ indicate, respectively, the irreducible representations to which the symmetry coordinate and the two states belong. In essence, this is the original formulation of the PJTE. Assuming that the excited state is sufficiently close to the ground one, the vibronic coupling between them should be treated as a perturbation problem for two near-degenerate states. With both interacting states nondegenerate, the vibronic coupling constant $F$ in Eq. (5) (omitting indices) is nonzero for only one coordinate $Q = Q_\alpha$ with $\alpha = \beta\gamma$. This gives us directly the symmetry of the direction of softening and possible distortion of the ground state. Assuming that the primary force constants $K_0$ in the two states are the same (for different $K_0$ see [ 1 ] ), we get a 2×2 secular equation with the following solution for the energies $\varepsilon_\pm$ of the two states interacting under the linear vibronic coupling (energy is referred to the middle of the $2\Delta$ gap between the levels at the undistorted geometry):
$$\varepsilon_\pm = \tfrac{1}{2} K_0 Q^2 \pm \left( \Delta^2 + F^2 Q^2 \right)^{1/2} \qquad (7)$$
It is seen from these expressions that, on taking into account the vibronic coupling, $F \neq 0$, the two APES branches change in different ways: in the upper sheet the curvature (the coefficient of $Q^2$ in the expansion in $Q$) increases, whereas in the lower one it decreases. But as long as $F^2/K_0 < \Delta$, the minima of both states correspond to the point $Q = 0$, as in the absence of vibronic mixing. However, if
$$\frac{F^2}{K_0} > \Delta \qquad (8)$$
the curvature of the lower branch of the APES becomes negative, and the system is unstable with respect to the $Q$ displacements (Fig. 1). Under condition (8), the minima points on the APES are given by
$$\pm Q_0 = \left( \frac{F^2}{K_0^2} - \frac{\Delta^2}{F^2} \right)^{1/2} \qquad (9)$$
From these expressions and Fig. 1 it is seen that while the ground state is softened (destabilized) by the PJTE, the excited state is hardened (stabilized), and this effect is the larger, the smaller $\Delta$ and the larger $F$. It takes place in any polyatomic system and influences many molecular properties, including the existence of stable excited states of molecular systems that are unstable in the ground state (e.g., excited states of intermediates of chemical reactions); in general, even in the absence of instability, the PJTE softens the ground state and increases the vibrational frequencies in the excited state.
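For readers who want to check the algebra, the short sketch below evaluates Eqs. (7)–(9) numerically; the parameter values are arbitrary illustrations chosen only to satisfy the instability condition (8), and the analytic minima are confirmed by direct minimization of the lower branch:

```python
import numpy as np

# Numerical sketch of the two-level PJTE APES, Eqs. (7)-(9). Parameter
# values are arbitrary illustrations, not data for any particular molecule.
K0, F, Delta = 1.0, 1.5, 1.0  # primary force constant, vibronic coupling, half-gap

def apes(Q):
    """Upper and lower APES branches of Eq. (7)."""
    root = np.sqrt(Delta**2 + F**2 * Q**2)
    return 0.5 * K0 * Q**2 + root, 0.5 * K0 * Q**2 - root

if F**2 / K0 > Delta:                               # instability condition, Eq. (8)
    Q0 = np.sqrt(F**2 / K0**2 - Delta**2 / F**2)    # minima positions, Eq. (9)
    print(f"unstable: analytic minima at Q = +/-{Q0:.4f}")

# Verify numerically that the lower branch is minimized at +/-Q0
# (the two minima are symmetric; argmin finds one of them).
Q = np.linspace(-3, 3, 2001)
lower = apes(Q)[1]
print("numerical minimum at Q =", Q[np.argmin(lower)])
```

With these values $F^2/K_0 = 2.25 > \Delta = 1$, so the lower branch develops the double-well shape of Fig. 1b, with minima at $Q_0 \approx 1.344$, matching the numerical search.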
The two branches of the APES for the case of strong PJTE resulting in the instability of the ground state (when the condition of instability (8) holds) are illustrated in Fig. 1b, in comparison with the case when the two states have the same energy (Fig. 1a), i.e. when they are degenerate and the Jahn–Teller effect (JTE) takes place. We see that the two cases, degenerate and nondegenerate but close in energy (pseudo degenerate), are similar in generating two minima with distorted configurations, but there are important differences: while in the JTE there is a crossing of the two terms at the point of degeneracy (leading to conical intersections in more complicated cases), in the nondegenerate case with strong vibronic coupling there is an "avoided crossing" or "pseudo crossing". An even more important difference between the two vibronic coupling effects emerges from the fact that the two interacting states in the JTE are components of the same symmetry type, whereas in the PJTE each of the two states may have any symmetry. For this reason, the possible kinds of distortion are very limited in the JTE, and unlimited in the PJTE. It is also noticeable that while systems with the JTE are limited by the condition of electron degeneracy, the applicability of the PJTE has no a priori limitations, as it also includes the cases of degeneracy. Even when the PJT coupling is weak and the inequality (8) does not hold, the PJTE is still significant in softening (lowering the corresponding vibrational frequency of) the ground state and increasing it in the excited state. [ 1 ] When considering the PJTE in an excited state, all the higher-in-energy states destabilize it, while the lower ones stabilize it.
For a better understanding it is important to follow up on how the PJTE is related to intramolecular interactions. In other words, what is the physical driving force of the PJTE distortions (transformations) in terms of well-known electronic structure and bonding? The driving force of the PJTE is added (improved) covalence: the PJTE distortion takes place when it results in an energy gain due to greater covalent bonding between the atoms in the distorted configuration. [ 1 ] [ 16 ] Indeed, in the starting high-symmetry configuration the wavefunctions of the electronic states, ground and excited, are orthogonal by definition. When the structure is distorted, their orthogonality is violated, and a nonzero overlap between them occurs. If for two near-neighbor atoms the ground state wavefunction pertains (mainly) to one atom and the excited state wavefunction belongs (mainly) to the other, the orbital overlap resulting from the distortion adds covalency to the bond between them, so the distortion becomes energetically favorable (Fig. 2).
Examples of the PJTE being used to explain chemical, physical, biological, and materials science phenomena are innumerable; as stated above, the PJTE is the only source of instability and distortions in high-symmetry configurations of molecular systems and solids with nondegenerate states, hence any phenomenon stemming from such instability can be explained in terms of the PJTE. Below are some illustrative examples.
PJTE versus Renner–Teller effect in bending distortions . Linear molecules are exceptions from the JTE, and for a long time it was assumed that their bending distortions in degenerate states (observed in many molecules) are produced by the Renner–Teller effect (RTE) (the splitting of the degenerate state by the quadratic terms of the vibronic coupling). However, recently it was proved [ 1 ] that the RTE, by splitting the degenerate electronic state, just softens the lower branch of the APES, but this lowering of the energy is not enough to overcome the rigidity of the linear configuration and to produce bending distortions. It follows that the bending distortion of linear molecular systems is due to, and only to, the PJTE that mixes the electronic state under consideration with higher-in-energy (excited) states. This statement is reinforced by the fact that many linear molecules in nondegenerate states (and hence with no RTE) are also bent in the equilibrium configuration. The physical reason for the difference between the PJTE and the RTE in influencing the degenerate term is that while in the former case the vibronic coupling with the excited state produces additional covalent bonding that makes the distorted configuration preferable (see above, section 2.3), the RTE has no such influence; the splitting of the degenerate term in the RTE takes place just because the charge distribution in the two states becomes nonequivalent under the bending distortion.
Peierls distortion in linear chains . In linear molecules with three or more atoms there may be PJTE distortions that do not violate the linearity but change the interatomic distances. For instance, as a result of the PJTE a centrosymmetric linear system may become non-centrosymmetric in the equilibrium configuration, as, for example, in the BNB molecule (see in [ 1 ] ). An interesting extension of such distortions to sufficiently long (infinite) linear chains was first considered by Peierls. [ 20 ] In this case the electronic states, combinations of atomic states, are in fact band states, and it was shown that if the chain is composed of atoms with unpaired electrons, the valence band is only half filled, and the PJTE interaction between the occupied and unoccupied band states leads to a doubling of the period of the linear chain (see also the books [ 15 ] [ 16 ] ).
Broken cylindrical symmetry . It was also shown that the PJTE not only produces the bending instability of linear molecules, but, if the mixing electronic states involve a Δ state (a state with a nonzero momentum with respect to the axis of the molecule, its projection quantum number being Λ = 2), the APES, simultaneously with the bending, becomes warped along the coordinate of rotations around the molecular axis, thus violating both the linear and the cylindrical symmetry. [ 21 ] This happens because the PJTE, by mixing the wavefunctions of the two interacting states, transfers the high momentum of the electrons from states with Λ = 2 to states with lower momentum, which may significantly alter their expected rovibronic spectra.
PJTE and combined PJTE plus JTE effects in molecular structures . There is a practically unlimited number of molecular systems for which the origin of their structural properties was revealed and/or rationalized based on the PJTE, or on a combination of the PJTE and JTE. The latter stems from the fact that in any system with a JTE in the ground state the presence of a PJT-active excited state is not excluded, and vice versa, the active excited state for the PJTE of the ground one may be degenerate, and hence JT active. Examples are shown, e.g., in Refs., [ 1 ] [ 10 ] [ 11 ] [ 15 ] [ 17 ] [ 18 ] [ 19 ] including molecular systems like Na 3 , C 3 H 3 , C 4 X 4 (X = H, F, Cl, Br), CO 3 , Si 4 R 4 (with R as large ligands), planar cyclic C n H n , all kinds of coordination systems of transition metals, mixed-valence compounds, biological systems, the origin of conformations, the geometry of ligand coordination, and others. Indeed, it is difficult to find a molecular system for which PJTE implications are a priori excluded, which is understandable in view of the above-mentioned unique role of the PJTE in such instabilities. Three methods to quench the PJTE have been documented: changing the electronic charge of the molecule, [ 22 ] sandwiching the molecule with other ions and cyclic molecules, [ 23 ] [ 24 ] [ 25 ] and manipulating the environment of the molecule. [ 26 ]
Hidden PJTE, spin crossover , and magnetic-dielectric bistability . As mentioned above, there are molecular systems in which the ground state in the high-symmetry configuration is neither degenerate, to trigger the JTE, nor interacts with the low-lying excited states to produce the PJTE (e.g., because of their different spin multiplicity). In these situations, the instability is produced by a strong PJTE in the excited states; this is termed a "hidden PJTE" in the sense that its origin is not seen explicitly as a PJTE in the ground state. An interesting typical situation of hidden PJTE emerges in molecular and solid-state systems with valence half-filled closed-shell electronic configurations e 2 and t 3 . For instance, in the e 2 case the ground state in the high-symmetry equilibrium geometry is an orbitally non-degenerate triplet 3 A, while the nearby low-lying two excited electronic states are close-in-energy singlets 1 E and 1 A; due to the strong PJT interaction between the latter, the lower component of 1 E crosses the triplet state to produce a global minimum with lower symmetry. Fig. 3 illustrates the hidden PJTE in the CuF 3 molecule, showing also the singlet-triplet spin crossover and the resulting two coexisting configurations of the molecule: a high-symmetry (undistorted) spin-triplet state with a nonzero magnetic moment, and a lower-in-energy dipolar-distorted singlet state with zero magnetic moment. Such magnetic-dielectric bistability is inherent to a whole class of molecular systems and solids. [ 27 ]
Puckering in planar molecules and graphene-like 2D and quasi-2D systems . Special attention has been paid recently to 2D systems in view of the variety of their planar-surface-specific physical and chemical properties and possible graphene-like applications in electronics. Similar-to-graphene properties are sought in silicene, phosphorene, boron nitride, zinc oxide, gallium nitride, as well as in 2D transition metal dichalcogenides and oxides, plus a number of other organic and inorganic 2D and quasi-2D compounds with expected similar properties. One of the most important features of these systems is their planarity or quasi-planarity, but many of the quasi-2D compounds are subject to out-of-plane deviations known as puckering (buckling).
The instability and distortions of the planar configuration (as in any other system in a nondegenerate state) were shown to be due to the PJTE. [ 1 ] [ 15 ] [ 16 ] Detailed exploration of the PJTE in such systems allows one to identify the excited states that are responsible for the puckering, and to suggest possible external influences that restore their planarity, including oxidation, reduction, substitutions, or coordination to other species. [ 16 ] [ 28 ] Recent investigations have also extended to 3D compounds. [ 29 ]
Cooperative PJTE in BaTiO 3 -type crystals and ferroelectricity . In crystals with PJTE centers, the interaction between the local distortions may lead to their ordering and produce a phase transition to a regular crystal phase with lower symmetry. Such a cooperative PJTE is quite similar to the cooperative JTE; it was shown in one of the first studies of the PJTE in solid-state systems [ 9 ] that in the case of ABO 3 crystals with perovskite structure, the local dipolar PJTE distortions at the transition metal B center and their cooperative interactions lead to ferroelectric phase transitions. Provided the criterion for the PJTE is met, each [BO 6 ] center has an APES with eight equivalent minima along the trigonal axes, six orthorhombic, and (higher) twelve tetragonal saddle points between them. With temperature, the gradually reached transitions between the minima via the different kinds of saddle points explain the origin of all four phases (three ferroelectric and one paraelectric) in perovskites of the BaTiO 3 type and their properties. The trigonal displacement of the Ti ion in all four phases predicted by the theory, the fully disordered PJTE distortions in the paraelectric phase, and their partially disordered state in two other phases were confirmed by a variety of experimental investigations (see in [ 1 ] [ 9 ] [ 15 ] [ 16 ] ).
Multiferroicity and magnetic-ferroelectric crossover . The PJTE theory of ferroelectricity in ABO 3 crystals was expanded to show that, depending on the number of electrons in the d n shell of the transition metal ion B 4+ and their low-spin or high-spin arrangement (which controls the symmetry and spin multiplicity of the ground and PJTE-active excited states of the [BO 6 ] center), the ferroelectricity may coexist with a magnetic moment ( multiferroicity ). Moreover, in combination with the temperature-dependent spin crossover phenomenon (which changes the spin multiplicity), this kind of multiferroicity may lead to a novel effect known as a magnetic-ferroelectric crossover. [ 30 ]
Solid state magnetic-dielectric bistability . Similar to the above-mentioned molecular bistability induced by the hidden PJTE, a magnetic-dielectric bistability due to two coexisting equilibrium configurations with corresponding properties may also take place in crystals with transition metal centers, subject to the electronic configuration with half-filled e 2 or t 3 shells. [ 27 ] As in molecular systems, the latter produce a hidden PJTE and local bistability which, in contrast to the molecular case, are enhanced by the cooperative interactions, thus acquiring longer lifetimes. This crystal bistability was proved by calculations for LiCuO 2 and NaCuO 2 crystals, in which the Cu 3+ ion has the electronic e 2 (d 8 ) configuration (similar to the CuF 3 molecule). [ 27 ]
Giant enhancement of observable properties in interaction with external perturbations . In a recent development it was shown that in inorganic crystals with PJTE centers, in which the local distortions are not ordered (before the phase transition to the cooperative phase), the effect of interaction with external perturbations contains an orientational contribution which enhances the observable properties by several orders of magnitude. This was demonstrated on the properties of crystals like paraelectric BaTiO 3 in interaction with electric fields (in permittivity and electrostriction ), or under a strain gradient ( flexoelectricity ). These giant enhancement effects occur due to the dynamic nature of the PJTE local dipolar distortions (their tunneling between the equivalent minima); the independently rotating dipole moments on each center become oriented (frozen) along the external perturbation, resulting in an orientational polarization which is not present in the absence of the PJTE. [ 31 ] [ 32 ] | https://en.wikipedia.org/wiki/Pseudo_Jahn–Teller_effect |
Pseudo bit error ratio (PBER), in adaptive high-frequency (HF) radio , is a bit error ratio derived by applying majority logic decoding to redundant transmissions.
Note: In adaptive HF radio automatic link establishment , PBER is determined by the extent of error correction, such as by using the fraction of non-unanimous votes in the 2-of-3 majority decoder.
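As an illustration of the quantity described in the note, the minimal sketch below counts the fraction of non-unanimous 2-of-3 votes; the triple-per-bit framing and the function name are hypothetical conveniences, not part of any standard:

```python
def pber_2of3(triples):
    """Pseudo bit error ratio from 2-of-3 majority voting: the fraction of
    votes that were not unanimous. Each element of `triples` holds the three
    redundantly transmitted copies of one bit (hypothetical framing)."""
    if not triples:
        return 0.0
    non_unanimous = sum(1 for t in triples if len(set(t)) > 1)
    return non_unanimous / len(triples)

# Example: one copy of the 2nd bit and one copy of the 4th were corrupted,
# so half of the majority votes were split even though every bit was
# still decoded correctly by the 2-of-3 majority.
received = [(0, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 0)]
print(pber_2of3(received))  # 0.5
```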
This article incorporates public domain material from Federal Standard 1037C . General Services Administration . Archived from the original on 2022-01-22. (in support of MIL-STD-188 ).
This article related to radio communications is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pseudo_bit_error_ratio |
A pseudoacid in organic chemistry is a cyclic oxocarboxylic acid . Most commonly, these form from aldehyde and keto carboxylic acids , and the cyclic forms are furanoid (5-membered ring with oxygen) or pyranoid (6-membered ring with oxygen). The original pseudoacid to be described as such (using the German Pseudosäuren ) was levulinic acid (4-oxopentanoic acid). [ 1 ]
Unlike the parent (open-form) oxocarboxylic acid, the pseudoacid has a chiral center .
The position of equilibrium in oxocarboxylic acids, toward the open form or the cyclic (pseudoacid) form, is influenced by a number of factors. In aliphatic 4- and 5-oxocarboxylic acids, intervening substituents assist in ring closure. In alkenes, interacting groups substituted cis to each other also assist in ring closure. In aryl systems, interacting groups substituted ortho to each other assist in ring closure. Other factors such as the gem-dialkyl effect ( Thorpe–Ingold effect ), electronic influences, and steric compression can also influence the open–cyclic equilibrium.
Like carboxylic acids, pseudoacids have "pseudoacyl" derivatives. These include pseudoacyl halides , pseudoesters, endocyclic and exocyclic-N pseudoamides, and pseudoanhydrides. Like aldehydes and ketones , pseudoacids have "pseudocarbonyl" derivatives also. | https://en.wikipedia.org/wiki/Pseudoacid |
In algebra, given a 2- monad T in a 2-category , a pseudoalgebra for T is a 2-categorical version of an algebra for T , satisfying the algebra laws only up to coherent isomorphisms . [ 1 ]
This category theory -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pseudoalgebra |
Pseudoallelism is a state in which two genes with similar functions are located so close to one another on a chromosome that they are genetically linked. [ 1 ] [ 2 ] This means that the two genes (pseudoalleles) are nearly always inherited together. Since the two genes have related functions, they may appear to act as a single gene. In rare cases, the two linked pseudoalleles can be separated, or recombined. One hypothesis is that pseudoalleles are formed as a result of gene duplication events, and the duplicated genes can undergo gene evolution to develop new functions.
Characteristic of pseudoalleles:
Example:
The red eye colour of Drosophila has different mutants, such as white and apricot. Both affect pigmentation, i.e., the same character, so they behave as alleles. Yet they can undergo recombination, i.e., they behave as non-alleles. | https://en.wikipedia.org/wiki/Pseudoalleles |
The pseudoautosomal regions or PARs are homologous sequences of nucleotides found within the sex chromosomes of species with an XY [ 1 ] or ZW [ 2 ] mechanism of sex determination .
The pseudoautosomal regions get their name because any genes within them (so far at least 29 have been found for humans) [ 3 ] are inherited just like any autosomal genes. In humans, these regions are referred to as PAR1 and PAR2. [ 4 ] PAR1 comprises 2.6 Mbp of the short-arm tips of both X and Y chromosomes in humans and great apes (X and Y are 154 Mbp and 62 Mbp in total). PAR2 is at the tips of the long arms, spanning 320 kbp. [ 5 ] The monotremes , including the platypus and echidna , have a multiple sex chromosome system, and consequently have 8 pseudoautosomal regions. [ 6 ]
The locations of the PARs within GRCh38 are: [ 7 ]
The locations of the PARs within GRCh37 are:
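The coordinate tables for these assemblies are not reproduced here. As a sketch of how such boundary annotations are used in practice, the snippet below hard-codes commonly cited GRCh38 PAR coordinates (these values are assumptions that should be verified against the assembly's release notes) and classifies a position:

```python
# Commonly cited GRCh38 PAR boundaries (1-based, inclusive); assumed values,
# verify against the assembly annotation before relying on exact endpoints.
PARS_GRCH38 = {
    ("chrX", "PAR1"): (10_001, 2_781_479),
    ("chrY", "PAR1"): (10_001, 2_781_479),
    ("chrX", "PAR2"): (155_701_383, 156_030_895),
    ("chrY", "PAR2"): (56_887_903, 57_217_415),
}

def in_par(chrom: str, pos: int) -> str | None:
    """Return 'PAR1'/'PAR2' if (chrom, pos) lies in a pseudoautosomal region."""
    for (c, par), (start, end) in PARS_GRCH38.items():
        if c == chrom and start <= pos <= end:
            return par
    return None

print(in_par("chrX", 1_000_000))   # 'PAR1'
print(in_par("chrY", 57_000_000))  # 'PAR2'
print(in_par("chrX", 50_000_000))  # None (X-specific region)
```

This kind of lookup is what variant-calling pipelines use to treat male genotypes as diploid inside the PARs and haploid elsewhere on X and Y.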
Normal male therian mammals have two copies of these genes: one in the pseudoautosomal region of their Y chromosome, the other in the corresponding portion of their X chromosome. Normal females also possess two copies of pseudoautosomal genes, as each of their two X chromosomes contains a pseudoautosomal region. Crossing over between the X and Y chromosomes is normally restricted to the pseudoautosomal regions; thus, pseudoautosomal genes exhibit an autosomal, rather than sex-linked, pattern of inheritance. So, females can inherit an allele originally present on the Y chromosome of their father.
The function of these pseudoautosomal regions is that they allow the X and Y chromosomes to pair and properly segregate during meiosis in males. [ 9 ]
Pseudoautosomal genes are found in two different locations: PAR1 and PAR2. These are believed to have evolved independently. [ 11 ]
In mice , some PAR1 genes have transferred to autosomes . [ 13 ]
Pairing ( synapsis ) of the X and Y chromosomes and crossing over ( recombination ) between their pseudoautosomal regions appear to be necessary for the normal progression of male meiosis . [ 16 ] Thus, those cells in which X-Y recombination does not occur will fail to complete meiosis. Structural and/or genetic dissimilarity (due to hybridization or mutation ) between the pseudoautosomal regions of the X and Y chromosomes can disrupt pairing and recombination, and consequently cause male infertility.
The SHOX gene in the PAR1 region is the gene most commonly associated with, and best understood with regard to, disorders in humans, [ 17 ] but all pseudoautosomal genes escape X-inactivation and are therefore candidates for having gene dosage effects in sex chromosome aneuploidy conditions ( 45,X , 47,XXX , 47,XXY , 47,XYY , etc.).
Deletions have also been associated with Léri-Weill dyschondrosteosis [ 18 ] and Madelung's deformity . | https://en.wikipedia.org/wiki/Pseudoautosomal_region |
In materials science , pseudoelasticity , sometimes called superelasticity , is an elastic (reversible) response to an applied stress , caused by a phase transformation between the austenitic and martensitic phases of a crystal . It is exhibited in shape-memory alloys .
Pseudoelasticity arises from the reversible motion of domain boundaries during the phase transformation, rather than just bond stretching or the introduction of defects in the crystal lattice (thus it is not true superelasticity but rather pseudoelasticity). Even if the domain boundaries do become pinned, they may be reversed through heating. Thus, a pseudoelastic material may return to its previous shape (hence, shape memory ) after the removal of even relatively high applied strains. One special case of pseudoelasticity is called the Bain correspondence; this involves the austenite/martensite phase transformation between a face-centered cubic (FCC) lattice and a body-centered tetragonal (BCT) crystal structure. [ 1 ]
Superelastic alloys belong to the larger family of shape-memory alloys . When mechanically loaded, a superelastic alloy deforms reversibly to very high strains (up to 10%) by the creation of a stress-induced phase . When the load is removed, the new phase becomes unstable and the material regains its original shape. Unlike shape-memory alloys, no change in temperature is needed for the alloy to recover its initial shape.
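The flag-shaped loading/unloading loop described above can be captured by a deliberately idealized one-dimensional model. The sketch below assumes a single elastic modulus for both phases and illustrative plateau stresses; none of the values are fitted to a real alloy, so it is a caricature of the behavior rather than a constitutive law:

```python
import numpy as np

# Idealized superelastic loop: elastic loading of austenite, a stress
# plateau while stress-induced martensite forms, then elastic unloading
# with a lower reverse-transformation plateau. Parameters are illustrative.
E = 40e3                       # elastic modulus, MPa (shared by both phases here)
s_fwd, s_rev = 400.0, 200.0    # forward/reverse plateau stresses, MPa
eps_t = 0.05                   # transformation strain spanned by the plateau

def stress(strain_path):
    xi = 0.0                   # martensite fraction, the internal variable
    out = []
    for eps in strain_path:
        s = E * (eps - xi * eps_t)          # elastic trial stress
        if s > s_fwd:                        # forward transformation caps stress
            xi = min(1.0, (eps - s_fwd / E) / eps_t)
            s = E * (eps - xi * eps_t)
        elif s < s_rev and xi > 0:           # reverse transformation on unloading
            xi = max(0.0, (eps - s_rev / E) / eps_t)
            s = E * (eps - xi * eps_t)
        out.append(s)
    return np.array(out)

# Load to 8% strain and unload; strain is fully recovered (stress ~0 at end),
# with the energy of the hysteresis loop dissipated by the transformation.
path = np.concatenate([np.linspace(0, 0.08, 200), np.linspace(0.08, 0, 200)])
sigma = stress(path)
print(sigma.max(), sigma[-1])
```

The model reproduces the defining feature of superelasticity: strains far beyond the elastic limit of ordinary metals are recovered on unloading, at the cost of a stress hysteresis between the forward and reverse plateaus.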
Superelastic devices take advantage of their large, reversible deformation and include antennas , eyeglass frames, and biomedical stents .
Nickel titanium (Nitinol) is an example of an alloy exhibiting superelasticity.
Recently, there has been interest in discovering materials exhibiting superelasticity at the nanoscale for MEMS ( microelectromechanical systems ) applications. The ability to control the martensitic phase transformation has already been reported. [ 2 ] However, superelastic behavior has been observed to exhibit size effects at the nanoscale.
Qualitatively speaking, superelasticity is reversible deformation by phase transformation. Therefore, it competes with irreversible plastic deformation by dislocation motion. At the nanoscale, the dislocation density and the number of possible Frank–Read source sites are greatly reduced, so the yield stress increases with reduced size. Therefore, materials exhibiting superelastic behavior at the nanoscale have been found to sustain long-term cycling with little detrimental evolution. [ 3 ] On the other hand, the critical stress for the martensitic phase transformation to occur is also increased, because of the reduced number of possible sites for nucleation to begin. Nucleation usually begins near dislocations or at surface defects, but for nanoscale materials the dislocation density is greatly reduced and the surface is usually atomically smooth. Therefore, the phase transformation of nanoscale materials exhibiting superelasticity is usually found to be homogeneous, resulting in a much higher critical stress. [ 4 ] Specifically, for zirconia, which has three phases, the competition between phase transformation and plastic deformation has been found to be orientation dependent, [ 5 ] indicating the orientation dependence of the activation energies of dislocation motion and nucleation. Therefore, for nanoscale materials suitable for superelasticity, one should investigate the optimal crystal orientation and surface roughness for the most enhanced superelasticity effect. | https://en.wikipedia.org/wiki/Pseudoelasticity |
In logic , a pseudoelementary class is a class of structures derived from an elementary class (one definable in first-order logic ) by omitting some of its sorts and relations. It is the mathematical logic counterpart of the notion in category theory of (the codomain of) a forgetful functor , and in physics of (hypothesized) hidden variable theories purporting to explain quantum mechanics . Elementary classes are (vacuously) pseudoelementary but the converse is not always true; nevertheless pseudoelementary classes share some of the properties of elementary classes such as being closed under ultraproducts .
A pseudoelementary class is a reduct of an elementary class . That is, it is obtained by omitting some of the sorts and relations of a (many-sorted) elementary class.
A quasivariety defined logically as the class of models of a universal Horn theory can equivalently be defined algebraically as a class of structures closed under isomorphisms , subalgebras , and reduced products . Since the notion of reduced product is more intricate than that of direct product , it is sometimes useful to blend the logical and algebraic characterizations in terms of pseudoelementary classes. One such blended definition characterizes a quasivariety as a pseudoelementary class closed under isomorphisms, subalgebras, and direct products (the pseudoelementary property allows "reduced" to be simplified to "direct").
A corollary of this characterization is that one can (nonconstructively) prove the existence of a universal Horn axiomatization of a class by first axiomatizing some expansion of the structure with auxiliary sorts and relations and then showing that the pseudoelementary class obtained by dropping the auxiliary constructs is closed under subalgebras and direct products. This technique works for Example 2 because subalgebras and direct products of algebras of binary relations are themselves algebras of binary relations, showing that the class RRA of representable relation algebras is a quasivariety (and a fortiori an elementary class). This short proof is an effective application of abstract nonsense ; the stronger result by Tarski that RRA is in fact a variety required more honest toil. | https://en.wikipedia.org/wiki/Pseudoelementary_class |
Pseudoenzymes are variants of enzymes that are catalytically-deficient (usually inactive), meaning that they perform little or no enzyme catalysis . [ 1 ] They are believed to be represented in all major enzyme families in the kingdoms of life , where they have important signaling and metabolic functions, many of which are only now coming to light. [ 2 ] Pseudoenzymes are becoming increasingly important to analyse, especially as the bioinformatic analysis of genomes reveals their ubiquity. Their important regulatory and sometimes disease-associated functions in metabolic and signalling pathways are also shedding new light on the non-catalytic functions of active enzymes, of moonlighting proteins, [ 3 ] [ 4 ] the re-purposing of proteins in distinct cellular roles ( Protein moonlighting ). They are also suggesting new ways to target and interpret cellular signalling mechanisms using small molecules and drugs. [ 5 ] The most intensively analyzed, and certainly the best understood pseudoenzymes in terms of cellular signalling functions are probably the pseudokinases , the pseudoproteases and the pseudophosphatases. Recently, the pseudo-deubiquitylases have also begun to gain prominence. [ 6 ] [ 7 ]
The difference between enzymatically active and inactive homologues has been noted (and in some cases, understood when comparing catalytically active and inactive proteins residing in recognisable families) for some time at the sequence level, [ 8 ] owing to the absence of key catalytic residues. Some pseudoenzymes have also been referred to as 'prozymes' when they were analysed in protozoan parasites . [ 9 ] The best studied pseudoenzymes reside amongst various key signalling superfamilies of enzymes, such as the proteases , [ 10 ] the protein kinases , [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] protein phosphatases [ 18 ] [ 19 ] and ubiquitin modifying enzymes. [ 20 ] [ 21 ] The role of pseudoenzymes as "pseudo scaffolds" has also been recognised [ 22 ] and pseudoenzymes are now beginning to be more thoroughly studied in terms of their biology and function, in large part because they are also interesting potential targets (or anti-targets) for drug design in the context of intracellular cellular signalling complexes. [ 23 ] [ 24 ]
JAK1–3 and TYK2: the C-terminal tyrosine kinase domains are regulated by their adjacent pseudokinase domains. KSR1/2: regulates activation of the conventional protein kinase Raf.
STYX: competes with DUSP4 for binding to ERK1/2. | https://en.wikipedia.org/wiki/Pseudoenzyme |
Pseudoephedrine , sold under the brand name Sudafed among others, is a sympathomimetic medication which is used as a decongestant to treat nasal congestion . [ 1 ] [ 13 ] [ 2 ] It has also been used off-label for certain other indications, like treatment of low blood pressure . [ 14 ] [ 15 ] [ 16 ] At higher doses, it may produce various additional effects including stimulant , [ 17 ] [ 1 ] appetite suppressant , [ 18 ] and performance-enhancing effects. [ 19 ] [ 20 ] In relation to this, non-medical use of pseudoephedrine has been encountered. [ 17 ] [ 1 ] [ 18 ] [ 19 ] [ 20 ] The medication is taken by mouth . [ 1 ] [ 2 ]
Side effects of pseudoephedrine include insomnia , elevated heart rate , increased blood pressure , restlessness , dizziness , anxiety , and dry mouth , among others. [ 21 ] [ 2 ] [ 1 ] [ 22 ] Rarely, pseudoephedrine has been associated with serious cardiovascular complications like heart attack and hemorrhagic stroke . [ 18 ] [ 23 ] [ 15 ] Some people may be more sensitive to its cardiovascular effects. [ 22 ] [ 1 ] Pseudoephedrine acts as a norepinephrine releasing agent , thereby indirectly activating adrenergic receptors . [ 24 ] [ 2 ] [ 25 ] [ 1 ] As such, it is an indirectly acting sympathomimetic . [ 24 ] [ 2 ] [ 25 ] [ 1 ] Pseudoephedrine significantly crosses into the brain , but has some peripheral selectivity due to its hydrophilicity . [ 25 ] [ 26 ] Chemically, pseudoephedrine is a substituted amphetamine and is closely related to ephedrine , phenylpropanolamine , and amphetamine . [ 1 ] [ 13 ] [ 2 ] It is the (1 S ,2 S )- enantiomer of β-hydroxy- N -methylamphetamine . [ 27 ]
Along with ephedrine, pseudoephedrine occurs naturally in ephedra , which has been used for thousands of years in traditional Chinese medicine . [ 13 ] [ 28 ] It was first isolated from ephedra in 1889. [ 28 ] [ 13 ] [ 29 ] Subsequent to its synthesis in the 1920s, pseudoephedrine was introduced for medical use as a decongestant. [ 13 ] Pseudoephedrine is widely available over-the-counter (OTC) in both single-drug and combination preparations . [ 30 ] [ 22 ] [ 13 ] [ 2 ] Availability of pseudoephedrine has been restricted starting in 2005 as it can be used to synthesize methamphetamine . [ 13 ] [ 2 ] Phenylephrine has replaced pseudoephedrine in many over-the-counter oral decongestant products. [ 2 ] However, oral phenylephrine appears to be ineffective as a decongestant. [ 31 ] [ 32 ] In 2022, the combination with brompheniramine and dextromethorphan was the 265th most commonly prescribed medication in the United States, with more than 1 million prescriptions. [ 33 ] [ 34 ] In 2022, the combination with loratadine was the 289th most commonly prescribed medication in the United States, with more than 500,000 prescriptions. [ 33 ] [ 35 ]
Pseudoephedrine is a sympathomimetic and is well-known for shrinking swollen nasal mucous membranes, so it is often used as a decongestant . It reduces tissue hyperemia , edema , and nasal congestion commonly associated with colds or allergies . Other beneficial effects may include increasing the drainage of sinus secretions, and opening of obstructed Eustachian tubes . The same vasoconstriction action can also result in hypertension , which is a noted side effect of pseudoephedrine.
Pseudoephedrine can be used either as oral or as topical decongestant . Due to its stimulating qualities, however, the oral preparation is more likely to cause adverse effects, including urinary retention . [ 36 ] [ 37 ] According to one study, pseudoephedrine may show effectiveness as an antitussive drug (suppression of cough ). [ 38 ]
Pseudoephedrine is indicated for the treatment of nasal congestion, sinus congestion, and Eustachian tube congestion. [ 39 ] Pseudoephedrine is also indicated for vasomotor rhinitis and as an adjunct to other agents in the optimum treatment of allergic rhinitis , croup , sinusitis , otitis media , and tracheobronchitis . [ 39 ]
Amphetamine-type stimulants and other catecholaminergic agents are known to have wakefulness-promoting effects and are used in the treatment of hypersomnia and narcolepsy . [ 40 ] [ 41 ] [ 42 ] Pseudoephedrine at therapeutic doses does not appear to improve or worsen daytime sleepiness , daytime fatigue , or sleep quality in people with allergic rhinitis . [ 1 ] [ 43 ] Likewise, somnolence was not lower in children with the common cold treated with pseudoephedrine for nasal congestion. [ 44 ] In any case, insomnia is a known side effect of pseudoephedrine, although the incidence is low. [ 21 ] In addition, doses of pseudoephedrine above the normal therapeutic range have been reported to produce stimulant effects including insomnia and fatigue resistance. [ 17 ]
There has been interest in pseudoephedrine as an appetite suppressant for the treatment of obesity . [ 18 ] However, due to lack of clinical data and potential cardiovascular side effects, this use is not recommended. [ 18 ] Only a single placebo - controlled study of pseudoephedrine for weight loss exists (120 mg/day slow-release for 12 weeks) and found no significant difference in weight lost compared to placebo (-4.6 kg vs. -4.5 kg). [ 18 ] [ 45 ] This was in contrast to phenylpropanolamine , which has been found to be more effective at promoting weight loss compared to placebo and has been more widely studied and used in the treatment of obesity. [ 46 ] [ 47 ] [ 45 ]
Pseudoephedrine has been used limitedly in the treatment of orthostatic intolerance including orthostatic hypotension [ 14 ] and postural orthostatic tachycardia syndrome (POTS). [ 16 ] [ 48 ] [ 49 ] However, its effectiveness in the treatment of POTS is controversial. [ 16 ] [ 48 ] Pseudoephedrine has also been used limitedly in the treatment of refractory hypotension in intensive care units . [ 15 ] However, data on this use are limited to case reports and case series . [ 15 ]
Pseudoephedrine is also used as a first-line prophylactic for recurrent priapism . [ 50 ] Erection is largely a parasympathetic response, so the sympathetic action of pseudoephedrine may serve to relieve this condition. Data for this use are however anecdotal and effectiveness has been described as variable. [ 50 ]
Treatment of urinary incontinence is an off-label use for pseudoephedrine and related medications. [ 51 ] [ 52 ]
Pseudoephedrine is available by itself over-the-counter in the form of 30 and 60 mg immediate-release and 120 and 240 mg extended-release oral tablets in the United States . [ 30 ] [ 53 ] [ 54 ] [ 55 ]
Pseudoephedrine is also available over-the-counter and prescription-only in combination with numerous other drugs, including antihistamines ( acrivastine , azatadine , brompheniramine , cetirizine , chlorpheniramine , clemastine , desloratadine , dexbrompheniramine , diphenhydramine , fexofenadine , loratadine , triprolidine ), analgesics ( acetaminophen , codeine , hydrocodone , ibuprofen , naproxen ), cough suppressants ( dextromethorphan ), and expectorants ( guaifenesin ). [ 30 ] [ 53 ]
Pseudoephedrine has been used in the form of the hydrochloride and sulfate salts and in a polistirex form. [ 30 ] The drug has been used in more than 135 over-the-counter and prescription formulations. [ 22 ] Many prescription formulations containing pseudoephedrine have been discontinued over time. [ 30 ]
Pseudoephedrine is contraindicated in patients with diabetes mellitus , cardiovascular disease , severe or uncontrolled hypertension , severe coronary artery disease , prostatic hypertrophy , hyperthyroidism , or closed-angle glaucoma , and in pregnant women. [ 56 ] The safety and effectiveness of nasal decongestant use in children is unclear. [ 57 ]
Common side effects with pseudoephedrine therapy may include central nervous system (CNS) stimulation , insomnia , restlessness , excitability , dizziness , and anxiety . [ 18 ] [ 15 ] [ 58 ] Infrequent side effects include tachycardia or palpitations . [ 18 ] Rarely, pseudoephedrine therapy may be associated with mydriasis (dilated pupils), hallucinations , arrhythmias , hypertension , seizures , and ischemic colitis ; as well as severe skin reactions known as recurrent pseudo-scarlatina, systemic contact dermatitis , and non-pigmenting fixed drug eruption . [ 18 ] [ 59 ] [ 56 ] Pseudoephedrine, particularly when combined with other drugs including narcotics , may also play a role in the precipitation of episodes of psychosis . [ 18 ] [ 60 ] It has also been reported that pseudoephedrine, among other sympathomimetic agents, may be associated with the occurrence of hemorrhagic stroke and other cardiovascular complications . [ 18 ] [ 23 ] [ 15 ]
Due to its sympathomimetic effects, pseudoephedrine is a vasoconstrictor and pressor agent (increases blood pressure ), a positive chronotrope (increases heart rate ), and a positive inotrope (increases force of heart contractions ). [ 18 ] [ 1 ] [ 22 ] [ 19 ] [ 20 ] The influence of pseudoephedrine on blood pressure at clinical doses is controversial. [ 1 ] [ 22 ] A closely related sympathomimetic and decongestant, phenylpropanolamine , was withdrawn due to associations with markedly increased blood pressure and incidence of hemorrhagic stroke. [ 22 ] There has been concern that pseudoephedrine may likewise dangerously increase blood pressure and thereby increase the risk of stroke, whereas others have contended that the risks are exaggerated. [ 1 ] [ 22 ] Besides hemorrhagic stroke, myocardial infarction , coronary vasospasm , and sudden death have also rarely been reported with sympathomimetic ephedra compounds like pseudoephedrine and ephedrine . [ 18 ] [ 15 ]
A 2005 meta-analysis found that pseudoephedrine at recommended doses had no meaningful effect on systolic or diastolic blood pressure in healthy individuals or people with controlled hypertension . [ 1 ] [ 22 ] Systolic blood pressure was found to increase slightly by 0.99 mm Hg on average, and heart rate by 2.83 bpm on average. [ 1 ] [ 22 ] Conversely, there was no significant influence on diastolic blood pressure, which increased by 0.63 mm Hg. [ 22 ] In people with controlled hypertension, systolic blood pressure increased by a similar degree of 1.20 mm Hg. [ 22 ] Immediate-release preparations, higher doses, being male, and shorter duration of use were all associated with greater cardiovascular effects. [ 22 ] A small subset of individuals with autonomic instability , perhaps in turn resulting in greater adrenergic receptor sensitivity, may be substantially more sensitive to the cardiovascular effects of sympathomimetics. [ 22 ] Subsequent to the 2005 meta-analysis, a 2015 systematic review and a 2018 meta-analysis found that pseudoephedrine at high doses (>170 mg) could increase heart rate and physical performance with larger effect sizes than lower doses. [ 19 ] [ 20 ]
A 2007 Cochrane review assessed the side effects of short-term use of pseudoephedrine at recommended doses as a nasal decongestant. [ 21 ] It found that pseudoephedrine had a small risk of insomnia and this was the only side effect that occurred at rates significantly different from placebo. [ 21 ] Insomnia occurred at a rate of 5% and had an odds ratio (OR) of 6.18. [ 21 ] Other side effects, including headache and hypertension , occurred at rates of less than 4% and were not different from placebo. [ 21 ]
Tachyphylaxis is known to develop with prolonged use of pseudoephedrine, especially when it is re-administered at short intervals. [ 1 ] [ 18 ]
There is a case report of temporary depressive symptoms upon discontinuation and withdrawal from pseudoephedrine. [ 18 ] [ 61 ] The withdrawal symptoms included worsened mood and sadness , profoundly decreased energy , a worsened view of oneself, decreased concentration, psychomotor retardation , increased appetite , and increased need for sleep . [ 18 ] [ 61 ]
Pseudoephedrine has psychostimulant effects at high doses and is a positive reinforcer with amphetamine -like effects in animals including rats and monkeys. [ 62 ] [ 63 ] [ 64 ] [ 65 ] However, it is substantially less potent than methamphetamine or cocaine . [ 62 ] [ 63 ] [ 64 ]
The maximum total daily dose of pseudoephedrine is 240 mg. [ 1 ] Symptoms of overdose may include sedation , apnea , impaired concentration, cyanosis , coma , circulatory collapse , insomnia , hallucinations , tremors , convulsions , headache , dizziness , anxiety , euphoria , tinnitus , blurred vision , ataxia , chest pain , tachycardia , palpitations , increased blood pressure , decreased blood pressure , thirstiness , sweating , difficulty with urination , nausea , and vomiting . [ 1 ] In children, symptoms have more often included dry mouth , pupil dilation , hot flashes , fever , and gastrointestinal dysfunction . [ 1 ] Pseudoephedrine may produce toxic effects both at supratherapeutic doses and in people who are more sensitive to the effects of sympathomimetics. [ 1 ] Misuse of the drug has been reported in one case at massive doses of 3,000 to 4,500 mg (100–150 × 30-mg tablets) per day, with the doses gradually increased over time by this individual. [ 1 ] [ 66 ] No fatalities due to pseudoephedrine misuse have been reported as of 2021. [ 17 ] However, death with pseudoephedrine has been reported generally. [ 1 ] [ 13 ] [ 18 ]
Concomitant or recent (previous 14 days) monoamine oxidase inhibitor (MAOI) use can lead to hypertensive reactions , including hypertensive crisis , and should be avoided. [ 1 ] [ 56 ] Clinical studies have found minimal or no influence of certain MAOIs like the weak non-selective MAOI linezolid and the potent selective MAO-B inhibitor selegiline (as a transdermal patch ) on the pharmacokinetics of pseudoephedrine. [ 67 ] [ 68 ] [ 69 ] [ 70 ] This is in accordance with the fact that pseudoephedrine is not metabolized by monoamine oxidase (MAO). [ 25 ] [ 11 ] [ 71 ] However, pseudoephedrine induces the release of norepinephrine , which MAOIs inhibit the metabolism of, and as such, MAOIs can still potentiate the effects of pseudoephedrine. [ 72 ] [ 1 ] [ 68 ] No significant pharmacodynamic interactions have been found with selegiline, [ 68 ] [ 70 ] but linezolid potentiated blood pressure increases with pseudoephedrine. [ 67 ] [ 69 ] However, this was deemed to be without clinical significance in the case of linezolid, though it was noted that some individuals may be more sensitive to the sympathomimetic effects of pseudoephedrine and related agents. [ 67 ] [ 69 ] Pseudoephedrine is contraindicated with MAOIs like phenelzine , tranylcypromine , isocarboxazid , and moclobemide due to the potential for synergistic sympathomimetic effects and hypertensive crisis. [ 1 ] [ 18 ] It is also considered to be contraindicated with linezolid and selegiline as some individuals may react more sensitively to coadministration. [ 67 ] [ 69 ] [ 68 ] [ 70 ]
Concomitant use of pseudoephedrine with other vasoconstrictors , including ergot alkaloids like ergotamine and dihydroergotamine , linezolid , oxytocin , ephedrine , phenylephrine , and bromocriptine , among others, is not recommended due to the possibility of greater increases in blood pressure and risk of hemorrhagic stroke . [ 1 ] Sympathomimetic effects and cardiovascular risks of pseudoephedrine may also be increased with digitalis glycosides , tricyclic antidepressants , appetite suppressants , and inhalational anesthetics . [ 1 ] Likewise, greater sympathomimetic effects of pseudoephedrine may occur when it is combined with other sympathomimetic agents. [ 18 ] Rare but serious cardiovascular complications have been reported with the combination of pseudoephedrine and bupropion . [ 13 ] [ 73 ] [ 74 ] Increase of ectopic pacemaker activity can occur when pseudoephedrine is used concomitantly with digitalis . [ 1 ] The antihypertensive effects of methyldopa , guanethidine , mecamylamine , reserpine , and veratrum alkaloids may be reduced by sympathomimetics like pseudoephedrine. [ 1 ] Beta blockers like labetalol may reduce the effects of pseudoephedrine. [ 75 ] [ 76 ]
Urinary acidifying agents like ascorbic acid and ammonium chloride can increase the excretion of amphetamines, including pseudoephedrine, and thereby reduce exposure to them. Conversely, urinary alkalinizing agents, including antacids like sodium bicarbonate as well as acetazolamide , can reduce the excretion of these agents and thereby increase exposure to them. [ 1 ] [ 11 ] [ 77 ]
Pseudoephedrine is a sympathomimetic agent which acts primarily or exclusively by inducing the release of norepinephrine . [ 78 ] [ 25 ] [ 2 ] [ 24 ] Hence, it is an indirectly acting sympathomimetic. [ 78 ] [ 25 ] [ 2 ] Some sources state that pseudoephedrine has a mixed mechanism of action consisting of both indirect and direct effects by binding to and acting as an agonist of adrenergic receptors . [ 1 ] [ 15 ] However, the affinity of pseudoephedrine for adrenergic receptors is described as very low or negligible. [ 78 ] Animal studies suggest that the sympathomimetic effects of pseudoephedrine are exclusively due to norepinephrine release. [ 79 ] [ 80 ]
Pseudoephedrine induces monoamine release in vitro with an EC 50 (half-maximal effective concentration) of 224 nM for norepinephrine and 1,988 nM for dopamine , whereas it is inactive for serotonin . [ 24 ] [ 86 ] [ 82 ] As such, it is about 9-fold selective for induction of norepinephrine release over dopamine release. [ 24 ] [ 86 ] [ 82 ] The drug has negligible agonistic activity at the α 1 - and α 2 -adrenergic receptors (K act > 10,000 nM). [ 24 ] At the β 1 - and β 2 -adrenergic receptors , it acts as a partial agonist with relatively low affinity (β 1 : K act = 309 μM, intrinsic activity (IA) = 53%; β 2 : K act = 10 μM, IA = 47%). [ 87 ] It is an antagonist or very weak partial agonist of the β 3 -adrenergic receptor (K act = ND; IA = 7%). [ 87 ] It is about 30,000 to 40,000 times less potent as a β-adrenergic receptor agonist than (–)-isoproterenol . [ 87 ]
Pseudoephedrine's principal mechanism of action involves the adrenergic system. [ 88 ] [ 89 ] The vasoconstriction that pseudoephedrine produces is believed to be principally an α-adrenergic receptor response. [ 90 ] Pseudoephedrine acts on α- and β 2 -adrenergic receptors to cause vasoconstriction and relaxation of smooth muscle in the bronchi, respectively. [ 88 ] [ 89 ] α-Adrenergic receptors are located on the muscles lining the walls of blood vessels. When these receptors are activated, the muscles contract, causing the blood vessels to constrict (vasoconstriction). The constricted blood vessels then allow less fluid to leave the blood and enter the nose, throat, and sinus linings, which results in decreased inflammation of nasal membranes as well as decreased mucus production. Thus, by constricting blood vessels, mainly those located in the nasal passages, pseudoephedrine reduces the symptoms of nasal congestion. [ 2 ] Activation of β 2 -adrenergic receptors produces relaxation of the smooth muscle of the bronchi, [ 88 ] causing bronchial dilation and in turn decreasing congestion (although not fluid) and difficulty breathing.
Pseudoephedrine is less potent as a sympathomimetic and psychostimulant than ephedrine . [ 1 ] [ 58 ] Clinical studies have found that pseudoephedrine is about 3.5- to 4-fold less potent than ephedrine as a sympathomimetic agent in terms of blood pressure increases and 3.5- to 7.2-fold or more less potent as a bronchodilator . [ 58 ] Pseudoephedrine is also said to have much less central effect than ephedrine and to be only a weak psychostimulant. [ 25 ] [ 58 ] [ 2 ] [ 78 ] [ 65 ] Blood vessels in the nose are around five times more sensitive than the heart to the actions of circulating epinephrine (adrenaline), which may help to explain how pseudoephedrine at the low doses used in over-the-counter products can produce nasal decongestion with minimal effects on the heart. [ 2 ] Compared to dextroamphetamine , pseudoephedrine is about 30 to 35 times less potent as a norepinephrine releasing agent and 80 to 350 times less potent as a dopamine releasing agent in vitro . [ 24 ] [ 83 ] [ 84 ]
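As a rough consistency check of the potency comparisons above, the monoamine-release EC 50 values quoted earlier for pseudoephedrine can be combined with commonly cited in-vitro values for dextroamphetamine. The dextroamphetamine figures below are assumed literature values, not taken from this article, so the sketch is illustrative only:

```python
# Back-of-the-envelope check of the potency ratios quoted above.
# Pseudoephedrine EC50s (nM) are from this article; the dextroamphetamine
# values are assumed literature figures, not taken from this article.
ec50_nm = {
    "pseudoephedrine": {"NE": 224, "DA": 1988},
    "dextroamphetamine": {"NE": 7, "DA": 25},  # assumed values
}

for monoamine in ("NE", "DA"):
    ratio = ec50_nm["pseudoephedrine"][monoamine] / ec50_nm["dextroamphetamine"][monoamine]
    print(f"{monoamine} release: pseudoephedrine ~{ratio:.0f}x less potent")

# Selectivity of pseudoephedrine for NE over DA release (~9-fold):
print(f"NE/DA selectivity: ~{1988 / 224:.1f}-fold")
```

With these assumed inputs the ratios land at roughly 32-fold (norepinephrine) and 80-fold (dopamine), consistent with the ranges quoted above.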
Pseudoephedrine is a very weak reversible inhibitor of monoamine oxidase (MAO) in vitro , including both MAO-A and MAO-B (K i = 1,000–5,800 μM). [ 91 ] It is far less potent in this action than other agents like dextroamphetamine and moclobemide . [ 91 ]
Pseudoephedrine is orally active and is readily absorbed from the gastrointestinal tract . [ 1 ] [ 2 ] Its oral bioavailability is approximately 100%. [ 8 ] The drug reaches peak concentrations after 1 to 4 hours (mean 1.9 hours) in the case of the immediate-release formulation and after 2 to 6 hours in the case of the extended-release formulation. [ 1 ] [ 2 ] The onset of action of pseudoephedrine is 30 minutes. [ 1 ]
Pseudoephedrine, due to its lack of polar phenolic groups , is relatively lipophilic . [ 11 ] This is a property it shares with related sympathomimetic and decongestant agents like ephedrine and phenylpropanolamine . [ 11 ] These agents are widely distributed throughout the body and cross the blood–brain barrier . [ 11 ] However, it is said that pseudoephedrine and phenylpropanolamine cross the blood-brain barrier only to some extent and that pseudoephedrine has limited central nervous system activity, suggesting that it is partially peripherally selective . [ 25 ] [ 26 ] The blood-brain barrier permeability of pseudoephedrine, ephedrine, and phenylpropanolamine is reduced compared to other amphetamines due to the presence of a hydroxyl group at the β carbon which decreases their lipophilicity . [ 26 ] As such, they have a greater ratio of peripheral cardiovascular to central psychostimulant effect. [ 26 ] Besides entering the brain, these substances also cross the placenta and enter breast milk . [ 11 ]
The plasma protein binding of pseudoephedrine has been reported to be approximately 21 to 29%. [ 9 ] [ 10 ] It is bound to α 1 -acid glycoprotein (AGP) and albumin (HSA). [ 9 ] [ 10 ]
Pseudoephedrine is not extensively metabolized and is subjected to minimal first-pass metabolism with oral administration. [ 11 ] [ 1 ] [ 2 ] Due to its methyl group at the α carbon (i.e., it is an amphetamine ), pseudoephedrine is not a substrate for monoamine oxidase (MAO) and is not metabolized by this enzyme . [ 25 ] [ 11 ] [ 71 ] [ 72 ] It is also not metabolized by catechol O -methyltransferase (COMT). [ 25 ] Pseudoephedrine is demethylated into the metabolite norpseudoephedrine to a small extent. [ 1 ] [ 11 ] Similarly to pseudoephedrine, this metabolite is active and shows amphetamine -like effects. [ 11 ] Approximately 1 to 6% of pseudoephedrine is metabolized in the liver via N -demethylation to form norpseudoephedrine. [ 1 ]
Pseudoephedrine is excreted primarily via the kidneys in urine . [ 1 ] [ 11 ] Its urinary excretion is highly influenced by urinary pH and is increased when the urine is acidic and is decreased when it is alkaline . [ 1 ] [ 11 ] [ 58 ]
The elimination half-life of pseudoephedrine on average is 5.4 hours [ 2 ] and ranges from 3 to 16 hours depending on urinary pH. [ 1 ] [ 11 ] At a pH of 5.6 to 6.0, the elimination half-life of pseudoephedrine was 5.2 to 8.0 hours. [ 11 ] In one study, a more acidic pH of 5.0 resulted in a half-life of 3.0 to 6.4 hours, whereas a more alkaline pH of 8.0 resulted in a half-life of 9.2 to 16.0 hours. [ 11 ] Substances that influence urinary acidity and are known to affect the excretion of amphetamine derivatives include urinary acidifying agents like ascorbic acid and ammonium chloride as well as urinary alkalinizing agents like acetazolamide . [ 77 ]
A majority of an oral dose of pseudoephedrine is excreted unchanged in urine within 24 hours of administration. [ 11 ] This has been found to range from 43 to 96%. [ 1 ] [ 11 ] [ 2 ] The amount excreted unchanged is dependent on urinary pH similarly to the drug's half-life, as a longer half-life and duration in the body allows more time for the drug to be metabolized. [ 11 ]
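To illustrate what the pH-dependent half-life means in practice, here is a minimal sketch assuming simple first-order (exponential) elimination, which is an idealization of the kinetics described above:

```python
# Fraction of a dose remaining under idealized first-order elimination.
def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    return 0.5 ** (t_hours / half_life_hours)

# Approximate half-lives reported above for different urinary pH values.
for label, t_half in [("acidic urine (~3 h)", 3.0),
                      ("average (~5.4 h)", 5.4),
                      ("alkaline urine (~16 h)", 16.0)]:
    print(f"{label}: {fraction_remaining(24, t_half):.1%} remaining after 24 h")
```

Under these assumptions, roughly 0.4% of a dose remains after 24 hours with acidic urine versus about 35% with alkaline urine, which is why urinary pH so strongly affects both the half-life and the fraction excreted unchanged.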
The duration of action of pseudoephedrine, which is dependent on its elimination , is 4 to 12 hours. [ 1 ] [ 12 ]
Pseudoephedrine has been reported to accumulate in people with renal impairment . [ 92 ] [ 93 ] [ 94 ]
Pseudoephedrine, also known structurally as (1 S ,2 S )-α, N -dimethyl-β-hydroxyphenethylamine or as (1 S ,2 S )- N -methyl-β-hydroxyamphetamine, is a substituted phenethylamine , amphetamine , and β-hydroxyamphetamine derivative . [ 1 ] [ 13 ] [ 2 ] It is a diastereomer of ephedrine . [ 28 ]
Pseudoephedrine is a small-molecule compound with the molecular formula C 10 H 15 NO and a molecular weight of 165.23 g/mol. [ 27 ] [ 95 ] It has an experimental log P of 0.89, while its predicted log P values range from 0.9 to 1.32. [ 27 ] [ 95 ] [ 96 ] The compound is relatively lipophilic , [ 11 ] but is also more hydrophilic than other amphetamines. [ 26 ] The lipophilicity of amphetamines is closely related to their brain permeability. [ 97 ] For comparison to pseudoephedrine, the experimental log P of methamphetamine is 2.1, [ 98 ] of amphetamine is 1.8, [ 99 ] [ 98 ] of ephedrine is 1.1, [ 100 ] of phenylpropanolamine is 0.7, [ 101 ] of phenylephrine is -0.3, [ 102 ] and of norepinephrine is -1.2. [ 103 ] Methamphetamine has high brain permeability, [ 98 ] whereas phenylephrine and norepinephrine are peripherally selective drugs . [ 2 ] [ 104 ] The optimal log P for brain permeation and central activity is about 2.1 (range 1.5–2.7). [ 105 ]
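The log P comparison in this paragraph can be collated programmatically; the values and the optimal window below are the ones quoted above, and the ranking itself is purely illustrative:

```python
# Experimental log P values quoted above, checked against the stated
# optimal window for brain permeation (about 1.5 to 2.7).
logp = {
    "methamphetamine": 2.1,
    "amphetamine": 1.8,
    "ephedrine": 1.1,
    "pseudoephedrine": 0.89,
    "phenylpropanolamine": 0.7,
    "phenylephrine": -0.3,
    "norepinephrine": -1.2,
}

for name, value in sorted(logp.items(), key=lambda kv: kv[1], reverse=True):
    in_window = 1.5 <= value <= 2.7
    print(f"{name:20s} log P = {value:5.2f}   in optimal CNS window: {in_window}")
```

Only methamphetamine and amphetamine fall inside the window, matching the text's observation that pseudoephedrine is more hydrophilic than other amphetamines while phenylephrine and norepinephrine are peripherally selective.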
Pseudoephedrine is readily reduced into methamphetamine or oxidized into methcathinone . [ 1 ]
The dextrorotatory (+)- or d- enantiomer is (1 S ,2 S )-pseudoephedrine, whereas the levorotatory (−)- or l- form is (1 R ,2 R )-pseudoephedrine.
In the outdated D/L system (+)-pseudoephedrine is also referred to as L- pseudoephedrine and (−)-pseudoephedrine as D- pseudoephedrine (in the Fischer projection then the phenyl ring is drawn at bottom). [ 106 ] [ 107 ]
The D/L system (written with small caps ) and the d/l system (written with lower-case letters) are often confused. As a result, the dextrorotatory d-pseudoephedrine is wrongly named D- pseudoephedrine, and the levorotatory l-ephedrine (the diastereomer) is wrongly named L- ephedrine.
The IUPAC names of the two enantiomers are (1 S ,2 S )- and (1 R ,2 R )-2-methylamino-1-phenylpropan-1-ol, respectively. Synonyms for both are psi -ephedrine and threo -ephedrine.
Pseudoephedrine is the International Nonproprietary Name (INN) of the (+)-form when used as a pharmaceutical substance. [ 108 ]
Pseudoephedrine may be quantified in blood, plasma, or urine to monitor any possible performance-enhancing use by athletes, confirm a diagnosis of poisoning, or to assist in a medicolegal death investigation. Some commercial immunoassay screening tests directed at the amphetamines cross-react appreciably with pseudoephedrine, but chromatographic techniques can easily distinguish pseudoephedrine from other phenethylamine derivatives. Blood or plasma pseudoephedrine concentrations are typically in the 50 to 300 μg/L range in persons taking the drug therapeutically, 500 to 3,000 μg/L in people with substance use disorder involving pseudoephedrine or poisoned patients, and 10 to 70 mg/L in cases of acute fatal overdose . [ 109 ] [ 110 ]
Although pseudoephedrine occurs naturally as an alkaloid in certain plant species (for example, as a constituent of extracts from the Ephedra species, also known as ma huang , in which it occurs together with other isomers of ephedrine ), the majority of pseudoephedrine produced for commercial use is derived from yeast fermentation of dextrose in the presence of benzaldehyde . In this process, specialized strains of yeast (typically a variety of Candida utilis or Saccharomyces cerevisiae ) are added to large vats containing water, dextrose and the enzyme pyruvate decarboxylase (such as found in beets and other plants). After the yeast has begun fermenting the dextrose, the benzaldehyde is added to the vats, and in this environment, the yeast converts the ingredients to the precursor l-phenylacetylcarbinol (L-PAC). L-PAC is then chemically converted to pseudoephedrine via reductive amination . [ 111 ]
The bulk of pseudoephedrine is produced by commercial pharmaceutical manufacturers in India and China, where economic and industrial conditions favor its mass production for export. [ 112 ]
Pseudoephedrine, along with ephedrine , occurs naturally in ephedra . [ 13 ] [ 28 ] [ 113 ] This herb has been used for thousands of years in traditional Chinese medicine . [ 13 ] [ 28 ] [ 113 ] Pseudoephedrine was first isolated and characterized in 1889 by the German chemists Ladenburg and Oelschlägel, who used a sample that had been isolated from Ephedra vulgaris by the Merck pharmaceutical corporation of Darmstadt , Germany. [ 28 ] [ 29 ] [ 114 ] It was first synthesized in the 1920s in Japan . [ 13 ] Subsequently, pseudoephedrine was introduced for medical use as a decongestant. [ 13 ]
Pseudoephedrine is the generic name of the drug and its International Nonproprietary Name (INN) and British Approved Name (BAN), while pseudoéphédrine is its Dénomination Commune Française (DCF) and pseudoefedrina is its Denominazione Comune Italiana (DCIT). [ 115 ] [ 116 ] [ 117 ] [ 118 ] Pseudoephedrine hydrochloride is its United States Adopted Name (USAN) and British Approved Name (BANM) in the case of the hydrochloride salt ; pseudoephedrine sulfate is its USAN in the case of the sulfate salt; pseudoephedrine polistirex is its USAN in the case of the polistirex form; and d-isoephedrine sulfate is its Japanese Accepted Name (JAN) in the case of the sulfate salt. [ 115 ] [ 116 ] [ 117 ] [ 118 ] Pseudoephedrine is also known as Ψ-ephedrine and isoephedrine . [ 115 ] [ 117 ]
Many consumer medicines either contain pseudoephedrine or have switched to a less-regulated alternative such as phenylephrine .
Over-the-counter pseudoephedrine has been misused as a psychostimulant . [ 17 ] Six case reports and one case series of pseudoephedrine misuse have been published as of 2021. [ 17 ] There is a case report of self-medication with pseudoephedrine in massive doses for treatment of depression . [ 17 ] [ 66 ]
Pseudoephedrine has been used as a performance-enhancing drug in exercise and sports due to its sympathomimetic and stimulant effects. [ 19 ] [ 20 ] Because of these effects, pseudoephedrine can increase heart rate , elevate blood pressure , improve mental energy , and reduce fatigue , among other performance-enhancing effects. [ 19 ] [ 20 ] [ 22 ]
A 2015 systematic review found that pseudoephedrine lacked performance-enhancing effects at therapeutic doses (60–120 mg) but significantly enhanced athletic performance at supratherapeutic doses (≥180 mg). [ 19 ] A subsequent 2018 meta-analysis , which included seven additional studies, found that pseudoephedrine had a small positive effect on heart rate (standardized mean difference (SMD) = 0.43) but insignificant effects on time trials, perceived exertion ratings, blood glucose levels, and blood lactate levels. [ 20 ] However, subgroup analyses revealed that effect sizes were larger for heart rate increases and quicker time trials in well-trained athletes and younger participants, for shorter exercise sessions with pseudoephedrine administered within 90 minutes beforehand, and with higher doses of pseudoephedrine. [ 20 ] A dose–response relationship was established, with larger doses (>170 mg) showing greater increases in heart rate and faster time trials than smaller doses (≤170 mg) (SMD = 0.85 for heart rate and SMD = −0.24 for time trials, respectively). [ 20 ] In any case, the meta-analysis concluded that the performance-enhancing effects of pseudoephedrine were marginal to small and likely to be lower in magnitude than those of caffeine . [ 20 ] It is relevant in this regard that caffeine is a permitted stimulant in competitive sports. [ 20 ]
Pseudoephedrine was on the International Olympic Committee 's (IOC) banned substances list until 2004 when the World Anti-Doping Agency (WADA) list replaced the IOC list. Although WADA initially only monitored pseudoephedrine, it went back onto the "banned" list on 1 January 2010. [ 122 ]
Pseudoephedrine is excreted through urine, and the concentration in urine of this drug shows a large inter-individual spread; that is, the same dose can give a vast difference in urine concentration for different individuals. [ 123 ] Pseudoephedrine is approved to be taken up to 240 mg per day. In seven healthy male subjects, this dose yielded a urine concentration range of 62.8 to 294.4 microgram per milliliter (μg/mL) with mean ± standard deviation 149 ± 72 μg/mL. [ 124 ] Thus, normal dosage of 240 mg pseudoephedrine per day can result in urine concentration levels exceeding the limit of 150 μg/mL set by WADA for about half of all users. [ 125 ] Furthermore, hydration status does not affect the urinary concentration of pseudoephedrine. [ 126 ]
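A quick back-of-the-envelope check of the "about half of all users" figure, assuming urinary concentrations are roughly normally distributed with the reported mean and standard deviation (the normality assumption is mine, not the source's):

```python
# Estimate the share of users exceeding WADA's 150 ug/mL urinary limit,
# assuming concentrations ~ Normal(mean=149, sd=72) per the study above.
# The normality assumption is illustrative, not taken from the source.
from statistics import NormalDist

conc = NormalDist(mu=149, sigma=72)
p_over_limit = 1 - conc.cdf(150)
print(f"Estimated fraction over 150 ug/mL: {p_over_limit:.1%}")  # about 49%
```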
Its membership in the amphetamine class has made pseudoephedrine a sought-after chemical precursor in the illicit manufacture of methamphetamine and methcathinone . [ 1 ] As a result of the increasing regulatory restrictions on the sale and distribution of pseudoephedrine, pharmaceutical firms have reformulated medications to use alternative compounds, particularly phenylephrine , even though its efficacy as an oral decongestant has been demonstrated to be indistinguishable from placebo. [ 136 ]
In the United States, federal laws control the sale of pseudoephedrine-containing products. [ 137 ] [ 138 ] [ 139 ] Retailers in the US have created corporate policies restricting the sale of pseudoephedrine-containing products. [ 140 ] [ 141 ] Their policies restrict sales by limiting purchase quantities and requiring a minimum age and government issued photographic identification. [ 138 ] [ 139 ] These requirements are similar to and sometimes more stringent than existing law. Internationally, pseudoephedrine is listed as a Table I precursor under the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances . [ 142 ]
Illicit diversion of pseudoephedrine in Australia has caused significant changes to the way the products are regulated. As of 2006, all products containing pseudoephedrine have been rescheduled as either "Pharmacist Only Medicines" (Schedule 3) or "Prescription Only Medicines" (Schedule 4), depending on the amount of pseudoephedrine in the product. A Pharmacist Only Medicine may only be sold to the public if a pharmacist is directly involved in the transaction. These medicines must be kept behind the counter, away from public access.
Pharmacists are also encouraged (and in some states required) to log purchases with the online database Project STOP. [ 143 ]
As a result, some pharmacies no longer stock Sudafed, the common brand of pseudoephedrine cold/sinus tablets, opting instead to sell Sudafed PE, a phenylephrine product that has not been proven effective in clinical trials. [ 136 ] [ 144 ] [ 2 ]
Until 2024, several formulations of pseudoephedrine were available over-the-counter in Belgium. [ 145 ] However, new legislation came into effect in November 2024, banning the over-the-counter sale of all medicines containing pseudoephedrine. [ 146 ] [ 147 ]
Health Canada has investigated the risks and benefits of pseudoephedrine and ephedrine / Ephedra . Near the end of the study, Health Canada issued a warning on their website stating that those who are under the age of 12, or who have heart disease or are at risk of stroke, should avoid taking pseudoephedrine and ephedrine. They also warned that everyone should avoid taking ephedrine or pseudoephedrine with other stimulants like caffeine , and banned all products that contain both ephedrine (or pseudoephedrine) and caffeine. [ 148 ]
Products whose only medicinal ingredient is pseudoephedrine must be kept behind the pharmacy counter. Products containing pseudoephedrine along with other medicinal ingredients may be displayed on store shelves but may be sold only in a pharmacy when a pharmacist is present. [ 149 ] [ 150 ]
The Colombian government prohibited the trade of pseudoephedrine in 2010. [ 151 ]
Pseudoephedrine is an over-the-counter drug in Estonia. [ 152 ]
Pseudoephedrine medicines can only be obtained with a prescription in Finland. [ 153 ]
In France, pseudoephedrine-containing combination products are available over the counter from pharmacies, most commonly with paracetamol under the brand name "Dolihrume". Products combining pseudoephedrine and ibuprofen or certain antihistamines are also available, but products containing pseudoephedrine as a single ingredient are not sold. In October 2023, the French health department officially warned against the usage of pseudoephedrine for patients with a cold. It also suggested the substance's availability could be restricted in the future, pending its pharmaceutical re-evaluation at EU level. [ 154 ] [ 155 ] In December 2024, the government announced pseudoephedrine medicines would henceforth only be obtainable with a prescription.
Various pseudoephedrine-containing products in combination with ibuprofen, aspirin , or antihistamines can be obtained without a prescription upon request at a pharmacy. Common names include Aspirin Complex, Reactine Duo, and RhinoPront. Products containing pseudoephedrine as a single ingredient are not available.
Medications that contain more than 10% pseudoephedrine are prohibited under the Stimulants Control Law in Japan. [ 156 ]
On 23 November 2007, the use and trade of pseudoephedrine in Mexico was made illegal as it was argued that it was extremely popular as a precursor in the synthesis of methamphetamine. [ 157 ]
Pseudoephedrine was withdrawn from sale in 1989 due to concerns about adverse cardiac side effects. [ 158 ]
Since April 2024, pseudoephedrine has been classified as a restricted (pharmacist-only) drug under the Misuse of Drugs Act 1975, which allows the purchase of medicines containing pseudoephedrine from a pharmacist without a prescription. [ 159 ]
Pseudoephedrine, ephedrine, and any product containing these substances, e.g. cold and flu medicines, were first classified in October 2004 as Class C Part III (partially exempted) controlled drugs, due to being principal ingredients in the production of methamphetamine. [ 160 ] New Zealand Customs and police officers continued to make large interceptions of precursor substances believed to be destined for methamphetamine production. On 9 October 2009, Prime Minister John Key announced pseudoephedrine-based cold and flu tablets would become prescription-only drugs and be reclassified as class B2 drugs. [ 161 ] The law was amended by The Misuse of Drugs Amendment Bill 2010, which passed in August 2011. [ 162 ]
On 24 November 2023, the recently formed National-led coalition government announced that the sale of cold medication containing pseudoephedrine would be allowed (as part of the coalition agreement between the National and ACT parties). [ 163 ]
Pseudoephedrine is available without a prescription in combination (with aspirin ) under the brand name "Aspirin Complex". There is also a single-ingredient 120 mg extended-release tablet that can be obtained at pharmacies with a prescription or after consultation with a pharmacist.
In Turkey, medications containing pseudoephedrine are available by prescription only. [ 164 ]
In the UK, pseudoephedrine is available over the counter under the supervision of a qualified pharmacist, or on prescription. In 2007, the MHRA reacted to concerns over the diversion of ephedrine and pseudoephedrine for the illicit manufacture of methamphetamine by introducing voluntary restrictions limiting over-the-counter sales to one box containing no more than 720 mg of pseudoephedrine in total per transaction. These restrictions became law in April 2008. [ 165 ] No form of ID is required.
The United States Congress has recognized that pseudoephedrine is used in the illegal manufacture of methamphetamine. In 2005, the Committee on Education and the Workforce heard testimony concerning education programs and state legislation designed to curb this illegal practice.
Attempts to control the sale of the drug date back to 1986, when federal officials at the Drug Enforcement Administration (DEA) first drafted legislation, later proposed by Senator Bob Dole , that would have placed several chemicals used in the manufacture of illicit drugs under the Controlled Substances Act . The bill would have required each transaction involving pseudoephedrine to be reported to the government and would have required federal approval of all imports and exports. Fearing this would limit legitimate use of the drug, lobbyists from over-the-counter drug manufacturing associations sought to stop this legislation from moving forward and were successful in exempting from the regulations all chemicals that had been turned into a legal final product, such as Sudafed. [ 166 ]
Before the passage of the Combat Methamphetamine Epidemic Act of 2005 , sales of the drug became increasingly regulated, as DEA regulators and pharmaceutical companies continued to fight for their respective positions. The DEA continued to make greater progress in its attempts to control pseudoephedrine as methamphetamine production skyrocketed, becoming a serious problem in the western United States. When purity dropped, so did the number of people in rehab and people admitted to emergency rooms with methamphetamine in their systems. This reduction in purity was usually short-lived, however, as methamphetamine producers eventually found a way around the new regulations. [ 167 ]
Congress passed the Combat Methamphetamine Epidemic Act of 2005 (CMEA) as an amendment to the renewal of the USA Patriot Act . [ 138 ] Signed into law by President George W. Bush on 6 March 2006, [ 137 ] the act amended 21 U.S.C. § 830 , concerning the sale of pseudoephedrine-containing products. The law mandated two phases, the first needing to be implemented by 8 April 2006, and the second to be completed by 30 September 2006. The first phase dealt primarily with implementing the new buying restrictions based on the amount, while the second phase encompassed the requirements of storage, employee training, and record keeping. [ 168 ] Though the law was mainly directed at pseudoephedrine products, it also applies to all over-the-counter products containing ephedrine, pseudoephedrine, and phenylpropanolamine, their salts, optical isomers, and salts of optical isomers. [ 168 ] Pseudoephedrine was defined as a " scheduled listed chemical product " under 21 U.S.C. § 802 (45(A)). The act imposed a number of requirements on merchants ("regulated sellers") who sell such products.
The requirements were revised in the Methamphetamine Production Prevention Act of 2008, which added further conditions that a purchaser must meet before a regulated seller of scheduled listed chemical products may sell such a product. [ 139 ]
Most states also have laws regulating pseudoephedrine. [ 169 ] [ 170 ] [ 171 ]
The states of Alabama, Arizona, Arkansas, California, Colorado, Delaware, Florida, Georgia, Hawaii (as of May 1, 2009), Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana (as of August 15, 2009), Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, [ 172 ] Nevada, New Jersey, North Carolina, Ohio, Oklahoma, Oregon, Pennsylvania, South Dakota, Tennessee, Texas, Utah, Vermont, Virginia, Washington, West Virginia and Wisconsin have laws requiring pharmacies to sell pseudoephedrine "behind the counter". Though the drug can be purchased without a prescription, states can limit the number of units sold and can collect personal information from purchasers. [ 173 ]
The states of Oregon and Mississippi previously required a prescription for the purchase of products containing pseudoephedrine. However, as of 1 January 2022, these restrictions have been repealed. [ 174 ] [ 175 ] The state of Oregon reduced the number of methamphetamine lab seizures from 448 in 2004 (the final full year before implementation of the prescription only law) [ 176 ] to a new low of 13 in 2009. [ 177 ] The decrease in meth lab incidents in Oregon occurred largely before the prescription-only law took effect, according to a NAMSDL report titled Pseudoephedrine Prescription Laws in Oregon and Mississippi . [ 173 ] The report posits that the decline in meth lab incidents in both states may be due to other factors: "Mexican traffickers may have contributed to the decline in meth labs in Mississippi and Oregon (and surrounding states) as they were able to provide ample supply of equal or greater quality meth at competitive prices". Additionally, similar decreases in meth lab incidents were seen in surrounding states, according to the report, and meth-related deaths in Oregon have dramatically risen since 2007. Some municipalities in Missouri have enacted similar ordinances, including Washington , [ 178 ] Union , [ 179 ] New Haven , [ 180 ] Cape Girardeau [ 181 ] and Ozark . [ 182 ] Certain pharmacies in Terre Haute, Indiana do so as well. [ 183 ]
Another approach, mandated by some state governments, is the use of electronic tracking systems, which require the electronic submission of specified purchaser information by all retailers who sell pseudoephedrine. Thirty-two states now require the National Precursor Log Exchange (NPLEx) to be used for every pseudoephedrine and ephedrine OTC purchase, and ten of the eleven largest pharmacy chains in the US voluntarily contribute all of their similar transactions to NPLEx. These states have seen dramatic results in reducing the number of methamphetamine laboratory seizures. Before the implementation of the system in Tennessee in 2005, methamphetamine laboratory seizures totaled 1,497 in 2004 but were reduced to 955 in 2005, and 589 in 2009. [ 177 ] Kentucky's program was implemented statewide in 2008, and since statewide implementation, the number of laboratory seizures has significantly decreased. [ 177 ] Oklahoma initially experienced success with its tracking system after implementation in 2006, as the number of seizures dropped in that year and again in 2007. In 2008, however, seizures began rising again, and have continued to rise in 2009. [ 177 ]
NPLEx appears to be successful by requiring the real-time submission of transactions, thereby enabling the relevant laws to be enforced at the point of sale. By creating a multi-state database and the ability to compare all transactions quickly, NPLEx enables pharmacies to deny purchases that would be illegal based on gram limits, age, or even to convicted meth offenders in some states. NPLEx also enforces the federal gram limits across state lines, which was impossible with state-operated systems. Access to the records is by law enforcement agencies only, through an online secure portal. [ 184 ]
Pseudoephedrine has been studied in the treatment of snoring . [ 185 ] However, data are inadequate to support this use. [ 185 ]
A study has found that pseudoephedrine can reduce milk production in breastfeeding women. [ 186 ] [ 187 ] This might have been due to suppression of prolactin secretion . [ 187 ] Pseudoephedrine might be useful for lactation suppression . [ 186 ] [ 187 ] | https://en.wikipedia.org/wiki/Pseudoephedrine |
Pseudoextinction (or phyletic extinction ) of a species occurs when all members of the species are extinct , but members of a daughter species remain alive. The term pseudoextinction refers to the evolution of a species into a new form, with the resultant disappearance of the ancestral form. In pseudoextinction, the relationship between ancestor and descendant persists even though the ancestor species no longer exists. [ 1 ]
The classic example is that of the non-avian dinosaurs . [ 2 ] While the non-avian dinosaurs of the Mesozoic died out, their descendants, birds , live on today. Many other families of bird-like dinosaurs also died out as the heirs of the dinosaurs continued to evolve, but because birds continue to thrive in the world today, their ancestors are only pseudoextinct. [ 3 ]
From a taxonomic perspective, pseudoextinction is "within an evolutionary lineage, the disappearance of one taxon caused by the appearance of the next." [ 4 ] The pseudoextinction of a species can be arbitrary, simply resulting from a change in the naming of a species as it evolves from its ancestral form to its descendant form. [ 5 ] Taxonomic pseudoextinction concerns the disappearance of taxa that taxonomists have grouped together. Because such taxa are defined by classification rather than by lineage, their extinction is not reflected through lineage; therefore, unlike evolutionary pseudoextinction, taxonomic pseudoextinction does not alter the evolution of daughter species. [ 6 ] From an evolutionary perspective, pseudoextinction entails the loss of a species as a result of the creation of a new one. As the primordial species evolves into its daughter species, either by anagenesis or cladogenesis, the ancestral species can be subject to extinction. Throughout the process of evolution, a taxon can disappear; in this case, pseudoextinction is considered an evolutionary event. [ 6 ]
From a genetic perspective, pseudoextinction is the "disappearance of a taxon by virtue of its being evolved by anagenesis into another taxon." [ 7 ] As all species must have an ancestor of a previous species, much of evolution is believed to occur through pseudoextinction. However, it is difficult to prove that any particular fossil species is pseudoextinct unless genetic information has been preserved. For example, it is sometimes claimed that the extinct Hyracotherium (an ancient horse-like animal commonly known as an eohippus) is pseudoextinct, rather than extinct, because several species of horse , including the zebra and the donkey, are extant today. However, it is not known, and probably cannot be known, whether modern horses actually descend from members of the genus Hyracotherium , or whether they simply share a common ancestor. [ 8 ]
One proposed mechanism of pseudoextinction is endocrine disruption (changing hormone levels). Additionally, when the primary sex-ratio (male to female ratio of a population) is male-biased, predicted levels of pseudoextinction increase. [ 9 ] Because the variance of the population size increases with time, the probability of pseudoextinction increases with the length of the time horizon used. [ 10 ]
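The time-horizon effect described above can be illustrated with a toy random-walk model of log population size: the longer the horizon, the more likely the walk has crossed any fixed threshold at some point. This is a deliberately simplified illustration, not a model taken from the cited studies:

```python
# Toy illustration: the probability that a drift-free random walk
# (e.g., log population size) has crossed a fixed lower threshold
# grows with the length of the time horizon. Purely illustrative.
import random

def crossed(threshold: float, steps: int) -> bool:
    x = 0.0
    for _ in range(steps):
        x += random.gauss(0, 1)
        if x <= threshold:
            return True
    return False

random.seed(42)
for horizon in (10, 100, 1000):
    hits = sum(crossed(-10.0, horizon) for _ in range(2000))
    print(f"horizon={horizon:5d}: P(crossed threshold) ~ {hits / 2000:.2f}")
```

By the reflection principle the crossing probability here is roughly 2Φ(−10/√t), i.e. about 0.002, 0.32, and 0.75 for horizons of 10, 100, and 1,000 steps, so the simulation simply makes the stated monotone dependence on the time horizon visible.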
Mammal systematist and paleobiologist David Archibald has estimated that as many as 25% of the extinctions recorded in three different early Puercan mammal lineages are pseudoextinctions. [ 11 ] Pseudotermination is an extreme form of pseudoextinction, when a lineage continues as a new species; phylogeny is often difficult to determine in such cases. [ 12 ]
Extirpation, or regional disappearance, can be a stage in pseudoextinction when progressive diachronous range contraction leads either to final extinction through the elimination of the last refuge, or to renewed population growth from this temporal bottleneck. [ 12 ]
The notion of pseudoextinction is sometimes applied to wider taxa than species . For instance, the entire superorder Dinosauria, as traditionally conceived, would have to be considered pseudoextinct, because feathered dinosaurs are considered by the majority of modern palaeontologists to be the ancestors of modern-day birds . Pseudoextinction for such higher taxa appears to be easier to prove. However, pseudoextinct higher taxa are paraphyletic groups, which are rejected as formal taxa in phylogenetic nomenclature ; either all dinosaurs are stem-group birds, or birds are derived dinosaurs, but there is no taxon Dinosauria, acceptable in cladistic taxonomy , that excludes the taxon Aves. Pseudoextinction cannot be applied to the genus or family levels as, "when a species evolves to a new form, causing the pseudoextinction of the ancestral form, the new species is normally assigned the same higher taxa as the ancestor." When a family or genus goes extinct it must be true extinction, because pseudoextinction would mean that at least one member of the family or genus is still extant. [ 13 ]
Pseudoextinction is an event that occurs much more frequently under the assumption of a phyletic gradualism model of evolution, under which speciation is slow, uniform, and gradual. [ 14 ] The majority of speciation would occur through anagenesis under this model, resulting in a majority of species undergoing pseudoextinction. However, the model of punctuated equilibrium is more widely accepted, with the proposal that most species remain in stasis, a state of very little evolutionary change, for a large proportion of the species' lifespan. [ 15 ] This would result in increased cases of speciation through cladogenesis and true extinction, with fewer cases of pseudoextinction. Nearly all species undergo true extinction under the model of punctuated equilibrium. [ 15 ] Charles Darwin proposed the idea of stasis in his book On the Origin of Species , suggesting that species spend the majority of their evolutionary lifespan in the same form, having undergone very little morphological or genetic change. [ 16 ]
Another concept of species on the tree of life is the composite species concept. It sees one species as occupying all internodes of the tree that have the same combination of (morphological, ecological, etc.) characters. Here, a species starts with the acquisition of a character and ends when another change is fixed in its lineage. This process, in which one lineage turns into what is afterwards seen as another species because of the fixation of a novel character, is often called anagenesis. In this situation, a species is considered to end by definition but is not really extinct (it survived, after all, in the form of one descendant species with a different character combination), and so it could also be considered a pseudoextinction. On the other hand, under the composite species concept [ 17 ] a species continues through a lineage split if only one of the two resulting lineages acquires a new character, an event that is sometimes called speciation through "budding". The lineage that has a new character is now a new species, but the other lineage, the one that looks identical to the common ancestor, is considered to be the same species as the ancestor. An example would be a widespread breeding group remaining unchanged while "budding off" a small isolated population that accumulates changes until it cannot interbreed with the others any more. [ 18 ] | https://en.wikipedia.org/wiki/Pseudoextinction
Pseudohypoxia refers to a condition that mimics hypoxia: oxygen is sufficient, yet mitochondrial respiration is impaired due to a deficiency of necessary co-enzymes , such as NAD + and TPP . [ 1 ] [ 2 ] [ 3 ] The increased cytosolic ratio of free NADH/NAD + in cells (more NADH than NAD + ) can be caused by diabetic hyperglycemia and by excessive alcohol consumption. [ 2 ] [ 3 ] Low levels of TPP result from thiamine deficiency . [ 1 ] [ 4 ]
The insufficiency of available NAD + or TPP produces symptoms similar to hypoxia (lack of oxygen) because both are needed by the Krebs cycle for oxidative phosphorylation , with NAD + also needed, to a lesser extent, in anaerobic glycolysis. [ 3 ] Oxidative phosphorylation and glycolysis are vital because these metabolic pathways produce ATP , the molecule that releases the energy necessary for cells to function.
As there is not enough NAD + or TPP for aerobic glycolysis or fatty acid oxidation, anaerobic glycolysis is excessively used, turning glycogen and glucose into pyruvate, and then pyruvate into lactate ( fermentation ). Fermentation also regenerates a small amount of NAD + from NADH, but only enough to keep anaerobic glycolysis going. The excessive use of anaerobic glycolysis disrupts the lactate/pyruvate ratio, causing lactic acidosis . The decreased pyruvate inhibits gluconeogenesis and increases release of fatty acids from adipose tissue. In the liver, the increase of plasma free fatty acids results in increased ketone production (which in excess causes ketoacidosis ). The increased plasma free fatty acids, increased acetyl-CoA (accumulating from reduced Krebs cycle function), and increased NADH all contribute to increased fatty acid synthesis within the liver (which in excess causes fatty liver disease ). [ 3 ]
Pseudohypoxia also leads to hyperuricemia : elevated lactic acid inhibits uric acid secretion by the kidney, and the energy shortage from inhibited oxidative phosphorylation leads to increased turnover of adenosine nucleotides by the myokinase reaction and purine nucleotide cycle . [ 3 ]
Research has shown that declining levels of NAD + during aging cause pseudohypoxia, and that raising nuclear NAD + in old mice reverses pseudohypoxia and metabolic dysfunction, thus reversing the aging process. [ 5 ] It is expected that human NAD trials will begin in 2014. [ 6 ]
Pseudohypoxia is a feature commonly noted in poorly-controlled diabetes . [ 2 ]
In poorly controlled diabetes, as insulin is insufficient, glucose cannot enter the cell and remains high in the blood (hyperglycemia). The polyol pathway converts glucose into fructose, which can then enter the cell without requiring insulin. [ 7 ] [ 8 ] The oxidative damage done to cells in diabetes damages DNA and causes poly (ADP ribose) polymerases or PARPs to be activated, such as PARP1 . Both processes reduce the available NAD + . [ 7 ]
In ethanol catabolism , ethanol is converted into acetate, consuming NAD + . [ 3 ] When alcohol is consumed in small quantities, the NADH/NAD + ratio remains in balance enough for the acetyl-CoA (converted from acetate) to be used for oxidative phosphorylation. However, even moderate amounts of alcohol (1-2 drinks) results in more NADH than NAD + , which inhibits oxidative phosphorylation. In chronic excessive alcohol consumption, the microsomal ethanol oxidizing system (MEOS) is used in addition to alcohol dehydrogenase. [ 3 ]
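The net redox bookkeeping implied by the reaction list below can be sketched in a few lines; the tally counts how many NAD + molecules each pathway reduces to NADH per molecule of substrate, which is why both ethanol catabolism and the polyol pathway push the NADH/NAD + ratio upward (stoichiometries follow the reactions listed below; the bookkeeping itself is illustrative):

```python
# Net NAD+ reduced to NADH per substrate molecule, using the reaction
# stoichiometries listed below in this article. Illustrative only.
pathways = {
    # pathway: NAD+ consumed at each step
    "ethanol via ADH then ALDH": [1, 1],     # ethanol -> acetaldehyde -> acetate
    "ethanol via CYP2E1 then ALDH": [0, 1],  # CYP2E1 step uses NADPH, not NAD+
    "polyol pathway (glucose -> fructose)": [0, 1],  # only the sorbitol
                                                     # dehydrogenase step uses NAD+
}

for name, nad_per_step in pathways.items():
    print(f"{name}: {sum(nad_per_step)} NADH produced per molecule")
```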
D-glucose + NADPH → Sorbitol + NADP + (catalyzed by aldose reductase)
Sorbitol + NAD + → D-fructose + NADH (catalyzed by sorbitol dehydrogenase)
Protein + NAD + → protein-(ADP-ribose) + nicotinamide (catalyzed by PARP1)
Ethanol + NAD + → Acetaldehyde + NADH + H + (catalyzed by alcohol dehydrogenase)
Acetaldehyde + NAD + → Acetate + NADH + H + (catalyzed by aldehyde dehydrogenase)
Ethanol + NADPH + H + + O 2 → Acetaldehyde + NADP + + 2H 2 O (catalyzed by CYP2E1)
Acetaldehyde + NAD + → Acetate + NADH + H + (catalyzed by aldehyde dehydrogenase) | https://en.wikipedia.org/wiki/Pseudohypoxia |
In the theory of partially ordered sets , a pseudoideal is a subset characterized by a bounding operator LU.
LU( A ) is the set of all lower bounds of the set of all upper bounds of the subset A of a partially ordered set .
A subset I of a partially ordered set ( P , ≤) is a Doyle pseudoideal , if the following condition holds:
For every finite subset S of P that has a supremum in P , if S ⊆ I then LU( S ) ⊆ I .
A subset I of a partially ordered set ( P , ≤) is a pseudoideal , if the following condition holds:
For every subset S of P having at most two elements that has a supremum in P , if S ⊆ I then LU( S ) ⊆ I . | https://en.wikipedia.org/wiki/Pseudoideal
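A minimal sketch of the LU operator and the Doyle pseudoideal test on a finite poset. The helper names and the example poset (divisibility on {1, 2, 3, 6}) are chosen purely for illustration:

```python
from itertools import chain, combinations

# Finite poset: elements P and order relation leq(x, y) meaning x <= y.
# Example: divisibility on {1, 2, 3, 6}. Illustrative choice only.
P = [1, 2, 3, 6]
def leq(x, y):
    return y % x == 0

def upper_bounds(A):
    return [u for u in P if all(leq(a, u) for a in A)]

def lower_bounds(B):
    return [l for l in P if all(leq(l, b) for b in B)]

def LU(A):
    # Lower bounds of the set of all upper bounds of A.
    return set(lower_bounds(upper_bounds(A)))

def supremum(S):
    # Least upper bound of S in P, or None if it does not exist.
    ubs = upper_bounds(S)
    for u in ubs:
        if all(leq(u, v) for v in ubs):
            return u
    return None

def is_doyle_pseudoideal(I):
    I = set(I)
    finite_subsets = chain.from_iterable(
        combinations(I, r) for r in range(len(I) + 1))
    return all(supremum(S) is None or LU(S) <= I for S in finite_subsets)

print(LU({2, 3}))                    # {1, 2, 3, 6}: sup{2, 3} = 6
print(is_doyle_pseudoideal({1, 2}))  # True: LU of each bounded subset stays inside
print(is_doyle_pseudoideal({2, 3}))  # False: LU({2, 3}) is not contained in {2, 3}
```

Restricting the test to subsets of at most two elements gives the weaker pseudoideal condition of the second definition.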
Pseudokinases are catalytically-deficient pseudoenzyme [ 1 ] variants of protein kinases that are represented in all kinomes across the kingdoms of life. Pseudokinases have both physiological ( signal transduction ) and pathophysiological functions. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ]
The term pseudokinase was coined in 2002. [ 9 ] Pseudokinases were subsequently sub-classified into different 'classes'. [ 10 ] [ 8 ] [ 11 ] [ 12 ] [ 13 ] Several pseudokinase-containing families are found in the human kinome , including the Tribbles pseudokinases, which are at the interface between kinase and ubiquitin E3 ligase signalling. [ 14 ] [ 15 ] [ 16 ]
The human pseudokinases (and their pseudophosphatase cousins) are implicated in a wide variety of diseases, [ 17 ] [ 18 ] which has made them potential drug targets and antitargets . [ 19 ] [ 20 ] [ 21 ] [ 22 ] Pseudokinases comprise an evolutionary mixture of eukaryotic protein kinase (ePK) and non-ePK-related pseudoenzyme proteins (e.g., FAM20A , which binds ATP [ 23 ] and is a pseudokinase due to a conserved glutamate-to-glutamine swap in the alpha-C helix). [ 24 ] FAM20A is implicated in periodontal disease , and serves to control the catalytic activity of FAM20C , an important physiological casein kinase that controls phosphorylation of proteins in the Golgi apparatus that are destined for secretion, [ 25 ] such as the milk protein casein .
A comprehensive evolutionary analysis confirms that pseudokinases group into multiple subfamilies, and these are found in the annotated kinomes of organisms across the kingdoms of life, including prokaryotes, archaea, and all eukaryotic lineages with an annotated proteome ; these data are searchable in ProKino ( http://vulcan.cs.uga.edu/prokino/about/browser ). [ 26 ] Some pseudokinases can still bind ATP or catalyse an atypical reaction involving migrated catalytic residues; moreover, structural prediction algorithms such as AlphaFold can be used to analyse pseudokinase folding. [ 27 ] Some pseudokinases show species-specific adaptations, including the vertebrate pseudokinase PSKH2, which, like the closely related secretory-pathway Ser/Thr kinase PSKH1, is a client of the HSP90 molecular chaperone system in human cells. [ 28 ] | https://en.wikipedia.org/wiki/Pseudokinase
In genetics , pseudolinkage is a characteristic of a heterozygote for a reciprocal translocation , in which genes located near the translocation breakpoint behave as if they are linked even though they originated on nonhomologous chromosomes .
Linkage is the proximity of two or more markers on a chromosome ; the closer together the markers are, the lower the probability that they will be separated by recombination . Genes are said to be linked when the frequency of parental type progeny exceeds that of recombinant progeny.
During meiosis in a translocation homozygote, chromosomes segregate normally according to Mendelian principles. Even though the genes have been rearranged by the translocation, both haploid sets of chromosomes in the individual have the same rearrangement . As a result, all chromosomes will find a single partner with which to pair at meiosis, and there will be no deleterious consequences for the progeny .
In translocation heterozygote, however, certain patterns of chromosome segregation during meiosis produce genetically unbalanced gametes that at fertilization become deleterious to the zygote . In a translocation heterozygote, the two haploid sets of chromosomes do not carry the same arrangement of genetic information. As a result, during prophase of the first meiotic division, the translocated chromosomes and their normal homologs assume a crosslike configuration in which four chromosomes, rather than the normal two, pair to achieve a maximum of synapsis between similar regions. We denote the chromosomes carrying translocated material with a T and the chromosomes with a normal order of genes with an N. Chromosomes N1 and T1 have homologous centromeres found in wild type on chromosome 1; N2 and T2 have centromeres found in wild type on chromosome 2.
During anaphase of meiosis I, the mechanisms that attach the spindle to the chromosomes in this crosslike configuration still usually ensure the disjunction of homologous centromeres, bringing homologous chromosomes to opposite spindle poles. Depending on the arrangement of the four chromosomes on the metaphase plate , this normal disjunction of homologous produces one of two equally likely patterns of segregation.
In the alternate segregation pattern, the two translocation chromosomes (T1 and T2) go to one pole, while the two normal chromosomes (N1 and N2) move to the opposite pole. Both kinds of gametes resulting from this segregation (T1, T2 and N1, N2) carry the correct haploid complement of genes, and the zygotes formed by union of these gametes with a normal gamete will be viable.
In the adjacent-1 segregation pattern, homologous centromeres disjoin so that T1 and N2 go to one pole, while N1 and T2 go to the opposite pole. Consequently, each gamete contains a large duplication (of the region found in both the normal and the translocated chromosome in that gamete) and a correspondingly large deletion (of the region found in neither of the chromosomes in that gamete), which makes them genetically unbalanced. Zygotes formed by union of these gametes with a normal gamete are usually not viable .
Because of the unusual cruciform pairing configuration in translocation heterozygotes, nondisjunction of homologous centromeres occurs at a measurable but low rate. This nondisjunction produces an adjacent-2 segregation pattern in which the homologous centromeres N1 and T1 go to the same spindle pole while the homologous centromeres N2 and T2 go to the other spindle pole. The resulting genetic imbalances are lethal after fertilization to the zygotes containing them.
Thus, in a translocation heterozygote, only the alternate segregation pattern yields viable progeny in outcrosses; the equally likely adjacent-1 pattern and the rare adjacent-2 pattern do not.
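The consequences of the three patterns can be checked with a short script. The following Python sketch (an illustration; the segment names are hypothetical and not from the source) enumerates the gametes produced by each segregation pattern and tests whether they are genetically balanced:

```python
# A minimal sketch modeling why only alternate segregation yields balanced
# gametes in a reciprocal-translocation heterozygote. Segment names below
# are illustrative assumptions, not biological data.

# Each chromosome is the set of chromosomal segments it carries.
N1 = {"1-centromeric", "1-distal"}   # normal chromosome 1
N2 = {"2-centromeric", "2-distal"}   # normal chromosome 2
T1 = {"1-centromeric", "2-distal"}   # translocated: chr 1 centromere, chr 2 tip
T2 = {"2-centromeric", "1-distal"}   # translocated: chr 2 centromere, chr 1 tip

ALL_SEGMENTS = N1 | N2  # a balanced gamete carries each segment exactly once

def balanced(chrom_a, chrom_b):
    """A gamete is balanced iff its two chromosomes together carry every
    segment exactly once (no duplication, no deletion)."""
    combined = sorted(list(chrom_a) + list(chrom_b))
    return combined == sorted(ALL_SEGMENTS)

patterns = {
    "alternate":  [(N1, N2), (T1, T2)],
    "adjacent-1": [(T1, N2), (N1, T2)],
    "adjacent-2": [(N1, T1), (N2, T2)],  # homologous centromeres co-segregate
}

for name, gametes in patterns.items():
    verdicts = ["balanced" if balanced(a, b) else "unbalanced" for a, b in gametes]
    print(name, "->", verdicts)
# Only the alternate pattern produces two balanced gametes.
```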
Because of this, genes near the translocation breakpoints on the nonhomologous chromosomes participating in a reciprocal translocation exhibit pseudolinkage: They behave as if they are linked. | https://en.wikipedia.org/wiki/Pseudolinkage |
Pseudomathematics , or mathematical crankery , is a mathematics -like activity that does not adhere to the framework of rigor of formal mathematical practice. Common areas of pseudomathematics are solutions of problems proved to be unsolvable or recognized as extremely hard by experts, as well as attempts to apply mathematics to non-quantifiable areas. A person engaging in pseudomathematics is called a pseudomathematician or a pseudomath . [ 1 ] Pseudomathematics has equivalents in other scientific fields, and may overlap with other topics characterized as pseudoscience .
Pseudomathematics often contains mathematical fallacies whose executions are tied to elements of deceit rather than genuine, unsuccessful attempts at tackling a problem. Excessive pursuit of pseudomathematics can result in the practitioner being labelled a crank . Because it is based on non-mathematical principles, pseudomathematics is not related to misguided attempts at genuine proofs . Indeed, such mistakes are common in the careers of amateur mathematicians , some of whom go on to produce celebrated results. [ 1 ]
The topic of mathematical crankery has been extensively studied by mathematician Underwood Dudley , who has written several popular works about mathematical cranks and their ideas.
One common type of approach is claiming to have solved a classical problem that has been proven to be mathematically unsolvable. Common examples of this include the following constructions in Euclidean geometry – using only a compass and straightedge : squaring the circle , doubling the cube , and trisecting an angle .
For more than 2,000 years, many people had tried and failed to find such constructions; in the 19th century they were all proven impossible. [ 5 ] [ 6 ] : 47
Another notable case were "Fermatists", who plagued mathematical institutions with requests to check their proofs of Fermat's Last Theorem . [ 7 ] [ 8 ]
Another common approach is to misapprehend standard mathematical methods, and to insist that the use or knowledge of higher mathematics is somehow cheating or misleading (e.g., the denial of Cantor's diagonal argument [ 9 ] : 40ff or Gödel's incompleteness theorems ). [ 9 ] : 167ff
The term pseudomath was coined by the logician Augustus De Morgan , discoverer of De Morgan's laws , in his A Budget of Paradoxes (1872). De Morgan wrote:
The pseudomath is a person who handles mathematics as the monkey handled the razor. The creature tried to shave himself as he had seen his master do; but, not having any notion of the angle at which the razor was to be held, he cut his own throat. He never tried a second time, poor animal! but the pseudomath keeps on at his work, proclaims himself clean-shaved, and all the rest of the world hairy. [ 10 ]
De Morgan named James Smith as an example of a pseudomath who claimed to have proved that π is exactly 3 + 1 / 8 . [ 1 ] Of Smith, De Morgan wrote: "He is beyond a doubt the ablest head at unreasoning, and the greatest hand at writing it, of all who have tried in our day to attach their names to an error." [ 10 ] The term pseudomath was adopted later by Tobias Dantzig . [ 11 ] Dantzig observed:
With the advent of modern times, there was an unprecedented increase in pseudomathematical activity. During the 18th century, all scientific academies of Europe saw themselves besieged by circle-squarers, trisectors, duplicators, and perpetuum mobile designers, loudly clamoring for recognition of their epoch-making achievements. In the second half of that century, the nuisance had become so unbearable that, one by one, the academies were forced to discontinue the examination of the proposed solutions. [ 11 ]
The term pseudomathematics has been applied to attempts in mental and social sciences to quantify the effects of what is typically considered to be qualitative. [ 12 ] More recently, the same term has been applied to creationist attempts to refute the theory of evolution , by way of spurious arguments purportedly based in probability or complexity theory , such as intelligent design proponent William Dembski 's concept of specified complexity . [ 13 ] [ 14 ] | https://en.wikipedia.org/wiki/Pseudomathematics |
The Pseudomonas Genome Database is a database of genomic annotations for Pseudomonas genomes. [ 1 ]
This Biological database -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pseudomonas_Genome_Database |
Φ6 ( Phi 6) is the best-studied bacteriophage of the virus family Cystoviridae . It infects Pseudomonas bacteria (typically plant-pathogenic P. syringae ). It has a three-part, segmented, double-stranded RNA genome , totalling ~13.5 kb in length. Φ6 and its relatives have a lipid membrane around their nucleocapsid , a rare trait among bacteriophages. It is a lytic phage , though under certain circumstances has been observed to display a delay in lysis which may be described as a "carrier state".
The genome of Φ6 codes for 12 proteins . P1 is a major capsid protein responsible for forming the skeleton of the polymerase complex. In the interior of the shell formed by P1 is the P2 viral replicase and transcriptase protein. The spikes that bind to receptors on the Φ6 virion are formed by the protein P3. P4 is a nucleoside-triphosphatase which is required for genome packaging and transcription. P5 is a lytic enzyme. P6 is a fusogenic envelope protein that anchors the spike protein P3. P7 is a minor capsid protein, P8 is responsible for forming the nucleocapsid surface shell, and P9 is a major envelope protein. [ 3 ] P12 is a non-structural morphogenic protein shown to be a part of envelope assembly. [ 4 ] P10 and P13 are envelope-associated proteins, and P14 is a non-structural protein. [ 3 ]
Φ6 typically attaches to the Type IV pilus of P. syringae with its attachment protein, P3. It is thought that the cell then retracts its pilus, pulling the phage toward the bacterium. Fusion of the viral envelope with the bacterial outer membrane is facilitated by the phage protein, P6. The muralytic ( peptidoglycan -digesting) enzyme, P5, then digests a portion of the cell wall , and the nucleocapsid enters the cell coated with the bacterial outer membrane.
A copy of the sense strand of the large genome segment (6374 bases ) is then synthesized ( transcription ) on the vertices of the capsid , with the RNA-dependent RNA polymerase , P2, and released into the host cell cytosol . The four proteins translated from the large segment spontaneously assemble into procapsids , which then package a large segment sense strand, polymerizing its complement during entry through the P2 polymerase -containing vertices.
While the large segment is being translated (expressed) and synthesized (replicated), the parental phage releases copies of the sense strands of the medium segment (4061 bases) and small segment (2948 bases) into the cytosol . They are translated, and packaged into the procapsids in order: medium then small. The filled capsids are then coated with the nucleocapsid protein P8, and then outer membrane proteins somehow attract the bacterial inner membrane , which then envelops the nucleocapsid.
The lytic protein, P5, is contained between the P8 nucleocapsid shell and the viral envelope. The completed phage progeny remain in the cytosol until sufficient levels of the lytic protein P5 degrade the host cell wall. The cytosol then bursts forth, disrupting the outer membrane, releasing the phage. The bacterium is killed by this lysis .
RNA-dependent RNA polymerases (RdRPs) are critical components in the life cycle of double-stranded RNA (dsRNA) viruses . However, it is not fully understood how these important enzymes function during viral replication. Expression and characterization of the purified recombinant RdRP of Φ6 is the first direct demonstration of RdRP activity catalyzed by a single protein from a dsRNA virus . The recombinant Φ6 RdRP is highly active in vitro , possesses RNA replication and transcription activities, and is capable of using both homologous and heterologous RNA molecules as templates. The crystal structure of the Φ6 polymerase, solved in complex with a number of ligands, provides insights towards understanding the mechanism of primer-independent initiation of RNA-dependent RNA polymerization. This RNA polymerase appears to operate without a sigma factor /subunit. The purified Φ6 RdRP displays processive elongation in vitro and self-assembles along with polymerase complex proteins into subviral particles that are fully functional. [ 5 ]
Φ6 has been studied as a model to understand how segmented RNA viruses package their genomes, its structure has been studied by scientists interested in lipid -containing bacteriophages, and it has been used as a model organism to test evolutionary theory such as Muller's ratchet . Phage Φ6 has been used extensively in additional phage experimental evolution studies. | https://en.wikipedia.org/wiki/Pseudomonas_virus_phi6 |
In mathematics , in the field of topology , a topological space is said to be pseudonormal if given two disjoint closed sets in it, one of which is countable , there are disjoint open sets containing them. [ 1 ] Note the following: every normal space is pseudonormal, and every pseudonormal space is regular .
An example of a pseudonormal Moore space that is not metrizable was given by F. B. Jones ( 1937 ), in connection with the conjecture that all normal Moore spaces are metrizable. [ 1 ] [ 2 ]
This topology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pseudonormal_space |
In physics , a pseudopotential or effective potential is used as an approximation for the simplified description of complex systems. Applications include atomic physics and neutron scattering . The pseudopotential approximation was first introduced by Hans Hellmann in 1934. [ 1 ]
The pseudopotential is an attempt to replace the complicated effects of the motion of the core (i.e. non- valence ) electrons of an atom and its nucleus with an effective potential , or pseudopotential, so that the Schrödinger equation contains a modified effective potential term instead of the Coulombic potential term for core electrons normally found in the Schrödinger equation.
The pseudopotential is an effective potential constructed to replace the atomic all-electron potential (full-potential) such that core states are eliminated and the valence electrons are described by pseudo-wavefunctions with significantly fewer nodes. This allows the pseudo-wavefunctions to be described with far fewer Fourier modes , thus making plane-wave basis sets practical to use. In this approach usually only the chemically active valence electrons are dealt with explicitly, while the core electrons are 'frozen', being considered together with the nuclei as rigid non-polarizable ion cores. It is possible to self-consistently update the pseudopotential with the chemical environment that it is embedded in, having the effect of relaxing the frozen core approximation, although this is rarely done. In codes using local basis functions, such as Gaussians, effective core potentials are often used that freeze only the core electrons.
First-principles pseudopotentials are derived from an atomic reference state, requiring that the pseudo- and all-electron valence eigenstates have the same energies and amplitude (and thus density) outside a chosen core cut-off radius r c {\displaystyle r_{c}} .
Pseudopotentials with larger cut-off radius are said to be softer , that is more rapidly convergent, but at the same time less transferable , that is less accurate to reproduce realistic features in different environments.
Motivation:
Approximations:
Early applications of pseudopotentials to atoms and solids based on attempts to fit atomic spectra achieved only limited success. Solid-state pseudopotentials achieved their present popularity largely because of the successful fits by Walter Harrison to the nearly free electron Fermi surface of aluminum (1958) and by James C. Phillips to the covalent energy gaps of silicon and germanium (1958). Phillips and coworkers (notably Marvin L. Cohen and coworkers) later extended this work to many other semiconductors, in what they called "semiempirical pseudopotentials". [ 4 ]
Norm-conserving and ultrasoft are the two most common forms of pseudopotential used in modern plane-wave electronic structure codes . They allow a basis-set with a significantly lower cut-off (the frequency of the highest Fourier mode) to be used to describe the electron wavefunctions and so allow proper numerical convergence with reasonable computing resources. An alternative would be to augment the basis set around nuclei with atomic-like functions, as is done in LAPW . Norm-conserving pseudopotentials were first proposed by Hamann, Schlüter, and Chiang (HSC) in 1979. [ 5 ] The original HSC norm-conserving pseudopotential takes the following form:

V ^ N L = ∑ l , m | Y l m ⟩ V l m ( r ) ⟨ Y l m | {\displaystyle {\hat {V}}_{\mathrm {NL} }=\sum _{l,m}|Y_{lm}\rangle V_{lm}(r)\langle Y_{lm}|}
where | Y l m ⟩ {\displaystyle |Y_{lm}\rangle } projects a one-particle wavefunction, such as one Kohn-Sham orbital, to the angular momentum labeled by { l , m } {\displaystyle \{l,m\}} . V l m ( r ) {\displaystyle V_{lm}(r)} is the pseudopotential that acts on the projected component. Different angular momentum states then feel different potentials, thus the HSC norm-conserving pseudopotential is non-local, in contrast to local pseudopotential which acts on all one-particle wave-functions in the same way.
Norm-conserving pseudopotentials are constructed to enforce two conditions.
1. Inside the cut-off radius r c {\displaystyle r_{c}} , the norm of each pseudo-wavefunction be identical to its corresponding all-electron wavefunction: [ 6 ]

∫ r < r c | ϕ P S ( r ) | 2 d 3 r = ∫ r < r c | ϕ A E ( r ) | 2 d 3 r {\displaystyle \int _{r<r_{c}}|\phi _{\mathrm {PS} }(\mathbf {r} )|^{2}\,d^{3}r=\int _{r<r_{c}}|\phi _{\mathrm {AE} }(\mathbf {r} )|^{2}\,d^{3}r} ,

where ϕ P S {\displaystyle \phi _{\mathrm {PS} }} and ϕ A E {\displaystyle \phi _{\mathrm {AE} }} denote the pseudo- and all-electron wavefunctions, respectively.
2. All-electron and pseudo wavefunctions are identical outside cut-off radius r c {\displaystyle r_{c}} .
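As a toy numerical illustration of these two conditions (invented model functions, not a real pseudopotential construction), the following Python sketch builds a naive nodeless pseudo-function that matches an "all-electron" radial function outside r_c and checks whether the norms inside r_c agree:

```python
# Toy sketch of the norm-conservation check: compare the charge carried by the
# pseudo- and all-electron radial functions inside the cut-off radius r_c.
import numpy as np

r_c = 2.0
r = np.linspace(1e-6, 10.0, 20001)

# Hypothetical "all-electron" radial function with a node in the core region.
phi_ae = (r - 0.5) * np.exp(-r)

# Naive nodeless pseudo-function: smooth ~ A r^2 inside r_c, chosen only for
# continuity at r_c, and identical to phi_ae outside r_c.
A = phi_ae[np.searchsorted(r, r_c)] / r_c**2
phi_ps = np.where(r < r_c, A * r**2, phi_ae)

mask = r < r_c
norm_ae = np.trapz(phi_ae[mask] ** 2, r[mask])
norm_ps = np.trapz(phi_ps[mask] ** 2, r[mask])
print(f"all-electron norm inside r_c: {norm_ae:.4f}")
print(f"pseudo norm inside r_c:       {norm_ps:.4f}")
# The naive choice violates norm conservation; an actual norm-conserving
# construction (e.g. HSC) adds shape parameters and solves for them so that
# norm_ps == norm_ae while keeping phi_ps nodeless and smooth at r_c.
```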
Ultrasoft pseudopotentials relax the norm-conserving constraint to reduce the necessary basis-set size further at the expense of introducing a generalized eigenvalue problem. [ 7 ] With a non-zero difference in norms we can now define:
and so a normalised eigenstate of the pseudo Hamiltonian now obeys the generalized equation

H ^ | Ψ ⟩ = ε S ^ | Ψ ⟩ {\displaystyle {\hat {H}}|\Psi \rangle =\varepsilon {\hat {S}}|\Psi \rangle } ,
where the operator S ^ {\displaystyle {\hat {S}}} is defined as
where p R , i {\displaystyle p_{\mathbf {R} ,i}} are projectors that form a dual basis with the pseudo reference states inside the cut-off radius, and are zero outside:
A related technique [ 8 ] is the projector augmented wave (PAW) method .
Enrico Fermi introduced a pseudopotential, V {\displaystyle V} , to describe the scattering of a free neutron by a nucleus. [ 9 ] The scattering is assumed to be s -wave scattering, and therefore spherically symmetric. Therefore, the potential is given as a function of radius, r {\displaystyle r} :

V ( r ) = 2 π ℏ 2 m b δ ( r ) {\displaystyle V(\mathbf {r} )={\frac {2\pi \hbar ^{2}}{m}}\,b\,\delta (\mathbf {r} )} ,
where ℏ {\displaystyle \hbar } is the Planck constant divided by 2 π {\displaystyle 2\pi } , m {\displaystyle m} is the mass of the neutron, δ ( r ) {\displaystyle \delta (r)} is the Dirac delta function , b {\displaystyle b} is the bound coherent neutron scattering length , and r = 0 {\displaystyle r=0} the center of mass of the nucleus . [ 10 ] The Fourier transform of this δ {\displaystyle \delta } -function leads to the constant neutron form factor .
James Charles Phillips developed a simplified pseudopotential while at Bell Labs useful for describing silicon and germanium. [ 11 ] | https://en.wikipedia.org/wiki/Pseudopotential |
Pseudoproteases are catalytically-deficient pseudoenzyme variants of proteases that are represented across the kingdoms of life. [ 1 ] [ 2 ]
This molecular biology article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pseudoprotease |
In cryptography , a pseudorandom ensemble is a family of random variables meeting the following criterion:
Let U = { U n } n ∈ N {\displaystyle U=\{U_{n}\}_{n\in \mathbb {N} }} be a uniform ensemble and X = { X n } n ∈ N {\displaystyle X=\{X_{n}\}_{n\in \mathbb {N} }} be an ensemble . The ensemble X {\displaystyle X} is called pseudorandom if X {\displaystyle X} and U {\displaystyle U} are indistinguishable in polynomial time .
This cryptography-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pseudorandom_ensemble |
In theoretical computer science and cryptography , a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed itself is typically a short binary string drawn from the uniform distribution .
Many different classes of statistical tests have been considered in the literature, among them the class of all Boolean circuits of a given size.
It is not known whether good pseudorandom generators for this class exist, but it is known that their existence is in a certain sense equivalent to (unproven) circuit lower bounds in computational complexity theory .
Hence the construction of pseudorandom generators for the class of Boolean circuits of a given size rests on currently unproven hardness assumptions.
Let A = { A : { 0 , 1 } n → { 0 , 1 } ∗ } {\displaystyle {\mathcal {A}}=\{A:\{0,1\}^{n}\to \{0,1\}^{*}\}} be a class of functions.
These functions are the statistical tests that the pseudorandom generator will try to fool, and they are usually algorithms .
Sometimes the statistical tests are also called adversaries or distinguishers . [ 1 ] The notation in the codomain of the functions is the Kleene star .
A function G : { 0 , 1 } ℓ → { 0 , 1 } n {\displaystyle G:\{0,1\}^{\ell }\to \{0,1\}^{n}} with ℓ < n {\displaystyle \ell <n} is a pseudorandom generator against A {\displaystyle {\mathcal {A}}} with bias ε {\displaystyle \varepsilon } if, for every A {\displaystyle A} in A {\displaystyle {\mathcal {A}}} , the statistical distance between the distributions A ( G ( U ℓ ) ) {\displaystyle A(G(U_{\ell }))} and A ( U n ) {\displaystyle A(U_{n})} is at most ε {\displaystyle \varepsilon } , where U k {\displaystyle U_{k}} is the uniform distribution on { 0 , 1 } k {\displaystyle \{0,1\}^{k}} .
The quantity ℓ {\displaystyle \ell } is called the seed length and the quantity n − ℓ {\displaystyle n-\ell } is called the stretch of the pseudorandom generator.
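For intuition, the definition can be checked by brute force at toy scale. The following Python sketch (an illustration; the generator and test are invented toy examples, not constructions from the literature) enumerates all seeds and all n-bit strings to compute the bias of one particular test:

```python
# Brute-force sketch of the PRG definition: for tiny parameters, compute the
# statistical distance a given Boolean test A sees between A(G(U_l)) and A(U_n).
from itertools import product

l, n = 3, 5  # seed length and output length, l < n

def G(seed):
    """Toy expander: output the seed followed by two parity bits (illustrative)."""
    s = list(seed)
    return tuple(s + [s[0] ^ s[1], s[1] ^ s[2]])

def A(x):
    """One statistical test: does the string have more ones than zeros?"""
    return sum(x) * 2 > len(x)

pr_prg = sum(A(G(s)) for s in product([0, 1], repeat=l)) / 2**l  # Pr[A(G(U_l))=1]
pr_uni = sum(A(x) for x in product([0, 1], repeat=n)) / 2**n     # Pr[A(U_n)=1]

print(f"bias of test A against G: {abs(pr_prg - pr_uni):.4f}")
# G fools A only if this bias is small for EVERY test in the class; a single
# test with large bias (e.g., one checking the parity relations above)
# witnesses that G is not pseudorandom against all small circuits.
```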
A pseudorandom generator against a family of adversaries ( A n ) n ∈ N {\displaystyle ({\mathcal {A}}_{n})_{n\in \mathbb {N} }} with bias ε ( n ) {\displaystyle \varepsilon (n)} is a family of pseudorandom generators ( G n ) n ∈ N {\displaystyle (G_{n})_{n\in \mathbb {N} }} , where G n : { 0 , 1 } ℓ ( n ) → { 0 , 1 } n {\displaystyle G_{n}:\{0,1\}^{\ell (n)}\to \{0,1\}^{n}} is a pseudorandom generator against A n {\displaystyle {\mathcal {A}}_{n}} with bias ε ( n ) {\displaystyle \varepsilon (n)} and seed length ℓ ( n ) {\displaystyle \ell (n)} .
In most applications, the family A {\displaystyle {\mathcal {A}}} represents some model of computation or some set of algorithms , and one is interested in designing a pseudorandom generator with small seed length and bias, and such that the output of the generator can be computed by the same sort of algorithm.
In cryptography , the class A {\displaystyle {\mathcal {A}}} usually consists of all circuits of size polynomial in the input and with a single bit output, and one is interested in designing pseudorandom generators that are computable by a polynomial-time algorithm and whose bias is negligible in the circuit size.
These pseudorandom generators are sometimes called cryptographically secure pseudorandom generators (CSPRGs) .
It is not known if cryptographically secure pseudorandom generators exist.
Proving that they exist is difficult since their existence implies P ≠ NP , which is widely believed but a famously open problem.
The existence of cryptographically secure pseudorandom generators is widely believed, because it has been proven that pseudorandom generators can be constructed from any one-way function , and one-way functions are believed to exist. [ 2 ] [ 3 ] Pseudorandom generators are necessary for many applications in cryptography .
The pseudorandom generator theorem shows that cryptographically secure pseudorandom generators exist if and only if one-way functions exist.
Pseudorandom generators have numerous applications in cryptography. For instance, pseudorandom generators provide an efficient analog of one-time pads . It is well known that in order to encrypt a message m in a way that the cipher text provides no information on the plaintext , the key k used must be random over strings of length |m|. Perfectly secure encryption is very costly in terms of key length. Key length can be significantly reduced using a pseudorandom generator if perfect security is replaced by semantic security . Common constructions of stream ciphers are based on pseudorandom generators.
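As a concrete illustration of the stream-cipher idea, the following Python sketch expands a short seed into a keystream by hashing a counter and XORs it with the plaintext. This is a toy: the SHA-256 counter construction here is an assumption chosen for readability, not a vetted or recommended cipher:

```python
# Toy sketch of PRG-based encryption (NOT for real use): expand a short seed
# into a long keystream, then XOR with the plaintext, as a one-time pad would.
import hashlib

def keystream(seed: bytes, length: int) -> bytes:
    """Pseudorandom keystream: hash the seed with a counter (sketch of a PRG)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(seed: bytes, message: bytes) -> bytes:
    ks = keystream(seed, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

seed = b"short-seed"             # in practice: a uniformly random secret key
ct = xor_encrypt(seed, b"attack at dawn")
pt = xor_encrypt(seed, ct)       # XOR with the same keystream is its own inverse
print(ct.hex(), pt)
```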
Pseudorandom generators may also be used to construct symmetric key cryptosystems , where a large number of messages can be safely encrypted under the same key. Such a construction can be based on a pseudorandom function family , which generalizes the notion of a pseudorandom generator.
In the 1980s, simulations in physics began to use pseudorandom generators to produce sequences with billions of elements, and by the late 1980s, evidence had developed that a few common generators gave incorrect results in such cases as phase transition properties of the 3D Ising model and shapes of diffusion-limited aggregates. Then in the 1990s, various idealizations of physics simulations—based on random walks , correlation functions , localization of eigenstates, etc., were used as tests of pseudorandom generators. [ 4 ]
NIST published the SP800-22 randomness tests to assess whether a pseudorandom generator produces high-quality random bits. Yongge Wang showed that NIST testing is not enough to detect weak pseudorandom generators and developed a statistical-distance-based testing technique, LILtest. [ 5 ]
A main application of pseudorandom generators lies in the derandomization of computation that relies on randomness, without corrupting the result of the computation.
Physical computers are deterministic machines, and obtaining true randomness can be a challenge.
Pseudorandom generators can be used to efficiently simulate randomized algorithms using little or no randomness.
In such applications, the class A {\displaystyle {\mathcal {A}}} describes the randomized algorithm or class of randomized algorithms that one wants to simulate, and the goal is to design an "efficiently computable" pseudorandom generator against A {\displaystyle {\mathcal {A}}} whose seed length is as short as possible.
If a full derandomization is desired, a completely deterministic simulation proceeds by replacing the random input to the randomized algorithm with the pseudorandom string produced by the pseudorandom generator.
The simulation does this for all possible seeds and averages the output of the various runs of the randomized algorithm in a suitable way.
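Schematically, the full derandomization looks as follows in Python. Both the generator and the randomized algorithm below are hypothetical stand-ins; the point is the enumeration over all seeds and the majority vote:

```python
# Schematic derandomization: replace the random tape of a randomized decision
# algorithm with G(seed) for every seed, then take the majority of the outputs.
from itertools import product

def G(seed, n):
    """Stand-in PRG: repeat the seed to length n (NOT a real PRG)."""
    return tuple(seed[i % len(seed)] for i in range(n))

def randomized_alg(x, random_bits):
    """Stand-in randomized decision algorithm using its random tape."""
    return (sum(x) + sum(random_bits)) % 2 == 0

def derandomized(x, seed_len, n):
    votes = sum(randomized_alg(x, G(s, n))
                for s in product([0, 1], repeat=seed_len))
    return votes * 2 > 2**seed_len  # majority vote over all 2^seed_len seeds

print(derandomized((1, 0, 1), seed_len=4, n=16))
# If G fools the algorithm (bias < 1/6 against a 2/3-correct algorithm), the
# majority over all seeds equals the correct answer, deterministically; with
# seed length O(log n) the enumeration takes only polynomially many runs.
```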
A fundamental question in computational complexity theory is whether all polynomial time randomized algorithms for decision problems can be deterministically simulated in polynomial time. The existence of such a simulation would imply that BPP = P . To perform such a simulation, it is sufficient to construct pseudorandom generators against the family F of all circuits of size s ( n ) whose inputs have length n and output a single bit, where s ( n ) is an arbitrary polynomial, the seed length of the pseudorandom generator is O(log n ) and its bias is ⅓.
In 1991, Noam Nisan and Avi Wigderson provided a candidate pseudorandom generator with these properties. In 1997 Russell Impagliazzo and Avi Wigderson proved that the construction of Nisan and Wigderson is a pseudorandom generator assuming that there exists a decision problem that can be computed in time 2 O( n ) on inputs of length n but requires circuits of size 2 Ω( n ) .
While unproven assumptions about circuit complexity are needed to prove that the Nisan–Wigderson generator works for time-bounded machines, it is natural to restrict the class of statistical tests further so that we need not rely on such unproven assumptions.
One class for which this has been done is the class of machines whose work space is bounded by O ( log n ) {\displaystyle O(\log n)} .
Using a repeated squaring trick, as in Savitch's theorem , it is easy to show that every probabilistic log-space computation can be simulated in space O ( log 2 n ) {\displaystyle O(\log ^{2}n)} . Noam Nisan (1992) showed that this derandomization can actually be achieved with a pseudorandom generator of seed length O ( log 2 n ) {\displaystyle O(\log ^{2}n)} that fools all O ( log n ) {\displaystyle O(\log n)} -space machines.
Nisan's generator has been used by Saks and Zhou (1999) to show that probabilistic log-space computation can be simulated deterministically in space O ( log 1.5 n ) {\displaystyle O(\log ^{1.5}n)} .
This result was improved by William Hoza in 2021 to space O ( log 1.5 n / log log n ) {\displaystyle O(\log ^{1.5}n/{\sqrt {\log \log n}})} .
When the statistical tests consist of all multivariate linear functions over some finite field F {\displaystyle \mathbb {F} } , one speaks of epsilon-biased generators .
The construction of Naor & Naor (1990) achieves a seed length of ℓ = log n + O ( log ( ϵ − 1 ) ) {\displaystyle \ell =\log n+O(\log(\epsilon ^{-1}))} , which is optimal up to constant factors.
Pseudorandom generators for linear functions often serve as a building block for more complicated pseudorandom generators.
Viola (2008) proves that taking the sum of d {\displaystyle d} small-bias generators fools polynomials of degree d {\displaystyle d} .
The seed length is ℓ = d ⋅ log n + O ( 2 d ⋅ log ( ϵ − 1 ) ) {\displaystyle \ell =d\cdot \log n+O(2^{d}\cdot \log(\epsilon ^{-1}))} .
Another class of statistical tests for which pseudorandom generators have been studied is that of constant-depth circuits that produce a single output bit. [ citation needed ]
The pseudorandom generators used in cryptography and universal algorithmic derandomization have not been proven to exist, although their existence is widely believed [ citation needed ] . Proofs for their existence would imply proofs of lower bounds on the circuit complexity of certain explicit functions. Such circuit lower bounds cannot be proved in the framework of natural proofs assuming the existence of stronger variants of cryptographic pseudorandom generators. [ 6 ] | https://en.wikipedia.org/wiki/Pseudorandom_generator |
In graph theory , a graph is said to be a pseudorandom graph if it obeys certain properties that random graphs obey with high probability . There is no concrete definition of graph pseudorandomness , but there are many reasonable characterizations of pseudorandomness one can consider.
Pseudorandom properties were first formally considered by Andrew Thomason in 1987. [ 1 ] [ 2 ] He defined a condition called "jumbledness": a graph G = ( V , E ) {\displaystyle G=(V,E)} is said to be ( p , α ) {\displaystyle (p,\alpha )} - jumbled for real p {\displaystyle p} and α {\displaystyle \alpha } with 0 < p < 1 ≤ α {\displaystyle 0<p<1\leq \alpha } if

| e ( U ) − p ( | U | 2 ) | ≤ α | U | {\displaystyle \left|e(U)-p{\binom {|U|}{2}}\right|\leq \alpha |U|}
for every subset U {\displaystyle U} of the vertex set V {\displaystyle V} , where e ( U ) {\displaystyle e(U)} is the number of edges among U {\displaystyle U} (equivalently, the number of edges in the subgraph induced by the vertex set U {\displaystyle U} ). It can be shown that the Erdős–Rényi random graph G ( n , p ) {\displaystyle G(n,p)} is almost surely ( p , O ( n p ) ) {\displaystyle (p,O({\sqrt {np}}))} -jumbled. [ 2 ] : 6 However, graphs with less uniformly distributed edges, for example a graph on 2 n {\displaystyle 2n} vertices consisting of an n {\displaystyle n} -vertex complete graph and n {\displaystyle n} completely independent vertices, are not ( p , α ) {\displaystyle (p,\alpha )} -jumbled for any small α {\displaystyle \alpha } , making jumbledness a reasonable quantifier for "random-like" properties of a graph's edge distribution.
Thomason showed that the "jumbled" condition is implied by a simpler-to-check condition, only depending on the codegree of two vertices and not every subset of the vertex set of the graph. Letting codeg ( u , v ) {\displaystyle \operatorname {codeg} (u,v)} be the number of common neighbors of two vertices u {\displaystyle u} and v {\displaystyle v} , Thomason showed that, given a graph G {\displaystyle G} on n {\displaystyle n} vertices with minimum degree n p {\displaystyle np} , if codeg ( u , v ) ≤ n p 2 + ℓ {\displaystyle \operatorname {codeg} (u,v)\leq np^{2}+\ell } for every u {\displaystyle u} and v {\displaystyle v} , then G {\displaystyle G} is ( p , ( p + ℓ ) n ) {\displaystyle \left(p,{\sqrt {(p+\ell )n}}\,\right)} -jumbled. [ 2 ] : 7 This result shows how to check the jumbledness condition algorithmically in polynomial time in the number of vertices, and can be used to show pseudorandomness of specific graphs. [ 2 ] : 7
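The following Python sketch (an illustration using an Erdős–Rényi sample) shows how Thomason's codegree criterion can be evaluated in polynomial time with a single matrix product; the theorem additionally assumes minimum degree np:

```python
# Sketch of Thomason's codegree criterion: estimate jumbledness parameters
# from pairwise codegrees, needing only O(n^2) pairs rather than 2^n subsets.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 0.3
upper = np.triu(rng.random((n, n)) < p, 1)   # upper triangle of G(n, p)
A = (upper | upper.T).astype(int)            # symmetric 0/1 adjacency matrix

codeg = A @ A                                # codeg[u, v] = # common neighbours
iu = np.triu_indices(n, 1)                   # distinct pairs only
ell = np.max(codeg[iu] - n * p**2)           # worst codegree excess over n p^2
print("max codegree excess l:", ell)
print("implied jumbledness alpha <= sqrt((p + l) n) =",
      np.sqrt((p + ell) * n))
# This gives an algorithmic upper bound on the jumbledness of the sample graph
# via Thomason's theorem (which also assumes minimum degree np).
```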
In the spirit of the conditions considered by Thomason and their alternately global and local nature, several weaker conditions were considered by Chung, Graham, and Wilson in 1989: [ 3 ] a graph G {\displaystyle G} on n {\displaystyle n} vertices with edge density p {\displaystyle p} and some ε > 0 {\displaystyle \varepsilon >0} can satisfy each of the following conditions:

- Discrepancy: for any subsets X , Y {\displaystyle X,Y} of the vertex set, | e ( X , Y ) − p | X | | Y | | ≤ ε n 2 {\displaystyle \left|e(X,Y)-p|X||Y|\right|\leq \varepsilon n^{2}} .
- Discrepancy on individual sets: for any subset X {\displaystyle X} of the vertex set, | e ( X ) − p ( | X | 2 ) | ≤ ε n 2 {\displaystyle \left|e(X)-p{\binom {|X|}{2}}\right|\leq \varepsilon n^{2}} .
- Subgraph counting: for every graph H {\displaystyle H} , the number of labeled copies of H {\displaystyle H} in G {\displaystyle G} is within ε n v ( H ) {\displaystyle \varepsilon n^{v(H)}} of p e ( H ) n v ( H ) {\displaystyle p^{e(H)}n^{v(H)}} .
- 4-cycle counting: the number of labeled 4-cycles in G {\displaystyle G} is at most ( p 4 + ε ) n 4 {\displaystyle (p^{4}+\varepsilon )n^{4}} .
- Codegree: ∑ u , v | codeg ( u , v ) − p 2 n | ≤ ε n 3 {\displaystyle \textstyle \sum _{u,v}\left|\operatorname {codeg} (u,v)-p^{2}n\right|\leq \varepsilon n^{3}} .
- Eigenvalue bounding: if λ 1 ≥ λ 2 ≥ ⋯ ≥ λ n {\displaystyle \lambda _{1}\geq \lambda _{2}\geq \cdots \geq \lambda _{n}} are the eigenvalues of the adjacency matrix , then | λ 1 − p n | ≤ ε n {\displaystyle |\lambda _{1}-pn|\leq \varepsilon n} and | λ i | ≤ ε n {\displaystyle |\lambda _{i}|\leq \varepsilon n} for i ≥ 2 {\displaystyle i\geq 2} .
These conditions may all be stated in terms of a sequence of graphs { G n } {\displaystyle \{G_{n}\}} where G n {\displaystyle G_{n}} is on n {\displaystyle n} vertices with ( p + o ( 1 ) ) ( n 2 ) {\displaystyle (p+o(1)){\binom {n}{2}}} edges. For example, the subgraph counting condition becomes that the number of labeled copies of any graph H {\displaystyle H} in G n {\displaystyle G_{n}} is ( p e ( H ) + o ( 1 ) ) n v ( H ) {\displaystyle \left(p^{e(H)}+o(1)\right)n^{v(H)}} as n → ∞ {\displaystyle n\to \infty } , and the discrepancy condition becomes that | e ( X , Y ) − p | X | | Y | | = o ( n 2 ) {\displaystyle \left|e(X,Y)-p|X||Y|\right|=o(n^{2})} , using little-o notation .
A pivotal result about graph pseudorandomness is the Chung–Graham–Wilson theorem, which states that many of the above conditions are equivalent, up to polynomial changes in ε {\displaystyle \varepsilon } [ 3 ] . A sequence of graphs which satisfies those conditions is called quasi-random . It is considered particularly surprising [ 2 ] : 9 that the weak condition of having the "correct" 4-cycle density implies the other seemingly much stronger pseudorandomness conditions. Graphs such as the 4-cycle, the density of which in a sequence of graphs is sufficient to test the quasi-randomness of the sequence, are known as forcing graphs .
Some implications in the Chung–Graham–Wilson theorem are clear by the definitions of the conditions: the discrepancy on individual sets condition is simply the special case of the discrepancy condition for Y = X {\displaystyle Y=X} , and 4-cycle counting is a special case of subgraph counting. In addition, the graph counting lemma, a straightforward generalization of the triangle counting lemma , implies that the discrepancy condition implies subgraph counting.
The fact that 4-cycle counting implies the codegree condition can be proven by a technique similar to the second-moment method. Firstly, the sum of codegrees can be upper-bounded:
Given 4-cycles, the sum of squares of codegrees is bounded:
Therefore, the Cauchy–Schwarz inequality gives
which can be expanded out using our bounds on the first and second moments of codeg {\displaystyle \operatorname {codeg} } to give the desired bound. A proof that the codegree condition implies the discrepancy condition can be done by a similar, albeit trickier, computation involving the Cauchy–Schwarz inequality.
The eigenvalue condition and the 4-cycle condition can be related by noting that the number of labeled 4-cycles in G {\displaystyle G} is, up to o ( 1 ) {\displaystyle o(1)} stemming from degenerate 4-cycles, tr ( A G 4 ) {\displaystyle \operatorname {tr} \left(A_{G}^{4}\right)} , where A G {\displaystyle A_{G}} is the adjacency matrix of G {\displaystyle G} . The two conditions can then be shown to be equivalent by invocation of the Courant–Fischer theorem . [ 3 ]
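This identity is easy to verify numerically. The Python sketch below (an illustration on an Erdős–Rényi sample) computes tr(A⁴) from the eigenvalues and compares it with the expected leading-order 4-cycle count p⁴n⁴:

```python
# Numerical sketch of the eigenvalue <-> 4-cycle connection: tr(A^4), the sum
# of lambda_i^4, counts closed walks of length 4, which up to degenerate walks
# equals the number of labeled 4-cycles, ~ p^4 n^4 for G(n, p).
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 0.4
upper = np.triu(rng.random((n, n)) < p, 1)
A = (upper | upper.T).astype(float)          # symmetric 0/1 adjacency matrix

eig = np.linalg.eigvalsh(A)
closed_walks_4 = np.sum(eig**4)              # equals trace(A^4)
print("trace(A^4)      :", closed_walks_4)
print("p^4 n^4 estimate:", p**4 * n**4)
# Degenerate closed 4-walks (those backtracking through at most 2 edges)
# number only O(n^3), so the two quantities agree to leading order.
```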
The concept of graphs that act like random graphs connects strongly to the concept of graph regularity used in the Szemerédi regularity lemma . For ε > 0 {\displaystyle \varepsilon >0} , a pair of vertex sets X , Y {\displaystyle X,Y} is called ε {\displaystyle \varepsilon } -regular , if for all subsets A ⊂ X , B ⊂ Y {\displaystyle A\subset X,B\subset Y} satisfying | A | ≥ ε | X | , | B | ≥ ε | Y | {\displaystyle |A|\geq \varepsilon |X|,|B|\geq \varepsilon |Y|} , it holds that

| d ( A , B ) − d ( X , Y ) | ≤ ε {\displaystyle |d(A,B)-d(X,Y)|\leq \varepsilon } ,
where d ( X , Y ) {\displaystyle d(X,Y)} denotes the edge density between X {\displaystyle X} and Y {\displaystyle Y} : the number of edges between X {\displaystyle X} and Y {\displaystyle Y} divided by | X | | Y | {\displaystyle |X||Y|} . This condition implies a bipartite analogue of the discrepancy condition, and essentially states that the edges between A {\displaystyle A} and B {\displaystyle B} behave in a "random-like" fashion. In addition, it was shown by Miklós Simonovits and Vera T. Sós in 1991 that a graph satisfies the above weak pseudorandomness conditions used in the Chung–Graham–Wilson theorem if and only if it possesses a Szemerédi partition where nearly all densities are close to the edge density of the whole graph. [ 4 ]
The Chung–Graham–Wilson theorem, specifically the implication of subgraph counting from discrepancy, does not follow for sequences of graphs with edge density approaching 0 {\displaystyle 0} , or, for example, the common case of d {\displaystyle d} - regular graphs on n {\displaystyle n} vertices as n → ∞ {\displaystyle n\to \infty } . The following sparse analogues of the discrepancy and eigenvalue bounding conditions are commonly considered:

- Sparse discrepancy: for any subsets X , Y {\displaystyle X,Y} of the vertex set, | e ( X , Y ) − d n | X | | Y | | ≤ ε d n {\displaystyle \left|e(X,Y)-{\tfrac {d}{n}}|X||Y|\right|\leq \varepsilon dn} .
- Sparse eigenvalue bounding: max ( | λ 2 | , | λ n | ) ≤ ε d {\displaystyle \max \left(\left|\lambda _{2}\right|,\left|\lambda _{n}\right|\right)\leq \varepsilon d} , where d = λ 1 ≥ λ 2 ≥ ⋯ ≥ λ n {\displaystyle d=\lambda _{1}\geq \lambda _{2}\geq \cdots \geq \lambda _{n}} are the eigenvalues of the adjacency matrix .
It is generally true that this eigenvalue condition implies the corresponding discrepancy condition, but the reverse is not true: the disjoint union of a random large d {\displaystyle d} -regular graph and a d + 1 {\displaystyle d+1} -vertex complete graph has two eigenvalues of exactly d {\displaystyle d} but is likely to satisfy the discrepancy property. However, as proven by David Conlon and Yufei Zhao in 2017, slight variants of the discrepancy and eigenvalue conditions for d {\displaystyle d} -regular Cayley graphs are equivalent up to linear scaling in ε {\displaystyle \varepsilon } . [ 5 ] One direction of this follows from the expander mixing lemma , while the other requires the assumption that the graph is a Cayley graph and uses the Grothendieck inequality .
A d {\displaystyle d} -regular graph G {\displaystyle G} on n {\displaystyle n} vertices is called an ( n , d , λ ) {\displaystyle (n,d,\lambda )} -graph if, letting the eigenvalues of the adjacency matrix of G {\displaystyle G} be d = λ 1 ≥ λ 2 ≥ ⋯ ≥ λ n {\displaystyle d=\lambda _{1}\geq \lambda _{2}\geq \cdots \geq \lambda _{n}} , max ( | λ 2 | , | λ n | ) ≤ λ {\displaystyle \max \left(\left|\lambda _{2}\right|,\left|\lambda _{n}\right|\right)\leq \lambda } . The Alon-Boppana bound gives that max ( | λ 2 | , | λ n | ) ≥ 2 d − 1 − o ( 1 ) {\displaystyle \max \left(\left|\lambda _{2}\right|,\left|\lambda _{n}\right|\right)\geq 2{\sqrt {d-1}}-o(1)} (where the o ( 1 ) {\displaystyle o(1)} term is as n → ∞ {\displaystyle n\to \infty } ), and Joel Friedman proved that a random d {\displaystyle d} -regular graph on n {\displaystyle n} vertices is ( n , d , λ ) {\displaystyle (n,d,\lambda )} for λ = 2 d − 1 + o ( 1 ) {\displaystyle \lambda =2{\sqrt {d-1}}+o(1)} . [ 6 ] In this sense, how much λ {\displaystyle \lambda } exceeds 2 d − 1 {\displaystyle 2{\sqrt {d-1}}} is a general measure of the non-randomness of a graph. There are graphs with λ ≤ 2 d − 1 {\displaystyle \lambda \leq 2{\sqrt {d-1}}} , which are termed Ramanujan graphs . They have been studied extensively and there are a number of open problems relating to their existence and commonness.
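The spectral quantity λ can be measured directly. The following Python sketch (an illustration; it assumes the networkx and numpy libraries) computes λ for a random d-regular graph and compares it with 2√(d−1):

```python
# Sketch: measure lambda = max(|lambda_2|, |lambda_n|) for a random d-regular
# graph and compare with the Alon-Boppana / Friedman value 2*sqrt(d-1).
import networkx as nx
import numpy as np

n, d = 1000, 6
G = nx.random_regular_graph(d, n, seed=0)
A = nx.to_numpy_array(G)

eig = np.sort(np.linalg.eigvalsh(A))     # ascending; eig[-1] == d
lam = max(abs(eig[-2]), abs(eig[0]))     # second-largest and smallest eigenvalue
print(f"lambda = {lam:.3f},  2*sqrt(d-1) = {2*np.sqrt(d-1):.3f}")
# Friedman's theorem says lambda = 2*sqrt(d-1) + o(1) almost surely, so the
# two printed numbers should be close; a graph achieving lambda <= 2*sqrt(d-1)
# is a Ramanujan graph.
```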
Given an ( n , d , λ ) {\displaystyle (n,d,\lambda )} graph for small λ {\displaystyle \lambda } , many standard graph-theoretic quantities can be bounded to near what one would expect from a random graph. In particular, the size of λ {\displaystyle \lambda } has a direct effect on subset edge density discrepancies via the expander mixing lemma. Other examples are as follows, letting G {\displaystyle G} be an ( n , d , λ ) {\displaystyle (n,d,\lambda )} graph:
Pseudorandom graphs factor prominently in the proof of the Green–Tao theorem . The theorem is proven by transferring Szemerédi's theorem , the statement that a set of positive integers with positive natural density contains arbitrarily long arithmetic progressions, to the sparse setting (as the primes have natural density 0 {\displaystyle 0} in the integers). The transference to sparse sets requires that the sets behave pseudorandomly, in the sense that corresponding graphs and hypergraphs have the correct subgraph densities for some fixed set of small (hyper)subgraphs. [ 9 ] It is then shown that a suitable superset of the prime numbers, called pseudoprimes, in which the primes are dense obeys these pseudorandomness conditions, completing the proof. | https://en.wikipedia.org/wiki/Pseudorandom_graph |
Pseudorationalism was the label given by economist and philosopher Otto Neurath to a school of thought that he was heavily critical of, which relies on an erroneous vision of the process of thinking and moral action. He made these criticisms throughout many of his writings, but primarily in his 1913 paper "The lost wanderers of Descartes and the auxiliary motive" [ 1 ] and later to a lesser extent in his 1935 "Pseudorationalismus der Falsifikation". [ 2 ]
In "The lost wanderers of Descartes and the auxiliary motive", Neurath writes that "Descartes was of the opinion that, in the field of theory, by forming successive series of statements that one has recognised as defmitely true, one could reach a complete picture of the world". [ 1 ] Moreover, especially in the Principles of Philosophy , Descartes sharply distinguished thinking and action, and rejected the possibility of having provisional rules in the moral and practical field, which is an assumption Neurath rejects.
Descartes' approach can thus be metaphorically described as "lost wanderers," suggesting a solitary and introspective journey towards certainty and knowledge, starting from a point of complete skepticism.
Neurath introduced the parable of the boat in another article, 'Protocol statements' (1932), and not in his 1913 paper. This metaphor describes science and knowledge as a never-ending voyage where we must repair our ship at sea, without ever being able to start anew from the ground up. It emphasizes the collective, provisional, and piecemeal nature of scientific endeavor, contrasting sharply with Descartes' pursuit of an indubitable foundation for knowledge:
There is no way to establish fully secured, neat protocol statements as starting points of the sciences. There is no tabula rasa . We are like sailors who have to rebuild their ship on the open sea, without ever being able to dismantle it in dry-dock and reconstruct it from its best components. [ 3 ]
Thus, pseudorationalism can be understood as a misunderstanding of Descartes' principles, and can lead to a form of cynicism. "Pseudorationalism leads partly to self-deception, partly to hypocrisy". [ 1 ] It is a "belief in powers that regulate existence and foretell the future" and, according to Neurath, is similar to superstition. It can be identified with a special form of naive scientism .
The second paper mentioning pseudorationalism was a review of Popper 's first book, Logik der Forschung ( The Logic of Scientific Discovery ), contrasting this approach with his own view of what rationalism should properly be. [ 4 ] Neurath criticises the cumulative conception of knowledge espoused by Popper. For instance, Popper writes that:
For a theory which has been well corroborated can only be superseded by one of a higher level of universality; that is, by a theory which is better testable and which, in addition, contains the old, well corroborated theory—or at least a good approximation to it. It may be better, therefore, to describe that trend—the advance towards theories of an ever higher level of universality—as ‘quasi-inductive’. [ 5 ]
Neurath's criticism addresses the fact that the successive historical stages of gravitational theory can hardly be understood as approximations of one theory. Here he relies on Pierre Duhem 's work The Aim and Structure of Physical Theory . [ 6 ]
Another aspect of his critique suggests that Popper advocates for a form of traditional absolutism, in which all scientific theories progressively converge towards a comprehensive understanding of the world. [ 7 ] According to Neurath, pseudorationalists, far more successful in the 1930s than before, make the mistake of assuming a complete picture of reality, an impossibility which leads them to further false assumptions.
Consequently, the necessity for calculation in kind ushered in a demand for an alternative approach to practical reasoning. This new approach diverged significantly from the precise, astronomical ideals epitomized by Laplace in science, as well as from the rationalist and individualist principles associated with Descartes in philosophy. He termed this departure "pseudorationality," a concept he later identified within Popper's perspectives.
Rationalism is thus an epistemological and political doctrine, intended to fight these avatars of rationalism. [ 1 ]
Rationalism sees its chief triumph in the clear recognition of the limits of actual insight. I tend to derive the widespread tendency towards pseudo-rationalism from the same unconscious endeavours as the tendency towards superstition. [ 1 ]
Pseudorationalism also refers to Neurath's conception of economics and his criticism of the misusage of the concept of rationality. [ 8 ]
This philosophy of science -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pseudorationalism |
In chemistry , a pseudorotation is a set of intramolecular movements of attached groups (i.e., ligands ) on a highly symmetric molecule , leading to a molecule indistinguishable from the initial one. The International Union of Pure and Applied Chemistry ( IUPAC ) defines a pseudorotation as a " stereoisomerization resulting in a structure that appears to have been produced by rotation of the entire initial molecule", the result of which is a "product" that is "superposable on the initial one, unless different positions are distinguished by substitution, including isotopic substitution ." [ 1 ]
Well-known examples are the intramolecular isomerization of trigonal bipyramidal compounds by the Berry pseudorotation mechanism , and the out-of-plane motions of carbon atoms exhibited by cyclopentane , leading to the interconversions it experiences between its many possible conformers (envelope, twist). [ 2 ] Note that no angular momentum is generated by this motion. [ citation needed ] In these and related examples, a small displacement of the atomic positions leads to a loss of symmetry until the symmetric product re-forms, where these displacements are typically along low-energy pathways. [ citation needed ] The Berry mechanism refers to the facile interconversion of axial and equatorial ligands in MX 5 types of compounds, e.g. D 3h -symmetric PF 5 . [ 1 ] [ 3 ] Finally, in a formal sense, the term pseudorotation is intended to refer exclusively to dynamics in symmetrical molecules, though mechanisms of the same type are invoked for lower symmetry molecules as well. [ citation needed ]
This stereochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pseudorotation |
In linear algebra , a pseudoscalar is a quantity that behaves like a scalar , except that it changes sign under a parity inversion [ 1 ] [ 2 ] while a true scalar does not.
A pseudoscalar, when multiplied by an ordinary vector , becomes a pseudovector (or axial vector ); a similar construction creates the pseudotensor .
A pseudoscalar also results from any scalar product between a pseudovector and an ordinary vector. The prototypical example of a pseudoscalar is the scalar triple product , which can be written as the scalar product between one of the vectors in the triple product and the cross product between the two other vectors, where the latter is a pseudovector.
In physics , a pseudoscalar denotes a physical quantity analogous to a scalar . Both are physical quantities which assume a single value which is invariant under proper rotations . However, under the parity transformation , pseudoscalars flip their signs while scalars do not. As reflections through a plane are the combination of a rotation with the parity transformation, pseudoscalars also change signs under reflections.
One of the most powerful ideas in physics is that physical laws do not change when one changes the coordinate system used to describe these laws. That a pseudoscalar reverses its sign when the coordinate axes are inverted suggests that it is not the best object to describe a physical quantity. In 3D-space, quantities described by a pseudovector are antisymmetric tensors of order 2, which are invariant under inversion. The pseudovector may be a simpler representation of that quantity, but suffers from the change of sign under inversion. Similarly, in 3D-space, the Hodge dual of a scalar is equal to a constant times the 3-dimensional Levi-Civita pseudotensor (or "permutation" pseudotensor); whereas the Hodge dual of a pseudoscalar is an antisymmetric (pure) tensor of order three. The Levi-Civita pseudotensor is a completely antisymmetric pseudotensor of order 3. Since the dual of the pseudoscalar is the product of two "pseudo-quantities", the resulting tensor is a true tensor, and does not change sign upon an inversion of axes. The situation is similar to the situation for pseudovectors and antisymmetric tensors of order 2. The dual of a pseudovector is an antisymmetric tensor of order 2 (and vice versa). The tensor is an invariant physical quantity under a coordinate inversion, while the pseudovector is not invariant.
The situation can be extended to any dimension. Generally in an n -dimensional space the Hodge dual of an order r tensor will be an antisymmetric pseudotensor of order ( n − r ) and vice versa. In particular, in the four-dimensional spacetime of special relativity, a pseudoscalar is the dual of a fourth-order tensor and is proportional to the four-dimensional Levi-Civita pseudotensor .
A pseudoscalar in a geometric algebra is a highest- grade element of the algebra. For example, in two dimensions there are two orthogonal basis vectors, e 1 {\displaystyle e_{1}} , e 2 {\displaystyle e_{2}} , and the associated highest-grade basis element is

e 12 = e 1 e 2 {\displaystyle e_{12}=e_{1}e_{2}} .
So a pseudoscalar is a multiple of e 12 {\displaystyle e_{12}} . The element e 12 {\displaystyle e_{12}} squares to −1 and commutes with all even elements – behaving therefore like the imaginary scalar i {\displaystyle i} in the complex numbers . It is these scalar-like properties which give rise to its name.
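These algebraic facts are easy to verify mechanically. The following Python sketch (a minimal illustration, not a full geometric-algebra library) implements the 2D geometric product on the basis (1, e1, e2, e12) and confirms that e12 squares to −1 and commutes with even elements:

```python
# Minimal 2D geometric algebra: multivectors are coefficients on (1, e1, e2,
# e12); the product below follows from e1*e1 = e2*e2 = 1 and e1*e2 = -e2*e1.
from dataclasses import dataclass

@dataclass
class MV2:
    s: float = 0.0    # scalar part
    e1: float = 0.0   # grade-1 parts
    e2: float = 0.0
    e12: float = 0.0  # pseudoscalar part

    def __mul__(self, o):
        return MV2(
            s   = self.s*o.s   + self.e1*o.e1  + self.e2*o.e2 - self.e12*o.e12,
            e1  = self.s*o.e1  + self.e1*o.s   - self.e2*o.e12 + self.e12*o.e2,
            e2  = self.s*o.e2  + self.e2*o.s   + self.e1*o.e12 - self.e12*o.e1,
            e12 = self.s*o.e12 + self.e12*o.s  + self.e1*o.e2  - self.e2*o.e1,
        )

I = MV2(e12=1.0)
print(I * I)              # MV2(s=-1.0, ...): the pseudoscalar squares to -1
a = MV2(s=2.0, e12=3.0)   # an even element (scalar plus pseudoscalar)
print(I * a, a * I)       # equal: e12 commutes with even elements
```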
In this setting, a pseudoscalar changes sign under a parity inversion, since if

e i ↦ R e i {\displaystyle e_{i}\mapsto Re_{i}}
is a change of basis representing an orthogonal transformation , then

e 12 ↦ ± e 12 {\displaystyle e_{12}\mapsto \pm e_{12}} ,
where the sign depends on the determinant of the transformation. Pseudoscalars in geometric algebra thus correspond to the pseudoscalars in physics. | https://en.wikipedia.org/wiki/Pseudoscalar |
In physics and mathematics , a pseudotensor is usually a quantity that transforms like a tensor under an orientation-preserving coordinate transformation (e.g. a proper rotation ) but additionally changes sign under an orientation-reversing coordinate transformation (e.g., an improper rotation ), which is a transformation that can be expressed as a proper rotation followed by reflection . This is a generalization of a pseudovector . To evaluate the sign of a tensor or pseudotensor, it has to be contracted with as many vectors as its rank, belonging to the space where the rotation is made, while keeping the tensor coordinates unaffected (differently from what one does in the case of a change of basis). Under an improper rotation a pseudotensor and a proper tensor of the same rank will have different signs, depending on the rank being even or odd . Sometimes inversion of the axes is used as an example of an improper rotation to see the behaviour of a pseudotensor, but this works only if the dimension of the vector space is odd; otherwise inversion is a proper rotation without an additional reflection.
There is a second meaning for pseudotensor (and likewise for pseudovector ), restricted to general relativity . Tensors obey strict transformation laws, but pseudotensors in this sense are not so constrained. Consequently, the form of a pseudotensor will, in general, change as the frame of reference is altered. An equation containing pseudotensors which holds in one frame will not necessarily hold in a different frame. This makes pseudotensors of limited relevance because equations in which they appear are not invariant in form.
Mathematical developments in the 1980s have allowed pseudotensors to be understood as sections of jet bundles .
Two quite different mathematical objects are called a pseudotensor in different contexts.
The first context is essentially a tensor multiplied by an extra sign factor, such that the pseudotensor changes sign under reflections when a normal tensor does not. According to one definition, a pseudotensor P of the type ( p , q ) {\displaystyle (p,q)} is a geometric object whose components in an arbitrary basis are enumerated by ( p + q ) {\displaystyle (p+q)} indices and obey the transformation rule P ^ j 1 … j p i 1 … i q = ( − 1 ) A A i 1 k 1 ⋯ A i q k q B l 1 j 1 ⋯ B l p j p P l 1 … l p k 1 … k q {\displaystyle {\hat {P}}_{\,j_{1}\ldots j_{p}}^{i_{1}\ldots i_{q}}=(-1)^{A}A^{i_{1}}{}_{k_{1}}\cdots A^{i_{q}}{}_{k_{q}}B^{l_{1}}{}_{j_{1}}\cdots B^{l_{p}}{}_{j_{p}}P_{l_{1}\ldots l_{p}}^{k_{1}\ldots k_{q}}} under a change of basis. [ 1 ] [ 2 ] [ 3 ]
Here P ^ j 1 … j p i 1 … i q , P l 1 … l p k 1 … k q {\displaystyle {\hat {P}}_{\,j_{1}\ldots j_{p}}^{i_{1}\ldots i_{q}},P_{l_{1}\ldots l_{p}}^{k_{1}\ldots k_{q}}} are the components of the pseudotensor in the new and old bases, respectively, A i q k q {\displaystyle A^{i_{q}}{}_{k_{q}}} is the transition matrix for the contravariant indices, B l p j p {\displaystyle B^{l_{p}}{}_{j_{p}}} is the transition matrix for the covariant indices, and ( − 1 ) A = s i g n ( det ( A i q k q ) ) = ± 1 . {\displaystyle (-1)^{A}=\mathrm {sign} \left(\det \left(A^{i_{q}}{}_{k_{q}}\right)\right)=\pm {1}.} This transformation rule differs from the rule for an ordinary tensor only by the presence of the factor ( − 1 ) A . {\displaystyle (-1)^{A}.}
The second context where the word "pseudotensor" is used is general relativity . In that theory, one cannot describe the energy and momentum of the gravitational field by an energy–momentum tensor. Instead, one introduces objects that behave as tensors only with respect to restricted coordinate transformations. Strictly speaking, such objects are not tensors at all. A famous example of such a pseudotensor is the Landau–Lifshitz pseudotensor .
On non-orientable manifolds , one cannot define a volume form globally due to the non-orientability, but one can define a volume element , which is formally a density , and may also be called a pseudo-volume form , due to the additional sign twist (tensoring with the sign bundle). The volume element is a pseudotensor density according to the first definition.
A change of variables in multi-dimensional integration may be achieved through the incorporation of a factor of the absolute value of the determinant of the Jacobian matrix . The use of the absolute value introduces a sign change for improper coordinate transformations to compensate for the convention of keeping integration (volume) element positive; as such, an integrand is an example of a pseudotensor density according to the first definition.
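A small numerical example makes the role of the absolute value concrete. The following Python sketch computes the area of the unit disk with the usual polar map and with an orientation-reversing (mirrored) polar map; taking |det J| yields the correct positive area in both cases:

```python
# The substitution rule uses |det J|, so an orientation-reversing change of
# variables still gives a positive volume. We compute the unit-disk area with
# the polar map and with a mirrored (improper) polar map.
import numpy as np

r = np.linspace(0.0, 1.0, 1001)
t = np.linspace(0.0, 2.0 * np.pi, 1001)
R, T = np.meshgrid(r, t)                 # grid over (r, t)

det_polar = R                            # (x, y) = (r cos t, r sin t)
det_mirror = -R                          # (x, y) = (r cos t, -r sin t), reflected

for name, detJ in [("polar", det_polar), ("mirrored", det_mirror)]:
    signed = np.trapz(np.trapz(detJ, r, axis=1), t)
    absval = np.trapz(np.trapz(np.abs(detJ), r, axis=1), t)
    print(f"{name}: signed = {signed:+.4f}, with |det J| = {absval:.4f}")
# Both |det J| integrals equal pi; the signed mirrored integral is -pi, which
# is why the absolute value appears in the change-of-variables formula.
```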
The Christoffel symbols of an affine connection on a manifold can be thought of as the correction terms to the partial derivatives of a coordinate expression of a vector field with respect to the coordinates to render it the vector field's covariant derivative. While the affine connection itself doesn't depend on the choice of coordinates, its Christoffel symbols do, making them a pseudotensor quantity according to the second definition. | https://en.wikipedia.org/wiki/Pseudotensor |
Pseudotropine ( 3β-tropanol , ψ-tropine , 3-pseudotropanol , or PTO ) is a derivative of tropane and an isomer of tropine . Pseudotropine can be found in the Coca plant along with several other alkaloids. [ 1 ]
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pseudotropine |
In physics and mathematics , a pseudovector (or axial vector ) [ 2 ] is a quantity that transforms like a vector under continuous rigid transformations such as rotations or translations , but which does not transform like a vector under certain discontinuous rigid transformations such as reflections . For example, the angular velocity of a rotating object is a pseudovector because, when the object is reflected in a mirror, the reflected image rotates in such a way so that its angular velocity "vector" is not the mirror image of the angular velocity "vector" of the original object; for true vectors (also known as polar vectors ), the reflection "vector" and the original "vector" must be mirror images. [ 3 ]
One example of a pseudovector is the normal to an oriented plane . An oriented plane can be defined by two non-parallel vectors, a and b , [ 4 ] that span the plane. The vector a × b is a normal to the plane (there are two normals, one on each side – the right-hand rule will determine which), and is a pseudovector. This has consequences in computer graphics, where it has to be considered when transforming surface normals .
In three dimensions, the curl of a polar vector field at a point and the cross product of two polar vectors are pseudovectors. [ 5 ]
A number of quantities in physics behave as pseudovectors rather than polar vectors, including magnetic field and torque . In mathematics, in three dimensions, pseudovectors are equivalent to bivectors , from which the transformation rules of pseudovectors can be derived. More generally, in n -dimensional geometric algebra , pseudovectors are the elements of the algebra with dimension n − 1 , written ⋀ n −1 R n . The label "pseudo-" can be further generalized to pseudoscalars and pseudotensors , both of which gain an extra sign-flip under improper rotations compared to a true scalar or tensor .
Physical examples of pseudovectors include angular velocity , [ 4 ] angular acceleration , angular momentum , [ 4 ] torque , [ 4 ] magnetic field , [ 4 ] and magnetic dipole moment .
Consider the pseudovector angular momentum L = Σ( r × p ) . Driving in a car, and looking forward, each of the wheels has an angular momentum vector pointing to the left (by the right-hand rule ). If the world is reflected in a mirror which switches the left and right side of the car, the "reflection" of this angular momentum "vector" (viewed as an ordinary vector) points to the right, but the actual angular momentum vector of the wheel (which is still turning forward in the reflection) still points to the left (by the right-hand rule ), corresponding to the extra sign flip in the reflection of a pseudovector.
The distinction between polar vectors and pseudovectors becomes important in understanding the effect of symmetry on the solution to physical systems . Consider an electric current loop in the z = 0 plane, which generates a magnetic field oriented in the z direction inside the loop. This system is symmetric (invariant) under mirror reflections through this plane, with the magnetic field unchanged by the reflection. But reflecting the magnetic field as a vector through that plane would be expected to reverse it; this expectation is corrected by realizing that the magnetic field is a pseudovector, whose extra sign flip leaves it unchanged.
In physics, pseudovectors are generally the result of taking the cross product of two polar vectors or the curl of a polar vector field. The cross product and curl are defined, by convention, according to the right hand rule, but could have been just as easily defined in terms of a left-hand rule. The entire body of physics that deals with (right-handed) pseudovectors and the right hand rule could be replaced by using (left-handed) pseudovectors and the left hand rule without issue. The (left) pseudovectors so defined would be opposite in direction to those defined by the right-hand rule.
While vector relationships in physics can be expressed in a coordinate-free manner, a coordinate system is required in order to express vectors and pseudovectors as numerical quantities. Vectors are represented as ordered triplets of numbers, e.g. a = ( a_x , a_y , a_z ), and pseudovectors are represented in this form too. When transforming between left and right-handed coordinate systems, representations of pseudovectors do not transform as vectors, and treating them as vector representations will cause an incorrect sign change, so that care must be taken to keep track of which ordered triplets represent vectors, and which represent pseudovectors. This problem does not exist if the cross product of two vectors is replaced by the exterior product of the two vectors, which yields a bivector which is a 2nd rank tensor and is represented by a 3×3 matrix. This representation of the 2-tensor transforms correctly between any two coordinate systems, independently of their handedness.
The definition of a "vector" in physics (including both polar vectors and pseudovectors) is more specific than the mathematical definition of "vector" (namely, any element of an abstract vector space ). Under the physics definition, a "vector" is required to have components that "transform" in a certain way under a proper rotation : In particular, if everything in the universe were rotated, the vector would rotate in exactly the same way. (The coordinate system is fixed in this discussion; in other words this is the perspective of active transformations .) Mathematically, if everything in the universe undergoes a rotation described by a rotation matrix R , so that a displacement vector x is transformed to x ′ = R x , then any "vector" v must be similarly transformed to v ′ = R v . This important requirement is what distinguishes a vector (which might be composed of, for example, the x -, y -, and z -components of velocity ) from any other triplet of physical quantities. (For example, the length, width, and height of a rectangular box cannot be considered the three components of a vector, since rotating the box does not appropriately transform these three components.)
(In the language of differential geometry , this requirement is equivalent to defining a vector to be a tensor of contravariant rank one. In this more general framework, higher rank tensors can also have arbitrarily many and mixed covariant and contravariant ranks at the same time, denoted by raised and lowered indices within the Einstein summation convention .)
A basic and rather concrete example is that of row and column vectors under the usual matrix multiplication operator: in one order they yield the dot product, which is just a scalar and as such a rank zero tensor, while in the other they yield the dyadic product , which is a matrix representing a rank two mixed tensor, with one contravariant and one covariant index. As such, the noncommutativity of standard matrix algebra can be used to keep track of the distinction between covariant and contravariant vectors. This is in fact how the bookkeeping was done before the more formal and generalised tensor notation came to be. It still manifests itself in how the basis vectors of general tensor spaces are exhibited for practical manipulation.
The discussion so far only relates to proper rotations, i.e. rotations about an axis. However, one can also consider improper rotations , i.e. a mirror-reflection possibly followed by a proper rotation. (One example of an improper rotation is inversion through a point in 3-dimensional space.) Suppose everything in the universe undergoes an improper rotation described by the improper rotation matrix R , so that a position vector x is transformed to x ′ = R x . If the vector v is a polar vector, it will be transformed to v ′ = R v . If it is a pseudovector, it will be transformed to v ′ = − R v .
The transformation rules for polar vectors and pseudovectors can be compactly stated as v ′ = R v for polar vectors and v ′ = (det R ) R v for pseudovectors,
where the symbols are as described above, and the rotation matrix R can be either proper or improper. The symbol det denotes determinant ; this formula works because the determinants of proper and improper rotation matrices are +1 and −1, respectively.
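As a quick numerical check, the following minimal NumPy sketch (illustrative values) verifies the compact rule for a pseudovector constructed as a cross product, under both a proper and an improper rotation:

```python
# Minimal sketch: verify v' = det(R) R v for cross-product pseudovectors.
import numpy as np

rng = np.random.default_rng(0)
v1, v2 = rng.normal(size=3), rng.normal(size=3)
pseudo = np.cross(v1, v2)

theta = 0.7
proper = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])   # det = +1
improper = np.diag([1.0, 1.0, -1.0])                        # mirror, det = -1

for R in (proper, improper):
    transformed = np.cross(R @ v1, R @ v2)   # pseudovector in the rotated world
    rule = np.linalg.det(R) * (R @ pseudo)   # compact transformation rule
    assert np.allclose(transformed, rule)
```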
Suppose v 1 and v 2 are known pseudovectors, and v 3 is defined to be their sum, v 3 = v 1 + v 2 . If the universe is transformed by a rotation matrix R , then v 3 is transformed to v 3 ′ = (det R ) R v 1 + (det R ) R v 2 = (det R ) R ( v 1 + v 2 ) = (det R ) R v 3 .
So v 3 is also a pseudovector. Similarly one can show that the difference between two pseudovectors is a pseudovector, that the sum or difference of two polar vectors is a polar vector, that multiplying a polar vector by any real number yields another polar vector, and that multiplying a pseudovector by any real number yields another pseudovector.
On the other hand, suppose v 1 is known to be a polar vector, v 2 is known to be a pseudovector, and v 3 is defined to be their sum, v 3 = v 1 + v 2 . If the universe is transformed by an improper rotation matrix R , then v 3 is transformed to v 3 ′ = R v 1 + (det R ) R v 2 = R ( v 1 − v 2 ), using det R = −1.
Therefore, v 3 is neither a polar vector nor a pseudovector (although it is still a vector, by the physics definition). For an improper rotation, v 3 does not in general even keep the same magnitude: | v 3 ′ | = | v 1 − v 2 |, which in general differs from | v 1 + v 2 | = | v 3 |.
If the magnitude of v 3 were to describe a measurable physical quantity, that would mean that the laws of physics would not appear the same if the universe was viewed in a mirror. In fact, this is exactly what happens in the weak interaction : Certain radioactive decays treat "left" and "right" differently, a phenomenon which can be traced to the summation of a polar vector with a pseudovector in the underlying theory. (See parity violation .)
For a rotation matrix R , either proper or improper, the following mathematical equation is always true: ( R v 1 ) × ( R v 2 ) = (det R ) R ( v 1 × v 2 ),
where v 1 and v 2 are any three-dimensional vectors. (This equation can be proven either through a geometric argument or through an algebraic calculation.) Similarly, if v is any vector field, the following equation is always true: ∇′ × ( R v ) = (det R ) R (∇ × v ), with the curl on the left taken with respect to the transformed coordinates,
where ∇ × denotes the curl operation from vector calculus.
Suppose v 1 and v 2 are known polar vectors, and v 3 is defined to be their cross product, v 3 = v 1 × v 2 . If the universe is transformed by a rotation matrix R , then v 3 is transformed to v 3 ′ = ( R v 1 ) × ( R v 2 ) = (det R ) R ( v 1 × v 2 ) = (det R ) R v 3 .
So v 3 is a pseudovector. Likewise, one can show that the cross product of two pseudovectors is a pseudovector and the cross product of a polar vector with a pseudovector is a polar vector. In conclusion, we have: polar vector × polar vector = pseudovector; pseudovector × pseudovector = pseudovector; polar vector × pseudovector = polar vector (and vice versa).
This is isomorphic to addition modulo 2, where "polar" corresponds to 1 and "pseudo" to 0.
Similarly, if v 1 is any known polar vector field and v 2 is defined to be its curl v 2 = ∇ × v 1 , then if the universe is transformed by the rotation matrix R , v 2 is transformed to v 2 ′ = ∇′ × ( R v 1 ) = (det R ) R (∇ × v 1 ) = (det R ) R v 2 .
So v 2 is a pseudovector field. Likewise, one can show that the curl of a pseudovector field is a polar vector field. In conclusion, we have: curl of a polar vector field = pseudovector field; curl of a pseudovector field = polar vector field.
This is like the above rule for cross-products if one interprets the del operator ∇ as a polar vector.
From the definition, it is clear that linear displacement is a polar vector. Linear velocity is linear displacement (a polar vector) divided by time (a scalar), so is also a polar vector. Linear momentum is linear velocity (a polar vector) times mass (a scalar), so is a polar vector. Angular momentum (in a point object) is the cross product of linear displacement (a polar vector) and linear momentum (a polar vector), and is therefore a pseudovector. Torque is angular momentum (a pseudovector) divided by time (a scalar), so is also a pseudovector. Angular velocity (in a rotating body or fluid) is one-half times the curl of linear velocity (a polar vector field), and thus is a pseudovector. Continuing this way, it is straightforward to classify any of the common vectors in physics as either a pseudovector or a polar vector. (There are also parity-violating vectors in the theory of the weak interaction , which are neither polar vectors nor pseudovectors; however, these occur very rarely in physics.)
Above, pseudovectors have been discussed using active transformations . An alternate approach, more along the lines of passive transformations , is to keep the universe fixed, but switch " right-hand rule " with "left-hand rule" everywhere in math and physics, including in the definition of the cross product and the curl . Any polar vector (e.g., a translation vector) would be unchanged, but pseudovectors (e.g., the magnetic field at a point) would switch signs. Nevertheless, there would be no physical consequences, apart from in the parity-violating phenomena such as certain radioactive decays . [ 6 ]
One way to formalize pseudovectors is as follows: if V is an n - dimensional vector space, then a pseudovector of V is an element of the ( n − 1)-th exterior power of V : ⋀ n −1 ( V ). The pseudovectors of V form a vector space with the same dimension as V .
This definition is not equivalent to that requiring a sign flip under improper rotations, but it is general to all vector spaces. In particular, when n is even , such a pseudovector does not experience a sign flip, and when the characteristic of the underlying field of V is 2, a sign flip has no effect. Otherwise, the definitions are equivalent, though it should be borne in mind that without additional structure (specifically, either a volume form or an orientation ), there is no natural identification of ⋀ n −1 ( V ) with V .
Another way to formalize them is by considering them as elements of a representation space for O(n). Vectors transform in the fundamental representation of O(n), with data given by (ℝ^n, ρ_fund, O(n)), so that for any matrix R in O(n), one has ρ_fund(R) = R. Pseudovectors transform in a pseudofundamental representation (ℝ^n, ρ_pseudo, O(n)), with ρ_pseudo(R) = det(R) R. Another way to view this homomorphism for n odd is that in this case O(n) ≅ SO(n) × Z 2 . Then ρ_pseudo is a direct product of group homomorphisms: it is the direct product of the fundamental homomorphism on SO(n) with the trivial homomorphism on Z 2 .
In geometric algebra the basic elements are vectors, and these are used to build a hierarchy of elements using the definitions of products in this algebra. In particular, the algebra builds pseudovectors from vectors.
The basic multiplication in the geometric algebra is the geometric product , denoted by simply juxtaposing two vectors as in ab . This product is expressed as: a b = a · b + a ∧ b ,
where the leading term is the customary vector dot product and the second term is called the wedge product or exterior product . Using the postulates of the algebra, all combinations of dot and wedge products can be evaluated. A terminology to describe the various combinations is provided. For example, a multivector is a summation of k -fold wedge products of various k -values. A k -fold wedge product also is referred to as a k -blade .
In the present context the pseudovector is one of these combinations. This term is attached to a different multivector depending upon the dimensions of the space (that is, the number of linearly independent vectors in the space). In three dimensions, the most general 2-blade or bivector can be expressed as the wedge product of two vectors and is a pseudovector. [ 7 ] In four dimensions, however, the pseudovectors are trivectors . [ 8 ] In general, it is a ( n − 1) -blade, where n is the dimension of the space and algebra. [ 9 ] An n -dimensional space has n basis vectors and also n basis pseudovectors. Each basis pseudovector is formed from the outer (wedge) product of all but one of the n basis vectors. For instance, in four dimensions where the basis vectors are taken to be { e 1 , e 2 , e 3 , e 4 }, the pseudovectors can be written as: { e 234 , e 134 , e 124 , e 123 }.
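Since each basis pseudovector omits exactly one of the n basis vectors, they can be enumerated mechanically. A minimal sketch (the labels e12, e234, etc. follow the convention above; the function name is invented for illustration):

```python
# Minimal sketch: label the n basis pseudovectors of an n-dimensional algebra.
from itertools import combinations

def basis_pseudovectors(n: int) -> list[str]:
    # Each (n-1)-element subset of the basis indices omits exactly one
    # basis vector and labels one basis pseudovector.
    return ["e" + "".join(map(str, combo))
            for combo in combinations(range(1, n + 1), n - 1)]

print(basis_pseudovectors(3))  # ['e12', 'e13', 'e23']
print(basis_pseudovectors(4))  # ['e123', 'e124', 'e134', 'e234']
```

For n = 4 this reproduces the set { e 234 , e 134 , e 124 , e 123 } quoted above, up to ordering.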
The transformation properties of the pseudovector in three dimensions have been compared to those of the vector cross product by Baylis. [ 10 ] He says: "The terms axial vector and pseudovector are often treated as synonymous, but it is quite useful to be able to distinguish a bivector from its dual." To paraphrase Baylis: Given two polar vectors (that is, true vectors) a and b in three dimensions, the cross product composed from a and b is the vector normal to their plane given by c = a × b . Given a set of right-handed orthonormal basis vectors { e ℓ } , the cross product is expressed in terms of its components as: a × b = (a²b³ − a³b²) e 1 + (a³b¹ − a¹b³) e 2 + (a¹b² − a²b¹) e 3 ,
where superscripts label vector components. On the other hand, the plane of the two vectors is represented by the exterior product or wedge product, denoted by a ∧ b . In this context of geometric algebra, this bivector is called a pseudovector, and is the Hodge dual of the cross product. [ 11 ] The dual of e 1 is introduced as e 23 ≡ e 2 e 3 = e 2 ∧ e 3 , and so forth. That is, the dual of e 1 is the subspace perpendicular to e 1 , namely the subspace spanned by e 2 and e 3 . With this understanding, [ 12 ] a ∧ b = (a²b³ − a³b²) e 23 + (a³b¹ − a¹b³) e 31 + (a¹b² − a²b¹) e 12 .
For details, see Hodge star operator § Three dimensions . The cross product and wedge product are related by: a ∧ b = i ( a × b ),
where i = e 1 ∧ e 2 ∧ e 3 is called the unit pseudoscalar . [ 13 ] [ 14 ] It has the property: [ 15 ] i² = −1.
Using the above relations, it is seen that if the vectors a and b are inverted by changing the signs of their components while leaving the basis vectors fixed, both the pseudovector and the cross product are invariant. On the other hand, if the components are fixed and the basis vectors e ℓ are inverted, then the pseudovector is invariant, but the cross product changes sign. This behavior of cross products is consistent with their definition as vector-like elements that change sign under transformation from a right-handed to a left-handed coordinate system, unlike polar vectors.
As an aside, it may be noted that not all authors in the field of geometric algebra use the term pseudovector, and some authors follow the terminology that does not distinguish between the pseudovector and the cross product. [ 16 ] However, because the cross product does not generalize to dimensions other than three, [ 17 ] the notion of pseudovector based upon the cross product also cannot be extended to a space of any other number of dimensions. The pseudovector as an ( n − 1) -blade in an n -dimensional space is not restricted in this way.
Another important note is that pseudovectors, despite their name, are "vectors" in the sense of being elements of a vector space . The idea that "a pseudovector is different from a vector" is only true with a different and more specific definition of the term "vector" as discussed above. | https://en.wikipedia.org/wiki/Pseudovector |
PsiQuantum, Corp. (formerly PsiQ) [ 3 ] is an American quantum computing company based in Palo Alto , California . It is developing a general-purpose silicon photonic quantum computer. [ 4 ] [ 5 ] [ 6 ]
PsiQuantum was co-founded in 2016 by Jeremy O'Brien , Terry Rudolph , Peter Shadbolt, and Mark Thompson. They are or were professors and researchers at the University of Bristol and Imperial College London , England . [ 7 ] [ 8 ]
As of July 2021, PsiQuantum was reported to have raised $665 million from investors at a valuation of $3.15 billion. [ 1 ] Its investors include BlackRock , Baillie Gifford , and Microsoft 's venture fund M12 .
In 2022, PsiQuantum and GlobalFoundries received U.S. federal funding for quantum computer research and development. [ 9 ] [ 10 ] [ 11 ] PsiQuantum also entered into a collaboration with the Air Force Research Laboratory . [ 12 ] [ 13 ]
In 2023, DARPA selected PsiQuantum as one of the companies to receive funding under its Underexplored Systems for Utility-Scale Quantum Computing (US2QC) program. [ 14 ] The UK Government also provided funding for PsiQuantum to open a test facility for cryogenic testing in the UK. [ 15 ] [ 16 ]
In 2024, the Australian Commonwealth and Queensland governments announced a A$ 940 million investment into the company via share equity (US$250 million) and loans [ 2 ] to build the world's first utility-scale, fault-tolerant quantum computer in Brisbane, Queensland. PsiQuantum stated that it had an aggressive plan to have the system operational by the end of 2027. [ 17 ] In July 2024, PsiQuantum announced it had signed a memorandum of understanding (MOU) with five Queensland universities (The University of Queensland, Griffith University, Queensland University of Technology, University of Southern Queensland and the University of the Sunshine Coast) to develop educational programs in quantum fields and collaborate on research projects. [ 18 ]
Later in July 2024, PsiQuantum announced it would be partnering with the State of Illinois, Cook County, and the City of Chicago to anchor Governor JB Pritzker's new Illinois Quantum and Microelectronics Park. [ 19 ]
In March 2025, PsiQuantum raised an additional $750 million, at a valuation of $6 billion. [ 20 ] | https://en.wikipedia.org/wiki/PsiQuantum |
The Psion Wavefinder was a computer peripheral for receiving digital audio broadcasting radio signals, made by Psion . It attached via USB to a personal computer, and had no loudspeakers or controls of its own, with only a flashing light on the device. Psion hoped it would become a design classic. [ 1 ]
The Wavefinder was released on 17 October 2000, [ 2 ] [ 3 ] and gave access to both DAB audio and DAB Data services. The WaveFinder software had the ability to receive the 'Broadcast Website' service which some DAB broadcasters experimented with during the early days of digital radio - displaying HTML content provided by the broadcaster in the users' web browsers. The device initially retailed for £299 (at the time the cheapest digital radio on the UK market) [ 4 ] and was bundled with new PCs sold by Dixons, [ 3 ] but it was quickly discounted, retailing at £49.99 by that December, [ 5 ] and was no longer produced by 2002. [ 6 ] Psion ended support in 2004. The Wavefinder had frequent software problems, and an unofficial patch called WaveLite was released in 2001. [ 2 ] The Wavefinder had widely reported problems with the USB drivers in Windows XP Service Pack 2.
The Wavefinder works with Windows XP SP3, and there are reports that it works in Vista. It is not compatible with Macintosh computers, except perhaps through the use of a PC emulator or virtualisation solution. There is a driver and command line application for Linux , OpenDAB, [ 7 ] that describes itself as "experimental". The unregulated power supply shipped with the Wavefinder can output up to 19 volts, which can cause the device to develop faults; a regulated 12 volt supply is therefore recommended.
| https://en.wikipedia.org/wiki/Psion_Wavefinder
In telecommunications , a psophometer is an instrument that measures the perceptible noise of a telephone circuit. [ 1 ]
The core of the meter is based on a true RMS voltmeter , which measures the level of the noise signal. This design was used for the first psophometers, in the 1930s. [ 2 ] As the human-perceived level of noise matters more for telephony than its raw voltage, a modern psophometer incorporates a weighting network to represent this perception. [ 1 ] [ 2 ] [ 3 ] The characteristics of the weighting network depend on the type of circuit under investigation, such as whether the circuit is used for normal speech standards (300 Hz – 3.3 kHz) or for high-fidelity broadcast-quality sound (50 Hz – 15 kHz). [ 1 ]
The name was coined in the 1930s, on a basis from Ancient Greek : ψόφος , romanized : psóphos , lit. 'noise', itself derived from Ancient Greek : ψό , lit. 'an exclamation of disgust'. [ 4 ] It is unrelated to Ancient Greek : σοφός , romanized : sóphos , lit. 'wise'.
The '-meter' suffix Ancient Greek : μέτρον , romanized : métron , lit. 'tool for measuring' was already widely used in English, but also derives originally from Greek. [ 4 ] | https://en.wikipedia.org/wiki/Psophometer |
Psophometric voltage is a circuit noise voltage measured with a psophometer that includes a CCIF-1951 [ 1 ] weighting network .
"Psophometric voltage" should not be confused with "psophometric emf," i.e. , the emf in a generator or line with 600 Ω internal resistance. For practical purposes, the psophometric emf is twice the corresponding psophometric voltage.
Psophometric voltage readings, V , in millivolts, are commonly converted to dBm (psoph) by dBm(psoph) = 20 log 10 V – 57.78.
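A minimal sketch of this conversion (function name invented for illustration):

```python
# Minimal sketch: convert a psophometric voltage reading, in millivolts,
# to dBm (psoph) using the stated formula dBm = 20*log10(V) - 57.78.
import math

def psoph_mv_to_dbm(v_millivolts: float) -> float:
    return 20.0 * math.log10(v_millivolts) - 57.78

print(psoph_mv_to_dbm(1.0))    # -57.78
print(psoph_mv_to_dbm(775.0))  # ~0.01, i.e. roughly 0 dBm (775 mV across 600 ohms is about 1 mW)
```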
| https://en.wikipedia.org/wiki/Psophometric_voltage
Researchers have noted the relationship between psychedelics and ecology , particularly in relation to the altered states of consciousness (ASC) produced by psychedelic drugs and the perception of interconnectedness expressed through ecological ideas and themes produced by the psychedelic experience. This interconnectedness is felt through the direct experience of the unity of nature and the environment, in which the individual is no longer perceived as separate but as intimately connected and embedded. [ 1 ]
Swiss chemist Albert Hofmann , the first person to synthesize LSD , believed that the drug made one aware and sensitive to "the magnificence of nature and of the animal and plant kingdom" and the role of humanity in relation to nature. [ 2 ] Stanley Krippner and David Luke have speculated that "the consumption of psychedelic substances leads to an increased concern for nature and ecological issues". [ 3 ] As a result, American psychologist Ralph Metzner and several others have argued that psychedelic drug use was the impetus for the modern ecology movement in the late 1960s. [ 1 ]
In the context of the psychedelic experience, the term ecology is used to refer to two concepts: how organisms relate to themselves and their environment and the concept of the political movement that seeks to protect the environment. The psychedelic experience is said to result in the direct realization of the fundamental concept of interconnectedness such as the kind found in ecological relationships . Subjects undergoing an LSD psychedelic therapy session in a controlled, laboratory setting report boundary dissolution and the feeling of unity with nature during a psychedelic peak experience . [ 4 ] Vollenweider & Kometer (2010) note that measuring the "feelings of unity with the environment" can now be reliably assessed using the five-dimensional altered states of consciousness rating scale (5D-ASC) of which "oceanic boundlessness" is the primary dimension. [ 5 ] Research by Lerner & Lyvers (2006) and Studerus et al. (2010) show that the self-reported values and beliefs of psychedelic drug users indicate a higher concern for the environment than both non-users and users of other illegal drugs. It is unclear from the research whether the concern for the environment preceded the psychedelic experience or came about as a result of it. [ 6 ] Conversely, Lester Grinspoon reports that ecological awareness may result in psychedelic drug users forgoing the drug and non-users staying away from it entirely to remain "pure". In other words, ecological awareness may not precipitate psychedelic drug use, but may actually discourage it. [ 7 ]
It is likely that humans have consumed psychoactive plants in the ritual context of shamanism for thousands of years prior to the advent of Western civilization and the supplanting of indigenous cultural values. [ 8 ] [ 9 ] Anthropological archaeologist Gerardo Reichel-Dolmatoff studied the shamanic rituals of the indigenous Tucano people of South America and found that their shamanic practices primarily served to maintain ecological balance in the rainforest habitat. [ 10 ] Experts speculate that the ecological values of shamanism are an attribute of the psychedelic experience. [ 8 ]
Those who ingest psychoactive drugs often report similar experiences of ecological awareness. Swiss chemist Albert Hofmann, Norwegian philosopher Arne Næss , British religious studies scholar Graham Harvey, and American mycologist Paul Stamets have all written about the shared ecological message of the psychedelic experience. [ 11 ] [ 3 ] The back-to-the-land movement and the creation of rural intentional communities by the hippie counterculture of the 1960s was in part due to the wide use of psychedelic drugs which people felt helped them get in touch with nature. [ 12 ]
Utopian novels of the 1960s and 1970s illustrated this interrelationship between psychedelic drugs and ecological values. Aldous Huxley 's novel Island (1962) portrayed a utopian society that used psychedelic mushrooms while espousing ecological beliefs. The inhabitants believed that if they treated nature well, nature would treat them well in return; and if they hurt nature, nature would destroy them. [ 13 ] The novel, according to Ronald T. Sion, "reflected the mood of the rebellious American youth of the 1960s, particularly in their search for a communal life that promoted ecological principles." [ 14 ] Gerd Rohman called Island a "seminal influence on modern ecological thought." [ 14 ] More than a decade later, American writer Ernest Callenbach presented a similar story in Ecotopia (1975). In the novel, the members of Ecotopia secede from the United States to create an ecological utopia in the Pacific Northwest . Leslie Paul Thiele notes that in Ecotopia, the society actively uses and cultivates cannabis. "Like Huxley’s islanders", Thiele writes, the members of Ecotopia "facilitate ecological attunement through higher states of consciousness." [ 15 ] The notion that cannabis use is related to ecological awareness can be found in the belief systems of groups like the Rastafari movement , who maintain that cannabis use brings them "closer to the earth". [ 16 ] In more recent times, the ecologist movement Extinction Rebellion was allegedly founded after a psychedelic experience. [ 17 ]
Psychochemical warfare involves the use of psychopharmacological agents ( mind-altering drugs or chemicals ) with the intention of incapacitating an adversary through the temporary induction of hallucinations or delirium . [ 1 ] [ 2 ] These agents, often called " drug weapons ", are generally considered chemical weapons and, more narrowly, constitute a specific type of incapacitating agent .
Although never developed into an effective weapons system, psychochemical warfare theory and research—along with overlapping mind control drug research—was secretly pursued in the mid-20th century by the US military and Central Intelligence Agency (CIA) in the context of the Cold War . These research programs were ended when they came to light and generated controversy in the 1970s. The degree to which the Soviet Union developed or deployed similar agents during the same period remains largely unknown.
The use of chemicals to induce altered states of mind dates back to antiquity and includes the use of plants such as thornapple ( Datura stramonium ) that contain combinations of anticholinergic alkaloids . In 184 B.C., Hannibal 's army used belladonna plants to induce disorientation. [ 3 ]
Records indicate that in 1611, in the British Jamestown Colony of Virginia, an unidentified, but toxic and hallucinogenic, drug derived from local plants was deployed with some success against the white settlers by Chief Powhatan . [ 4 ]
In 1881, members of a French railway surveying expedition crossing Tuareg territory in North Africa ate dried dates that tribesmen had apparently deliberately contaminated with Egyptian henbane ( Hyoscyamus muticus , or H. falezlez ), to devastating effect. [ 5 ]
In the 1950s, the CIA investigated LSD (lysergic acid diethylamide) as part of its Project MKUltra . In the same period, the US Army undertook the secret Edgewood Arsenal human experiments which grew out of the U.S. chemical warfare program and involved studies of several hundred volunteer test subjects. Britain was also investigating the possible use of LSD and the chemical BZ ( 3-quinuclidinyl benzilate ) as nonlethal battlefield drug-weapons. [ 1 ] The United States eventually weaponized BZ for delivery in the M43 BZ cluster bomb until stocks were destroyed in 1989. Both the US and Britain concluded that the desired effects of drug weapons were unpredictable under battlefield conditions and gave up experimentation.
Reports of drug weapons associated with the Soviet bloc have been considered unreliable given the apparent absence of documentation in state archives. [ 6 ] [ 2 ] [ 7 ] | https://en.wikipedia.org/wiki/Psychochemical_warfare |
Various researchers have undertaken efforts to examine the psychological effects of Internet use . Some research employs studying brain functions in Internet users. Some studies assert that these changes are harmful, while others argue that the changes are beneficial. [ 1 ]
American writer Nicholas Carr asserts that Internet use reduces the deep thinking that leads to true creativity . He also says that hyperlinks and overstimulation means that the brain must give most of its attention to short-term decisions. Carr also states that the vast availability of information on the World Wide Web overwhelms the brain and hurts long-term memory . He says that the availability of stimuli leads to a very large cognitive load , which makes it difficult to remember anything. [ 2 ] [ 3 ]
Computer scientist Ramesh Sitaraman has asserted that Internet users are impatient and are likely to get more impatient with time. [ 7 ] In a large-scale research study [ 4 ] [ 8 ] completed in 2012 involving millions of users watching videos on the Internet, Krishnan and Sitaraman show that users start to abandon online videos if they do not start playing within two seconds. [ 9 ] In addition, users with faster Internet connections (such as FTTH ) showed less patience and abandoned videos at a faster rate than users with slower Internet connections. Many commentators have since argued that these results provide a glimpse into the future: as Internet services become faster and provide more instant gratification, people become less patient [ 5 ] [ 6 ] and less able to delay gratification and work towards longer-term rewards. [ 10 ]
Psychologist Steven Pinker , however, argues that people have control over what they do, and that research and reasoning never came naturally to people. He says that "experience does not revamp the basic information-processing capacities of the brain" and asserts that the Internet is actually making people smarter. [ 11 ]
The BBC describes the research published in the peer-reviewed science journal PLoS ONE :
Specialised MRI brain scans showed changes in the white matter of the brain—the part that contains nerve fibres—in those classed as being web addicts, compared with non-addicts. Furthermore, the study says, "We provided evidences demonstrating the multiple structural changes of the brain in IAD subjects. VBM results indicated the decreased gray matter volume in the bilateral dorsolateral prefrontal cortex (DLPFC), the supplementary motor area (SMA), the orbitofrontal cortex (OFC), the cerebellum and the left rostral ACC (rACC)." [ 13 ]
UCLA professor of psychiatry Gary Small studied brain activity in experienced web surfers versus casual web surfers. He used MRI scans on both groups to evaluate brain activity. The study showed that when Internet surfing, the brain activity of the experienced Internet users was far more extensive than that of the novices, particularly in areas of the prefrontal cortex associated with problem-solving and decision making. However, the two groups had no significant differences in brain activity when reading blocks of text. This evidence suggested that the distinctive neural pathways of experienced Web users had developed because of their Web use. Dr. Small concluded that "The current explosion of digital technology not only is changing the way we live and communicate, but is rapidly and profoundly altering our brains." [ 14 ]
In an August 2008 article in The Atlantic (" Is Google Making Us Stupid? "), Nicholas Carr experientially asserts that using the Internet can lead to lower attention span and make it more difficult to read in the traditional sense (that is, read a book at length without mental interruptions). He says that he and his friends have found it more difficult to concentrate and read whole books, even though they read a great deal when they were younger (that is, when they did not have access to the Internet). [ 15 ] This assertion is based on anecdotal evidence, not controlled research.
Researchers from the University College London have done a 5-year study on Internet habits, and have found that people using the sites exhibited "a form of skimming activity," hopping from one source to another and rarely returning to any source they'd already visited. The 2008 report says, "It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of "reading" are emerging as users "power browse" horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense." [ 16 ]
Research suggests that using the Internet helps boost brain power for middle-aged and older people [ 17 ] (research on younger people has not been done). The study compares brain activity when the subjects were reading and when the subjects were surfing the Internet. It found that Internet surfing uses much more brain activity than reading does. Lead researcher Professor Gary Small said: "The study results are encouraging, that emerging computerized technologies may have physiological effects and potential benefits for middle-aged and older adults. [ 18 ] Internet searching engages complicated brain activity, which may help exercise and improve brain function." [ 19 ]
One of the most widely debated effects of social networking has been its influence on productivity. In many schools and workplaces, social media sites are blocked because employers believe their employees will be distracted and unfocused on the sites. It seems, at least from one study, that employers do, indeed, have reason to be concerned. A survey from Hearst Communications found that productivity levels of people who used social networking sites were 1.5% lower than those of people who did not. [ 20 ] Logically, people cannot get work done when they are performing other tasks. If employees suffer from degraded self-control, it will be even harder for them to get back to work and maintain productivity.
Evgeny Morozov has said that social networking could be potentially harmful to people. He writes that they can destroy privacy, and notes that "Insurance companies have accessed their patients' Facebook accounts to try to disprove they have hard-to-verify health problems like depression; employers have checked social networking sites to vet future employees; university authorities have searched the web for photos of their students' drinking or smoking pot ." He also said that the Internet makes people more complacent and risk averse. Because of the ubiquity of modern technology (cameras, recorders, and the like), people may not want to act in unusual ways for fear of getting a bad name; since pictures and videos of a person can be posted and viewed on the Internet, people may behave differently as a result. [ 21 ]
According to the New York Times , many scientists say that "people's ability to focus is being undermined by bursts of information". [ 22 ]
Of 53,573 page views taken from various users, 17% of the views lasted less than 4 seconds while 4% lasted more than 10 minutes. In regards to page content, users will read only 49% of a site that contains 111 words or fewer, while users opt to read 28% of an average website (approximately 593 words). For each additional 100 words on a site, users will spend 4.4 seconds longer on the site. [ 23 ]
Those who read articles online have been found to go through the article more thoroughly than those who read from print-based materials. Upon choosing their reading material, online readers read 77% of the content, compared with 62% for broadsheet newspapers. [ 24 ]
Interacting on the Internet mostly does not involve "physical" interaction with another person (i.e. face-to-face conversation), and therefore easily leads people to feel free to act differently online, with less restraint in civility and a diminished sense of authority.
People who are socially anxious are more likely to use electronic communication as their only means of communication. This, in turn, makes them more likely to disclose personal information to strangers online that they normally wouldn't give out face-to-face. [ 25 ] The phenomenon is a likely cause for the prevalence of cyberbullying , especially for children who do not understand "social networking etiquette."
Internet anonymity can lead to online disinhibition , in which people do and say things online that they normally wouldn't do or say in person. Psychology researcher John Suler differentiates between benign disinhibition in which people can grow psychologically by revealing secret emotions, fears, and wishes and showing unusual acts of kindness and generosity and toxic disinhibition , in which people use rude language, harsh criticisms, anger, hatred and threats or visit pornographic or violent sites that they wouldn't in the 'real world.' [ 26 ]
People become addicted or dependent on the Internet through excessive computer use that interferes with daily life. Kimberly S. Young [ 27 ] links internet addiction disorder with existing mental health issues, most commonly depression. Young states that the disorder has significant effects socially, psychologically and occupationally.
"Aric Sigman's presentation to members of the Royal College of Paediatrics and Child Health outlined the parallels between screen dependency and alcohol and drug addiction: the instant stimulation provided by all those flickering graphics leads to the release of dopamine , a chemical that's central to the brain's reward system". [ 28 ]
A 2009 study suggested that brain structural changes were present in those classified by the researchers as Internet addicted, similar to those classified as chemically addicted. [ 29 ]
In one study, the researchers selected seventeen subjects with online gaming addiction and another seventeen naive internet users who rarely used the internet. Using a magnetic resonance imaging scanner, they performed a scan to "acquire 3-dimensional T1-weighted images" of the subject's brain. The results of the scan revealed that online gaming addiction "impairs gray and white matter integrity in the orbitofrontal cortex of the prefrontal regions of the brain". [ 30 ] According to Keath Low, psychotherapist, the orbitofrontal cortex "has a major impact on our ability to perform such tasks as planning, prioritizing, paying attention to and remembering details, and controlling our attention". [ 31 ] As a result, Low believes that these online gaming addicts are incapable of prioritizing their life or setting a goal and accomplishing it because of the impairment of their orbitofrontal cortex.
Ease of access to the Internet can increase escapism in which a user uses the Internet as an "escape" from the perceived unpleasant or banal aspects of daily / real life . [ 32 ] Because the internet and virtual realities easily satisfy social needs and drives, according to Jim Blascovich and Jeremy Bailensen, "sometimes [they are] so satisfying that addicted users will withdraw physically from society." Stanford psychiatrist Dr. Elias Aboujaoude believes that advances in virtual reality and immersive 3-D have led us to "where we can have a 'full life' [online] that can be quite removed from our own." Eventually, virtual reality may drastically change a person's social and emotional needs. "We may stop 'needing' or craving real social interactions because they may become foreign to us," Aboujaoude says. [ 33 ]
Psychological distress has been found to influence and increase escapism. Escapism, in turn, increases the likelihood of internet addiction , compulsive internet use, gaming addiction , and further harmful consequences. [ 34 ] [ 35 ]
The Internet has an impact on all age groups, from elders to children. According to the article 'Digital power: exploring the effects of social media on children's spirituality', children consider the Internet to be their third place after home and school. [ 36 ]
One of the main effects social media has had on children is cyber bullying. A study of 177 students in Canada found that "15% of the students admitted that they cyberbullied others" while "40% of the cyber victims had no idea who the bullies were". [ 37 ] The psychological harm cyber bullying can cause is reflected in low self-esteem, depression and anxiety. It also opens up avenues for manipulation and control. Cyber bullying has ultimately led to depression, anxiety and, in severe cases, suicide. Suicide is the third leading cause of death for youth between the ages of 10 and 24. Cyber bullying is rapidly increasing. Some writers have suggested monitoring and educating children from a young age about the risks associated with cyber bullying. [ 38 ]
Children spend, on average, 27 hours a week on the internet, and the figure is increasing. This leads to an increased risk of insomnia. [ 39 ]
Screen time affects children in many ways: not only are children at an increased risk of insomnia, but they are also at risk of developing eye and general health problems. A study done in 2018 showed that young children are experiencing Computer Vision Syndrome, also referred to as Digital Eye Strain, with symptoms that include blurred or double vision, headaches, eye fatigue, and more. Many children have to wear glasses at a younger age due to an excessive amount of screen time. [ 40 ] The National Longitudinal Study of Adolescent Health conducted a study on adolescents in grades 7–12 and found that more screen time increases the risk of obesity. Reducing the amount of time children spend on the internet can help prevent diseases such as obesity and diabetes. [ 41 ]
"A psychologist, Aric Sigman, warned of the perils of "passive parenting " and "benign neglect " caused by parent's reliance on gadgets". [ 28 ] In some cases, parents' internet addictions can have drastic effects on their children. In 2009, a three-year-old girl from New Mexico died of malnutrition and dehydration on the same day that her mother was said to have spent 15 hours playing World of Warcraft online. [ 33 ] In another case in 2014, a Korean couple became so immersed in a video game that allowed them to raise a virtual child online that they let their real baby die. [ 42 ] The effects of the Internet on parenting can be observed in a how parents utilize the Internet, the response to their child's Internet consumption, as well as the effects and influences that the Internet has on the relationship between parent and child.
Overall, parents are seen to do simple tasks such as sending e-mails and keeping up with current events, whereas social networking sites are less frequented. With regard to researching parenting material, a study conducted in January 2012 by the University of Minnesota found that 75% of questioned parents stated that the Internet improves their method of obtaining parenting-related information, 19.7% found parenting websites too complex to navigate, and 13.1% of the group did not find any useful parenting information on any website. [ 43 ]
Many studies have shown that parents view the Internet as a hub of information, especially in their children's education. [ 44 ] They feel that it is a valuable commodity that can enhance their children's learning experience, and when used in this manner it does not contribute to any family tension or conflicts. However, when the Internet is used as a social medium (either online gaming or social networking sites), there is a positive correlation between the use of the Internet and family conflicts. In conjunction with using the Internet for social means, there is a risk of exposing familial information to strangers, which is perceived by parents as a threat and can ultimately weaken family boundaries.
A report released in October 2012 by Ofcom focused on the amount of online consumption done by children aged 5–15 and how the parents react to their child's consumption. Of the parents interviewed, 85% use a form of online mediation ranging from face-to-face talks with their children about online surfing to cellphone browser filters. The remaining 15% of parents do not take active measures to adequately inform their children of safe Internet browsing; these parents have either spoken only briefly to their children about cautious surfing or do not do anything at all.
Parents are active in monitoring their child's online use by using methods such as investigating the browsing history and by regulating Internet usage. However, since parents are less versed in Internet usage than their children, they are more concerned with the Internet interfering with family life than with online matters such as child grooming or cyber-bullying .
When addressing those with a lack of parental control over the Internet, parents state that their child is rarely alone (defined for children from 5–11 years old) or that they trust their children when they are online (for children 12–15 years old). Approximately 80% of parents ensure that their child has been taught Internet safety at school, and 70% of parents feel that the benefits of using the Internet are greater than the risks that come along with it. [ 45 ]
Conversely, an American study conducted by PewInternet, released on 20 November 2012, reveals that parents are highly concerned about the problems the Internet can impose on their teenage children: 47% of parents worry about their children being exposed to inappropriate material on the Internet, and 45% are concerned about their children's behaviour towards each other, both online and offline. Only 31% of parents showed concern about the Internet taking away social time from the family. [ 46 ]
Researcher Sanford Grossbart and others explore the relationship between the mother and child and how Internet use affects this relationship. This study forms its basis around Marvin Sussman and Suzanne Steinmetz's idea that the relationship between parent and child is highly influenced by the changing experiences and events of each generation. [ 47 ] "Parental warmth" is a factor in how receptive a parent is to being taught the nuances of the Internet by their child, versus the traditional method of the parent influencing the child. If the mother displayed "warm" tendencies, she was more open to learning how to use the Internet from her child, even if she happened to be more knowledgeable on the subject. This fosters teaching in a positive environment, which sustains a strong relationship between mother and child, encourages education, and promotes mature behaviour. "Cooler" mothers only allowed themselves to be taught if they thought that their child held the same amount of knowledge or greater, and would dismiss the teaching otherwise, suggesting a relationship in which the majority of influence comes from the parent. [ 48 ]
However, regardless of warm or cool parenting methods, parents who encounter a language barrier rely more heavily on their children to utilize the Internet. Vikki Katz of Rutgers University has studied the interaction between immigrant parents and children and how they use technology. Katz notes that the majority of resources that immigrants find helpful are located online; however, the search algorithms currently in place do not handle languages other than English appropriately. Because of this shortcoming, parents strongly encourage their bilingual children to bridge the gap between the Internet and their language. [ 49 ]
The Internet is increasingly being used as a virtual babysitter when parents actively download applications specifically for their children with the intention of keeping them calm. A survey conducted by Ipsos found that half of the interviewed parents believe children ages 8–13 are old enough to own or carry smartphones, thus increasing online content consumption in younger generations. [ 50 ] | https://en.wikipedia.org/wiki/Psychological_effects_of_Internet_use
Psychologism is a family of philosophical positions, according to which certain psychological facts, laws, or entities play a central role in grounding or explaining certain non-psychological facts, laws, or entities. The word was coined by Johann Eduard Erdmann as Psychologismus , being translated into English as psychologism . [ 1 ] [ 2 ]
The Oxford English Dictionary defines psychologism as: "The view or doctrine that a theory of psychology or ideas forms the basis of an account of metaphysics, epistemology, or meaning; (sometimes) spec. the explanation or derivation of mathematical or logical laws in terms of psychological facts." [ 3 ] Psychologism in epistemology , the idea that its problems "can be solved satisfactorily by the psychological study of the development of mental processes", was argued in John Locke 's An Essay Concerning Human Understanding (1690). [ 4 ]
Other forms of psychologism are logical psychologism and mathematical psychologism. Logical psychologism is a position in logic (or the philosophy of logic ) according to which logical laws and mathematical laws are grounded in, derived from, explained or exhausted by psychological facts or laws. Psychologism in the philosophy of mathematics is the position that mathematical concepts and/or truths are grounded in, derived from or explained by psychological facts or laws. [ 5 ]
John Stuart Mill was accused by Edmund Husserl of being an advocate of a type of logical psychologism, although this may not have been the case. [ 6 ] So were many nineteenth-century German philosophers such as Christoph von Sigwart , Benno Erdmann , Theodor Lipps , Gerardus Heymans , Wilhelm Jerusalem , and Theodor Elsenhans , [ 7 ] as well as a number of psychologists, past and present (e.g., Wilhelm Wundt [ 7 ] and Gustave Le Bon ). [ 8 ]
Psychologism was notably criticized by Gottlob Frege in his anti-psychologistic work The Foundations of Arithmetic , and many of his works and essays, including his review of Husserl's Philosophy of Arithmetic . [ 9 ] Husserl, in the first volume of his Logical Investigations , called "The Prolegomena of Pure Logic", criticized psychologism thoroughly and sought to distance himself from it. Frege's arguments were largely ignored, while Husserl's were widely discussed. [ 1 ]
In "Psychologism and Behaviorism", Ned Block describes psychologism in the philosophy of mind as the view that "whether behavior is intelligent behavior depends on the character of the internal information processing that produces it." This is in contrast to a behavioral view which would state that intelligence can be ascribed to a being solely via observing its behavior. This latter type of behavioral view is strongly associated with the Turing test . [ 10 ] [ 11 ] | https://en.wikipedia.org/wiki/Psychologism |
Psychology, philosophy and physiology ( PPP ) was a degree program at the University of Oxford . It was Oxford's first psychology degree, beginning in 1947, but admitted its last students in October 2010. It has been, in part, replaced by psychology, philosophy, and linguistics (PPL, in which students usually study two of the three subjects).
PPP covered the study of thought and behaviour from the differing points of view of psychology , physiology and philosophy . Psychology includes social interaction , learning , child development , mental illness and information processing . Physiology considers the organization of the brain and body of mammals and humans , from the molecular level to the organism as a whole. Philosophy is concerned with ethics , knowledge , the mind , etc.
| https://en.wikipedia.org/wiki/Psychology,_philosophy_and_physiology
The psychology of reasoning (also known as the cognitive science of reasoning [ 1 ] ) is the study of how people reason , often broadly defined as the process of drawing conclusions to inform how people solve problems and make decisions . [ 2 ] It overlaps with psychology , philosophy , linguistics , cognitive science , artificial intelligence , logic , and probability theory .
Psychological experiments on how humans and other animals reason have been carried out for over 100 years. An enduring question is whether or not people have the capacity to be rational. Current research in this area addresses various questions about reasoning, rationality, judgments, intelligence , relationships between emotion and reasoning, and development.
One of the most obvious areas in which people employ reasoning is with sentences in everyday language. Most experimentation on deduction has been carried out on hypothetical thought, in particular, examining how people reason about conditionals , e.g., If A then B . [ 3 ] Participants in experiments make the modus ponens inference, given the indicative conditional If A then B , and given the premise A , they conclude B . However, given the indicative conditional and the minor premise for the modus tollens inference, not-B , about half of the participants in experiments conclude not-A and the remainder concludes that nothing follows. [ 3 ]
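Both inference forms are valid in classical logic, which can be checked mechanically with a truth table; the asymmetry lies in human endorsement rates, not in the logic. A minimal sketch (helper names invented for illustration):

```python
# Minimal sketch: truth-table check that modus ponens and modus tollens
# are both classically valid inference forms.
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b  # material conditional "if A then B"

def valid(premises, conclusion) -> bool:
    # Valid iff the conclusion holds in every world where all premises hold.
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

modus_ponens = valid([implies, lambda a, b: a], lambda a, b: b)           # True
modus_tollens = valid([implies, lambda a, b: not b], lambda a, b: not a)  # True
print(modus_ponens, modus_tollens)
```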
The ease with which people make conditional inferences is affected by context, as demonstrated in the well-known selection task developed by Peter Wason . Participants are better able to test a conditional in an ecologically relevant context , e.g., if the envelope is sealed then it must have a 50 cent stamp on it , compared to one that contains symbolic content, e.g., if the letter is a vowel then the number is even . [ 3 ] Background knowledge can also lead to the suppression of even the simple modus ponens inference. [ 4 ] Participants given the conditional if Lisa has an essay to write then she studies late in the library and the premise Lisa has an essay to write make the modus ponens inference 'she studies late in the library', but the inference is suppressed when they are also given a second conditional if the library stays open then she studies late in the library . Interpretations of the suppression effect are controversial. [ 5 ] [ 6 ]
Other investigations of propositional inference examine how people think about disjunctive alternatives, e.g., A or else B , and how they reason about negation, e.g., It is not the case that A and B . Many experiments have been carried out to examine how people make relational inferences, including comparisons, e.g., A is better than B . Such investigations also concern spatial inferences, e.g. A is in front of B and temporal inferences, e.g. A occurs before B . [ 7 ] Other common tasks include categorical syllogisms , used to examine how people reason about quantifiers such as All or Some , e.g., Some of the A are not B . [ 8 ] [ 9 ] For example if all A are B and some B are C, what (if anything) follows?
There are several alternative theories of the cognitive processes that human reasoning is based on. [ 10 ] One view is that people rely on a mental logic consisting of formal (abstract or syntactic) inference rules similar to those developed by logicians in the propositional calculus . [ 11 ] Another view is that people rely on domain-specific or content-sensitive rules of inference. [ 12 ] A third view is that people rely on mental models , that is, mental representations that correspond to imagined possibilities. [ 13 ] A fourth view is that people compute probabilities. [ 14 ] [ 15 ]
One controversial theoretical issue is the identification of an appropriate competence model, or a standard against which to compare human reasoning. Initially classical logic was chosen as a competence model. [ 16 ] [ 17 ] Subsequently, some researchers opted for non-monotonic logic [ 18 ] [ 19 ] and Bayesian probability. [ 14 ] [ 15 ] Research on mental models and reasoning has led to the suggestion that people are rational in principle but err in practice. [ 7 ] [ 8 ] Connectionist approaches to reasoning have also been proposed. [ 20 ] Despite the ongoing debate about the cognitive processes involved in human reasoning, recent research has shown that multiple approaches can be useful in modeling human thinking. For instance, studies have found that people's reasoning is often influenced by their prior beliefs, which can be modeled using Bayesian probability theory. [ 21 ] Additionally, research on mental models has shown that people tend to reason about problems by constructing multiple mental representations of the situation, which can help them identify relevant features and make inferences based on their understanding of the problem. Connectionist approaches, which model inference with neural networks that learn from data and generalize to new situations, have also gained attention.
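The influence of prior beliefs on reasoning is straightforward to express in Bayesian terms: two reasoners who weigh the same evidence identically but start from different priors reach different conclusions. A minimal sketch, with all numbers chosen purely for illustration:

```python
def posterior(prior, p_e_given_h=0.8, p_e_given_not_h=0.3):
    """Bayes' rule: P(H|E) = P(E|H) P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

print(round(posterior(0.5), 3))  # 0.727 -- a neutral prior
print(round(posterior(0.1), 3))  # 0.229 -- a skeptical prior damps the same evidence
```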
It is an active question in psychology how, why, and when the ability to reason develops from infancy to adulthood. [ 22 ] Jean Piaget 's theory of cognitive development [ 23 ] posited general mechanisms and stages in the development of reasoning from infancy to adulthood. According to the neo-Piagetian theories of cognitive development , changes in reasoning with development come from increasing working memory capacity, increasing speed of processing , and enhanced executive functions and control. Increasing self-awareness is also an important factor. [ 24 ]
In their book The Enigma of Reason, the cognitive scientists Hugo Mercier and Dan Sperber put forward an "argumentative" theory of reasoning, claiming that humans evolved to reason primarily to justify their beliefs and actions and to convince others in a social environment. [ 25 ] Key evidence for their theory includes the errors in reasoning, such as logical fallacies, that solitary individuals are prone to when their arguments go uncriticized, and the observation that groups become much better at cognitive reasoning tasks when they communicate with one another and can evaluate each other's arguments. Mercier and Sperber thereby offer one way to resolve the apparent paradox that confirmation bias is so strong even though the function of reasoning naively appears to be reaching veridical conclusions about the world.
The study of the development of reasoning abilities is an ongoing area of research in psychology, and multiple factors have been proposed to explain how, why, and when reasoning develops from infancy to adulthood. Recent research has suggested that early experiences and social interactions play a critical role in the development of reasoning abilities. [ 26 ] For example, studies have shown that infants as young as six months old can engage in basic logical reasoning, such as reasoning about the relationship between objects and their properties. Furthermore, research has highlighted the importance of parental interaction and cognitive stimulation in the development of children's reasoning abilities. Additionally, studies have suggested that cultural factors, such as educational practices and the emphasis on critical thinking, can also influence the development of reasoning skills across different populations.
Philip Johnson-Laird, in an attempt to taxonomize thought, distinguished between goal-directed thinking and thinking without a goal, in which ideas follow one another by association, as when reading material unrelated to any task. He argues that goal-directed reasoning can be classified according to the problem space involved in a solution, citing Allen Newell and Herbert A. Simon. [ 27 ] : 454
Inductive reasoning makes broad generalizations from specific cases or observations. In this process of reasoning, general assertions are made based on specific pieces of past evidence. This kind of reasoning allows the conclusion to be false even if the original statements are true. [ 28 ] For example, after observing one college athlete, one might make predictions and assumptions about other college athletes based on that single observation. Scientists use inductive reasoning to create theories and hypotheses. [ 29 ] Philip Johnson-Laird distinguished inductive from deductive reasoning in that the former creates semantic information while the latter does not. [ 27 ] : 439
By contrast, deductive reasoning is a basic form of valid reasoning. [ 29 ] In this reasoning process a person starts with a known claim or a general belief and asks what follows from these foundations, or how these premises will influence other beliefs. [ 28 ] In other words, deduction starts with a hypothesis and examines the possibilities to reach a conclusion. [ 29 ] Deduction helps people understand why their predictions are wrong and indicates that their prior knowledge or beliefs are off track. An example of deduction can be seen in the scientific method when testing hypotheses and theories. Although the conclusion usually corresponds with, and therefore supports, the hypothesis, there are cases where an argument is logically valid but not sound. For example, the argument, "All young girls wear skirts; Julie is a young girl; therefore, Julie wears skirts" is logically valid, but is not sound because the first premise isn't true.
The syllogism is a form of deductive reasoning in which two statements lead to a logical conclusion. With this reasoning, one statement could be "Every A is B" and another could be "This C is A". Those two statements could then lead to the conclusion that "This C is B". Syllogisms of this kind are used to test deductive reasoning by checking whether a conclusion validly follows. [ 29 ] A Syllogistic Reasoning Task was created in a study by Kinga Morsanyi and Simon Handley that examined the intuitive contributions to reasoning. They used this test to assess why "syllogistic reasoning performance is based on an interplay between a conscious and effortful evaluation of logicality and an intuitive appreciation of the believability of the conclusions". [ 30 ]
Another form of reasoning is called abductive reasoning. This type is based on creating and testing hypotheses using the best information available. Abductive reasoning produces the kind of daily decision-making that works best with the information present, which is often incomplete. This can involve making educated guesses about observed but unexplained phenomena. This type of reasoning can be seen in the world when doctors make diagnostic decisions from a set of test results or when jurors use the relevant evidence to make decisions about a case. [ 29 ]
Apart from the aforementioned types of reasoning, there is also analogical reasoning, which involves comparing two different situations or concepts and transferring what is known about one to draw conclusions about the other. It can be used to make predictions or solve problems by finding similarities between two domains and transferring knowledge from one to the other. For example, a problem-solving approach that works in one domain may be applied to a new, similar problem in a different domain. Analogical reasoning is particularly useful in scientific discovery and problem-solving tasks, as it can help generate hypotheses, create new theories, and develop innovative solutions. [ 31 ] However, it can also lead to errors if the similarities between domains are too superficial or if the analogy is based on false assumptions.
Judgment and reasoning involve thinking through the options, making a judgment or conclusion and finally making a decision. Making judgments involves heuristics, or efficient strategies that usually lead one to the right answers. [ 28 ] The most common heuristics used are attribute substitution, the availability heuristic , the representativeness heuristic and the anchoring heuristic – these all aid in quick reasoning and work in most situations. Heuristics allow for errors, a price paid to gain efficiency. [ 28 ]
Other errors in judgment, which in turn affect reasoning, include errors in assessing covariation – a relationship between two variables such that the presence and magnitude of one can predict the presence and magnitude of the other. [ 28 ] One source of error in assessing covariation is confirmation bias, the tendency to be more responsive to evidence that confirms one's own beliefs. Assessing covariation can also be pulled off track by neglecting base-rate information – how frequently something occurs in general – which people often ignore in favour of other information presented. [ 28 ]
There are more sophisticated judgment strategies that result in fewer errors. People often reason based on availability, but sometimes they look for other, more accurate information to make judgments. [ 32 ] This suggests there are two ways of thinking, known as the dual-process model. [ 33 ] The first, System 1, is fast, automatic and uses heuristics – closer to intuition. The second, System 2, is slower, effortful and more likely to be correct – closer to deliberate reasoning. [ 28 ]
The inferences people draw are related to factors such as linguistic pragmatics and emotion . [ 34 ] [ 35 ]
Decision making is often influenced by the emotion of regret and by the presence of risk. When people are presented with options, they tend to select the one that they think they will regret the least. [ 36 ] In decisions that involve a large amount of risk, people tend to ask themselves how much dread they would experience were a worst-case scenario to occur, e.g. a nuclear accident, and then use that dread as an indicator of the level of risk. [ 37 ]
Antonio Damasio suggests that somatic markers – certain memories that can cause a strong bodily reaction – also act as a guide to decision making. For example, when a person remembers a scary movie and once again becomes tense, their palms might begin to sweat. Damasio argues that when making a decision people rely on these "gut feelings" to assess various options, steering them toward options associated with positive feelings and away from those associated with negative ones. [ 38 ] He also argues that the orbitofrontal cortex – located at the base of the frontal lobe, just above the eyes – is crucial in the use of somatic markers, because it is the part of the brain that allows people to interpret emotion.
When emotion shapes decisions, the influence is usually based on predictions of the future. When people ask themselves how they would react, they are making inferences about the future. Researchers suggest affective forecasting , the ability to predict one's own emotions, is poor because people tend to overestimate how much they will regret their errors. [ 39 ]
Another factor that can influence decision making is linguistic pragmatics, which refers to the use of language in social contexts. Language can convey different levels of politeness, power, and intention, which can all affect how people interpret and respond to messages. For example, if a boss asks an employee to complete a task in a commanding tone, the employee may feel more pressured to complete it quickly than if the boss had asked politely. Similarly, if someone uses sarcasm or irony, it can be difficult for the listener to discern their true meaning, leading to misinterpretation and potentially poor decision making. [ 40 ] In addition to linguistic pragmatics, cultural and social factors can also play a role in decision making. Different cultures may have different norms and values, which can influence how people approach decisions. For example, in collectivistic cultures, decisions may be made based on what is best for the group, whereas in individualistic cultures, decisions may prioritize individual needs and desires. Decision making is thus a complex process shaped by emotion, risk, pragmatics, and cultural background.
Studying reasoning neuroscientifically involves determining the neural correlates of reasoning, often investigated using event-related potentials and functional magnetic resonance imaging. [ 41 ] In fMRI studies, participants are presented with variations of tasks to determine the different cognitive processes required. This is done by cross-referencing where in the brain there is more or less activation (as indexed by the blood-oxygen-level-dependent signal) across the different conditions with what other studies have found for those regions. For example, if a condition leads to more activation of the hippocampus, this may be interpreted as being related to memory retrieval – particularly if the theoretical framing of the task suggests that this is necessary. [ 42 ] | https://en.wikipedia.org/wiki/Psychology_of_reasoning |
Psychoneuroimmunology ( PNI ), also referred to as psychoendoneuroimmunology ( PENI ) or psychoneuroendocrinoimmunology ( PNEI ), is the study of the interaction between psychological processes and the nervous and immune systems of the human body. [ 1 ] [ 2 ] It is a subfield of psychosomatic medicine . PNI takes an interdisciplinary approach, incorporating psychology , neuroscience , immunology , physiology , genetics , pharmacology , molecular biology , psychiatry , behavioral medicine , infectious diseases , endocrinology , and rheumatology .
The main interests of PNI are the interactions between the nervous and immune systems and the relationships between mental processes and health . [ 3 ] PNI studies, among other things, the physiological functioning of the neuroimmune system in health and disease; disorders of the neuroimmune system ( autoimmune diseases ; hypersensitivities ; immune deficiency ); and the physical, chemical and physiological characteristics of the components of the neuroimmune system in vitro , in situ , and in vivo .
Mid-20th century studies of psychiatric patients reported immune alterations in psychotic individuals, including lower numbers of lymphocytes [ 4 ] [ 5 ] and poorer antibody response to pertussis vaccination , compared with nonpsychiatric control subjects. [ 6 ] In 1964, George F. Solomon, from the University of California in Los Angeles , and his research team coined the term "psychoimmunology" and published a landmark paper: "Emotions, immunity, and disease: a speculative theoretical integration." [ 7 ]
In 1975, Robert Ader and Nicholas Cohen, at the University of Rochester, advanced PNI with their demonstration of classical conditioning of immune function, and they subsequently coined the term "psychoneuroimmunology". [ 8 ] [ 9 ] Ader was investigating how long conditioned responses (in the sense of Pavlov's conditioning of dogs to drool when they heard a bell ring) might last in laboratory rats. To condition the rats, he paired saccharin-laced water (the conditioned stimulus) with the drug Cytoxan, which unconditionally induces nausea, taste aversion and suppression of immune function. Ader was surprised to discover that after conditioning, just feeding the rats saccharin-laced water was associated with the death of some animals, and he proposed that they had been immunosuppressed after receiving the conditioned stimulus. Ader (a psychologist) and Cohen (an immunologist) directly tested this hypothesis by deliberately immunizing conditioned and unconditioned animals, exposing these and other control groups to the conditioned taste stimulus, and then measuring the amount of antibody produced. The highly reproducible results revealed that conditioned rats exposed to the conditioned stimulus were indeed immunosuppressed. In other words, a signal via the nervous system (taste) was affecting immune function. This was one of the first scientific experiments to demonstrate that the nervous system can affect the immune system.
In the 1970s, Hugo Besedovsky, Adriana del Rey and Ernst Sorkin, working in Switzerland, reported multi-directional immune-neuro-endocrine interactions, showing that not only can the brain influence immune processes, but the immune response itself can also affect the brain and neuroendocrine mechanisms. They found that immune responses to innocuous antigens trigger an increase in the activity of hypothalamic neurons [ 10 ] [ 11 ] and hormonal and autonomic nerve responses that are relevant for immunoregulation and are integrated at brain levels (see review [ 12 ] ). On these bases, they proposed that the immune system acts as a sensorial receptor organ that, besides its peripheral effects, can communicate its state of activity to the brain and associated neuro-endocrine structures. [ 11 ] These investigators also identified products from immune cells, later characterized as cytokines, that mediate this immune-brain communication [ 13 ] (more references in [ 12 ] ).
In 1981, David L. Felten , then working at the Indiana University School of Medicine , and his colleague JM Williams, discovered a network of nerves leading to blood vessels as well as cells of the immune system. The researchers also found nerves in the thymus and spleen terminating near clusters of lymphocytes , macrophages , and mast cells , all of which help control immune function. This discovery provided one of the first indications of how neuro-immune interaction occurs.
Ader, Cohen, and Felten went on to edit the groundbreaking book Psychoneuroimmunology in 1981, which laid out the underlying premise that the brain and immune system represent a single, integrated system of defense.
In 1985, research by neuropharmacologist Candace Pert, of the National Institutes of Health at Georgetown University, revealed that neuropeptide-specific receptors are present on the cell membranes of both the brain and the immune system. [ 14 ] [ 15 ] The discovery that neuropeptides and neurotransmitters act directly upon the immune system shows their close association with emotions and suggests mechanisms through which emotions, arising from the limbic system, and immunology are deeply interdependent. The demonstration that the brain modulates the immune and endocrine systems, and that these in turn signal back to the central nervous system, changed the understanding of emotions, as well as disease.
Contemporary advances in psychiatry, immunology, neurology, and other integrated disciplines of medicine have fostered enormous growth for PNI. The mechanisms underlying behaviorally induced alterations of immune function, and immune alterations inducing behavioral changes, are likely to have clinical and therapeutic implications that will not be fully appreciated until more is known about the extent of these interrelationships in normal and pathophysiological states.
PNI research looks for the exact mechanisms by which specific neuroimmune effects are achieved. Evidence for nervous-immunological interactions exists at multiple biological levels.
The immune system and the brain communicate through signaling pathways. The brain and the immune system are the two major adaptive systems of the body. Two major pathways are involved in this cross-talk: the hypothalamic–pituitary–adrenal axis (HPA axis) and the sympathetic nervous system (SNS), via the sympathetic–adrenal–medullary axis (SAM axis). The activation of the SNS during an immune response may serve to localize the inflammatory response.
The body's primary stress management system is the HPA axis. The HPA axis responds to physical and mental challenge to maintain homeostasis in part by controlling the body's cortisol level. Dysregulation of the HPA axis is implicated in numerous stress-related diseases, with evidence from meta-analyses indicating that different types/duration of stressors and unique personal variables can shape the HPA response. [ 16 ]
The major hormones involved in the HPA axis are corticotropin-releasing hormone (CRH), released by the hypothalamus; adrenocorticotropic hormone (ACTH), released by the pituitary; and cortisol, released by the adrenal cortex.
Molecules called pro-inflammatory cytokines, which include interleukin-1 (IL-1), interleukin-2 (IL-2), interleukin-6 (IL-6), interleukin-12 (IL-12), interferon-gamma (IFN-gamma) and tumor necrosis factor alpha (TNF-alpha), can affect brain growth as well as neuronal function. Circulating immune cells such as macrophages, as well as glial cells (microglia and astrocytes), secrete these molecules. Cytokine regulation of hypothalamic function is an active area of research for the treatment of anxiety-related disorders. [ 25 ]
HPA axis activity and cytokines are intrinsically intertwined: inflammatory cytokines stimulate adrenocorticotropic hormone (ACTH) and cortisol secretion, while, in turn, glucocorticoids suppress the synthesis of pro-inflammatory cytokines. Cytokines mediate and control immune and inflammatory responses. Complex interactions exist between cytokines, inflammation and the adaptive responses in maintaining homeostasis. Like the stress response, the inflammatory reaction is crucial for survival. Systemic inflammatory reactions stimulate four major physiological programs. [ 26 ]
These are mediated by the HPA axis and the SNS. Common human diseases such as allergy, autoimmunity, chronic infections and sepsis are characterized by a dysregulation of the pro-inflammatory versus anti-inflammatory and T helper (Th1) versus (Th2) cytokine balance. [ 30 ] Recent studies show pro-inflammatory cytokine processes take place during depression, mania and bipolar disease, in addition to autoimmune hypersensitivity and chronic infections. [ 31 ]
Chronic secretion of stress hormones, glucocorticoids (GCs) and catecholamines (CAs), as a result of disease, may reduce the effect of neurotransmitters such as serotonin, norepinephrine and dopamine, or of their receptors in the brain, thereby leading to the dysregulation of neurohormones. [ 32 ] Under stimulation, norepinephrine is released from the sympathetic nerve terminals in organs, and the target immune cells express adrenoreceptors. Through stimulation of these receptors, locally released norepinephrine, or circulating catecholamines such as epinephrine, affect lymphocyte traffic, circulation, and proliferation, and modulate cytokine production and the functional activity of different lymphoid cells.
Glucocorticoids also inhibit the further secretion of corticotropin-releasing hormone from the hypothalamus and ACTH from the pituitary ( negative feedback ). Under certain conditions stress hormones may facilitate inflammation through induction of signaling pathways and through activation of the corticotropin-releasing hormone.
These abnormalities, and the failure of the adaptive systems to resolve inflammation, affect the well-being of the individual, including behavioral parameters, quality of life and sleep, as well as indices of metabolic and cardiovascular health. They can develop into a "systemic anti-inflammatory feedback" and/or "hyperactivity" of local pro-inflammatory factors, which may contribute to the pathogenesis of disease.
This systemic or neuro-inflammation and neuroimmune activation have been shown to play a role in the etiology of a variety of neurodegenerative disorders such as Parkinson's and Alzheimer's disease , multiple sclerosis , pain, and AIDS -associated dementia. However, cytokines and chemokines also modulate central nervous system (CNS) function in the absence of overt immunological, physiological, or psychological challenges. [ 33 ]
There are now sufficient data to conclude that immune modulation by psychosocial stressors and/or interventions can lead to actual health changes. Recent data show that psychosocial interventions are associated with decreased proinflammatory cytokines and increased immune cell counts. These results were further supported when combined with cognitive behavioral therapy (CBT) treatment. [ 34 ] Although changes related to infectious disease and wound healing have provided the strongest evidence to date, the clinical importance of immunological dysregulation is highlighted by increased risks across diverse conditions and diseases. For example, stressors can produce profound health consequences. In one epidemiological study, all-cause mortality increased in the month following a severe stressor – the death of a spouse. [ 35 ] Theorists propose that stressful events trigger cognitive and affective responses which, in turn, induce sympathetic nervous system and endocrine changes, and these ultimately impair immune function. [ 36 ] [ 37 ] Potential health consequences are broad, and include rates of infection, [ 38 ] [ 39 ] HIV progression, [ 40 ] [ 41 ] cancer incidence and progression, [ 35 ] [ 42 ] [ 43 ] and high rates of infant mortality. [ 44 ] [ 45 ]
One study of patients with PTSD revealed that DHEA, a precursor for the synthesis of molecules released from the adrenal cortex in response to stress, has antiglucocorticoid properties. Under stress, blood DHEA levels rapidly increase. A higher DHEA-to-cortisol ratio is correlated with resilience, while a lower DHEA-to-cortisol ratio is associated with more severe forms of PTSD. [ 46 ]
Stress is thought to affect immune function through emotional and/or behavioral manifestations such as anxiety , fear , tension , anger and sadness and physiological changes such as heart rate , blood pressure , and sweating . Researchers have suggested that these changes are beneficial if they are of limited duration, [ 36 ] but when stress is chronic, the system is unable to maintain equilibrium or homeostasis ; the body remains in a state of arousal, where digestion is slower to reactivate or does not reactivate properly, often resulting in indigestion. Furthermore, blood pressure stays at higher levels. [ 47 ]
In one of the earlier PNI studies, published in 1960, subjects were led to believe that they had accidentally caused serious injury to a companion through misuse of explosives. [ 48 ] Since then, decades of research have resulted in two large meta-analyses showing consistent immune dysregulation in healthy people who are experiencing stress.
In the first meta-analysis, Herbert and Cohen (1993) [ 49 ] examined 38 studies of stressful events and immune function in healthy adults. They included studies of acute laboratory stressors (e.g. a speech task), short-term naturalistic stressors (e.g. medical examinations), and long-term naturalistic stressors (e.g. divorce, bereavement, caregiving, unemployment). They found consistent stress-related increases in numbers of total white blood cells, as well as decreases in the numbers of helper T cells, suppressor T cells, cytotoxic T cells, B cells, and natural killer (NK) cells. They also reported stress-related decreases in NK and T cell function, and in T cell proliferative responses to phytohaemagglutinin (PHA) and concanavalin A (Con A). These effects were consistent for short-term and long-term naturalistic stressors, but not laboratory stressors.
In the second meta-analysis, Zorrilla et al. (2001) [ 50 ] replicated Herbert and Cohen's meta-analysis. Using the same study selection procedures, they analyzed 75 studies of stressors and human immunity. Naturalistic stressors were associated with increases in the number of circulating neutrophils, decreases in the number and percentage of total T cells and helper T cells, and decreases in the percentages of natural killer (NK) cells and cytotoxic T lymphocytes. They also replicated Herbert and Cohen's finding of stress-related decreases in NK cell cytotoxicity (NKCC) and T cell mitogen proliferation to phytohaemagglutinin (PHA) and concanavalin A (Con A).
In one experiment described by the American Psychological Association, researchers applied electrical shocks to rats and observed interleukin-1 being released directly into the brain. Interleukin-1 is the same cytokine released when a macrophage engulfs a bacterium; the signal then travels up the vagus nerve, creating a state of heightened immune activity and behavioral changes. [ 51 ]
More recently, there has been increasing interest in the links between interpersonal stressors and immune function. For example, marital conflict, loneliness, caring for a person with a chronic medical condition, and other forms of interpersonal stress dysregulate immune function. [ 52 ]
Release of corticotropin-releasing hormone (CRH) from the hypothalamus is influenced by stress. [ 58 ]
Furthermore, stressors that enhance the release of CRH suppress the function of the immune system; conversely, stressors that depress CRH release potentiate immunity.
Glutamate agonists, cytokine inhibitors, vanilloid-receptor agonists, catecholamine modulators, ion-channel blockers, anticonvulsants, GABA agonists (including opioids and cannabinoids), COX inhibitors, acetylcholine modulators, melatonin analogs (such as ramelteon), adenosine receptor antagonists and several miscellaneous drugs (including plant-derived preparations such as Passiflora edulis) are being studied for their psychoneuroimmunological effects.
For example, SSRIs, SNRIs and tricyclic antidepressants acting on serotonin, norepinephrine, dopamine and cannabinoid receptors have been shown to be immunomodulatory and anti-inflammatory against pro-inflammatory cytokine processes, specifically through the regulation of IFN-gamma and IL-10, as well as TNF-alpha and IL-6. [ 61 ] [ 62 ] [ 63 ] [ 64 ] Antidepressants have also been shown to suppress TH1 upregulation. [ 61 ] [ 62 ] [ 63 ] [ 65 ] [ 66 ]
Tricyclics and the dual serotonergic-noradrenergic reuptake inhibition of SNRIs (or SSRI-NRI combinations) have additionally shown analgesic properties. [ 67 ] [ 68 ] According to recent evidence, antidepressants also seem to exert beneficial effects in experimental autoimmune neuritis in rats by decreasing interferon-beta (IFN-beta) release, and to augment NK activity in depressed patients. [ 69 ]
These studies warrant investigation of antidepressants for use in both psychiatric and non-psychiatric illness, and suggest that a psychoneuroimmunological approach may be required for optimal pharmacotherapy in many diseases. [ 70 ] Future antidepressants may be made to specifically target the immune system by either blocking the actions of pro-inflammatory cytokines or increasing the production of anti-inflammatory cytokines. [ 71 ]
The endocannabinoid system appears to play a significant role in the mechanism of action of clinically effective and potential antidepressants and may serve as a target for drug design and discovery. [ 64 ] The endocannabinoid -induced modulation of stress-related behaviors appears to be mediated, at least in part, through the regulation of the serotoninergic system, by which cannabinoid CB 1 receptors modulate the excitability of dorsal raphe serotonin neurons . [ 72 ] Data suggest that the endocannabinoid system in cortical and subcortical structures is differentially altered in an animal model of depression and that the effects of chronic, unpredictable stress (CUS) on CB 1 receptor binding site density are attenuated by antidepressant treatment while those on endocannabinoid content are not.
The increase in amygdalar CB1 receptor binding following imipramine treatment is consistent with prior studies that collectively demonstrate that several treatments beneficial in depression, such as electroconvulsive shock and tricyclic antidepressant treatment, increase CB1 receptor activity in subcortical limbic structures, such as the hippocampus, amygdala and hypothalamus. Preclinical studies have demonstrated that the CB1 receptor is required for the behavioral effects of noradrenergic-based antidepressants but is dispensable for the behavioral effects of serotonergic-based antidepressants. [ 73 ] [ 74 ]
Extrapolating from the observations that positive emotional experiences boost the immune system, Roberts speculates that intensely positive emotional experiences—sometimes brought about during mystical experiences occasioned by psychedelic medicines—may boost the immune system powerfully. Research on salivary IgA supports this hypothesis, but experimental testing has not been done. [ 75 ] | https://en.wikipedia.org/wiki/Psychoneuroimmunology |
Psychopathology is the study of mental illness. It includes the signs and symptoms of all mental disorders. The field includes abnormal cognition, maladaptive behavior, and experiences which differ according to social norms. This discipline is an in-depth look into symptoms, behaviors, causes, course, development, categorization, treatments, strategies, and more.
Biological psychopathology is the study of the biological etiology of abnormal cognitions, behaviour and experiences. Child psychopathology is a specialization applied to children and adolescents.
Early explanations for mental illnesses were influenced by religious belief and superstition . Psychological conditions that are now classified as mental disorders were initially attributed to possessions by evil spirits, demons, and the devil. This idea was widely accepted up until the sixteenth and seventeenth centuries. [ 1 ]
The Greek physician Hippocrates was one of the first to reject the idea that mental disorders were the result of possession by demons or the devil, and instead looked to natural causes. He firmly believed the symptoms of mental disorders were due to diseases originating in the brain. Hippocrates suspected that these states of insanity were due to imbalances of fluids in the body. He identified four fluids in particular: blood, black bile, yellow bile, and phlegm. This later became the basis of the chemical imbalance theory used widely today.
Not long after Hippocrates, the philosopher Plato came to argue that the mind, body, and spirit worked as a unit. Any imbalance brought to these compositions of the individual could bring distress or lack of harmony within the individual. This philosophical idea remained the prevailing perspective until the seventeenth century. It was later challenged by Laing (1960), along with Laing and Esterson (1964), who noted that it was the family environment that led to the formation of adaptive strategies.
In the eighteenth century's Romantic Movement , the idea that healthy parent-child relationships provided sanity became a prominent idea. Philosopher Jean-Jacques Rousseau introduced the notion that trauma in childhood could have negative implications later in adulthood.
In the 1600s and 1700s, insane asylums began to be opened to house those with mental disorders. [ 2 ] Asylums were places where restraint techniques and treatments could be tested on patients who were confined. These were early precursors of psychiatric hospitals.
In 1875 the German book Textbook of Forensic Psychopathology was published, written by Richard von Krafft-Ebing , which became a standard psychiatric textbook for Universities across Germany . [ 3 ]
The scientific discipline of psychopathology was founded by Karl Jaspers in 1913. It was referred to as "static understanding" and its purpose was to graphically recreate the "mental phenomenon" experienced by the client. A few years earlier, in 1899, the German book Lehrbuch der Psychopathologischen Untersuchungs-Methoden was published by Robert Sommer .
Sigmund Freud proposed a method for treating psychopathology through dialogue between a patient and a psychoanalyst. Talking therapy would originate from his ideas on the individual's experiences and the natural human efforts to make sense of the world and life. [ 4 ]
The study of psychopathology is interdisciplinary, with contributions coming from clinical psychology , abnormal psychology , social psychology , and developmental psychology , as well as neuropsychology and other psychology subdisciplines. Other related fields include psychiatry , neuroscience , criminology , social work , sociology , epidemiology , and statistics . [ 5 ]
Psychopathology can be broadly separated into descriptive and explanatory. Descriptive psychopathology involves categorising, defining and understanding symptoms as reported by people and observed through their behaviour, which are then assessed according to a social norm. Explanatory psychopathology looks to find explanations for certain kinds of symptoms according to theoretical models such as psychodynamics or cognitive behavioural therapy, or through understanding how they have been constructed by drawing upon Constructivist Grounded Theory (Charmaz, 2016) or Interpretative Phenomenological Analysis (Smith, Flowers & Larkin, 2013). [ 6 ]
There are several ways to characterise the presence of psychopathology in an individual as a whole. One strategy is to assess a person along four dimensions: deviance, distress, dysfunction, and danger, known collectively as the four Ds. Another conceptualisation, the p factor, sees psychopathology as a general, overarching construct that influences psychiatric symptoms.
Mental disorders are defined by a set of characteristic features – that is, more than just one symptom. In order to be classified for diagnosis, the symptoms cannot represent an expected response to a common stress or loss related to an event. Syndromes are sets of simultaneous symptoms that represent a disorder. Common mental health disorders include depression, generalized anxiety disorder (GAD), panic disorder, phobias, social anxiety disorder, obsessive-compulsive disorder (OCD), and post-traumatic stress disorder (PTSD). [ 7 ]
Depression is one of the most common and most debilitating mental disorders worldwide. [ 8 ] It affects how individuals think, feel, and act. Symptoms vary depending on each individual person and include feeling sad, irritable, hopeless, or losing interest in activities once enjoyed.
Generalized anxiety disorder involves feeling worried or nervous more frequently and intensely than real-life stressors warrant. It is more common in women than men and includes symptoms such as trouble controlling worries, feelings of nervousness, or restlessness and difficulty relaxing. [ 9 ]
The four Ds used to define abnormality are deviance (behaviour or experience that departs from social norms), distress (negative feelings experienced by the individual or by those around them), dysfunction (impairment in carrying out everyday activities), and danger (risk of harm to oneself or others).
Benjamin Lahey and colleagues first proposed a general "psychopathology factor", or simply "p factor", in 2012. [ 14 ] This construct is conceptually similar to the g factor of general intelligence. Instead of conceptualising psychopathology as consisting of several discrete categories of mental disorders, the p factor is dimensional and influences whether psychiatric symptoms in general are present or absent. The symptoms that are present then combine to form several distinct diagnoses. The p factor is modelled in the Hierarchical Taxonomy of Psychopathology. Although researchers initially conceived a three-factor explanation for psychopathology generally, a subsequent study provided more evidence for a single factor that is sequentially comorbid, recurrent/chronic, and exists on a continuum of severity and chronicity. [ 15 ]
Higher scores on the p factor dimension have been found to be correlated with higher levels of functional impairment, greater incidence of problems in developmental history, and more diminished early-life brain function. In addition, those with higher levels of the p factor are more likely to have inherited a genetic predisposition to mental illness. The existence of the p factor may explain why it has been "... challenging to find causes, consequences, biomarkers, and treatments with specificity to individual mental disorders." [ 15 ]
A 2020 review of the p factor found that many studies support its validity and that it is generally stable throughout one's life. A high p factor is associated with many adverse effects, including poor academic performance, impulsivity, criminality, suicidality, reduced foetal growth, lower executive functioning , and a greater number of psychiatric diagnoses. A partial genetic basis for the p factor has also been supported. [ 16 ]
Alternatively, some researchers have interpreted the p factor as an index of general impairment rather than a specific factor that causes psychopathology. [ 16 ]
The term psychopathology may also be used to denote behaviours or experiences which are indicative of mental illness, even if they do not constitute a formal diagnosis. For example, the presence of hallucinations may be considered as a psychopathological sign, even if there are not enough symptoms present to fulfil the criteria for one of the disorders listed in the DSM or ICD .
In a more general sense, any behaviour or experience which causes impairment, distress or disability , particularly if it is thought to arise from a functional breakdown in either the cognitive or neurocognitive systems in the brain, may be classified as psychopathology. It remains unclear how strong the distinction between maladaptive traits and mental disorders actually is, [ 17 ] [ 18 ] e.g. neuroticism is often described as the personal level of minor psychiatric symptoms. [ 19 ]
Main article: Diagnostic and Statistical Manual of Mental Disorders
The Diagnostic and Statistical Manual of Mental Disorders (DSM) is a guideline for the diagnosis and understanding of mental disorders. The American Psychiatric Association (APA) sponsors the editing, writing, reviewing and publishing of this book. It is a reference book on mental health and brain-related conditions and disorders, and serves a range of professionals in medicine and mental health, particularly in the United States. These professionals include psychologists, counsellors, physicians, social workers, psychiatric nurses and nurse practitioners, marriage and family therapists, and more. The current edition, the fifth (DSM-5), was released in May 2013. [ 20 ] Each edition makes significant changes to the classification of disorders.
Main article: Research Domain Criteria
The RDoC framework is a set of research principles for investigating mental disorders. It is meant to create a new approach to mental illness that leads to better diagnosis, prevention, intervention, and cures. It is not necessarily meant to serve as a diagnostic guide or to replace the DSM; rather, it is meant to examine varying degrees of dysfunction. It was developed by the US National Institute of Mental Health (NIMH). [ 21 ] It aims to address heterogeneity by providing a more symptom-based framework for understanding mental disorders. It relies on dimensions that span the range from normal to abnormal and allows investigators to work with a larger database. It uses six major functional domains to examine neurobehavioral functioning. Different aspects of each domain are represented by constructs which are studied along the full range of functioning. Together all of the domains form a matrix that can represent research ideas. It is a heuristic, and acknowledges that research topics will change and grow as science emerges. [ 22 ] | https://en.wikipedia.org/wiki/Psychopathology |
Psychotoxicity is a pharmacological term for the effect produced when a drug seriously interferes with normal behaviour. [ 1 ]
This medical symptom article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Psychotoxicity |
The psychrometric constant γ relates the partial pressure of water in air to the air temperature. This lets one interpolate actual vapor pressure from paired dry-bulb and wet-bulb thermometer temperature readings. [ 1 ]
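The defining formula itself appears to have been lost in extraction. A standard statement consistent with the symbols used below (following, for example, the FAO-56 convention; the exact constants are assumptions rather than text from this article) is:

```latex
\gamma = \frac{\left(c_p\right)_{air}\,P}{\lambda_v \, MW_{ratio}} \approx 0.000665\,P
\qquad \text{(kPa}\ {}^{\circ}\text{C}^{-1}\text{, with } P \text{ in kPa)}
```

Here (c_p)_air ≈ 1.013×10⁻³ MJ kg⁻¹ °C⁻¹ is the specific heat of moist air, λ_v ≈ 2.45 MJ kg⁻¹ is the latent heat of vaporization, and MW_ratio ≈ 0.622 is the ratio of the molecular weight of water vapor to that of dry air.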
Both λ_v (the latent heat of vaporization of water) and MW_ratio (the ratio of the molecular weight of water vapor to that of dry air) are constants. Since atmospheric pressure, P, depends upon altitude, so does γ. At higher altitude water evaporates and boils at lower temperature.
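A short numerical sketch of this pressure dependence, and of the wet-/dry-bulb interpolation given at the end of this entry. The constants and the Tetens saturation-vapor-pressure formula are standard FAO-56-style assumptions, not values taken from this article:

```python
import math

def gamma(pressure_kpa):
    """Psychrometric constant (kPa/degC) at a given atmospheric pressure.
    Assumed constants: c_p ~1.013e-3 MJ/kg/degC, lambda_v ~2.45 MJ/kg,
    MW_ratio ~0.622."""
    c_p, lam, mw_ratio = 1.013e-3, 2.45, 0.622
    return c_p * pressure_kpa / (lam * mw_ratio)

def e_sat(t_celsius):
    """Saturation vapor pressure (kPa) via the Tetens equation
    (an assumption, not from the article)."""
    return 0.6108 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

def actual_vapor_pressure(t_dry, t_wet, pressure_kpa=101.3):
    """Interpolate e_a from paired dry-/wet-bulb readings, evaluating
    e_s at the wet-bulb temperature (FAO-56 convention)."""
    return e_sat(t_wet) - gamma(pressure_kpa) * (t_dry - t_wet)

print(round(gamma(101.3), 5))                        # 0.06734 kPa/degC at sea level
print(round(gamma(79.8), 5))                         # 0.05305 at ~2000 m, lower pressure
print(round(actual_vapor_pressure(25.0, 18.0), 3))   # 1.593 kPa
```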
Although (c_p)_H2O is constant, varied air composition results in varied (c_p)_air.
Thus on average, at a given location or altitude, the psychrometric constant is approximately constant. Still, it is worth remembering that weather impacts both atmospheric pressure and composition.
Saturated vapor pressure: e_s = e[T_dew]
Actual vapor pressure: e_a = e_s − γ (T_dry − T_wet) | https://en.wikipedia.org/wiki/Psychrometric_constant |
Psychrophiles or cryophiles (adj. psychrophilic or cryophilic ) are extremophilic organisms that are capable of growth and reproduction in low temperatures, ranging from −20 °C (−4 °F) [ 2 ] to 20 °C (68 °F). [ 3 ] They are found in places that are permanently cold, such as the polar regions and the deep sea. They can be contrasted with thermophiles , which are organisms that thrive at unusually high temperatures, and mesophiles at intermediate temperatures. Psychrophile is Greek for 'cold-loving', from Ancient Greek ψυχρός ( psukhrós ) ' cold, frozen ' .
Many such organisms are bacteria or archaea, but some eukaryotes, such as lichens, snow algae, phytoplankton, fungi, and wingless midges, are also classified as psychrophiles.
The cold environments that psychrophiles inhabit are ubiquitous on Earth, as a large fraction of the planetary surface experiences temperatures lower than 10 °C. They are present in permafrost , polar ice, glaciers , snowfields and deep ocean waters. These organisms can also be found in pockets of sea ice with high salinity content. [ 4 ] Microbial activity has been measured in soils frozen below −39 °C. [ 5 ] In addition to their temperature limit, psychrophiles must also adapt to other extreme environmental constraints that may arise as a result of their habitat. These constraints include high pressure in the deep sea, and high salt concentration on some sea ice. [ 6 ] [ 4 ]
Psychrophiles are protected from freezing and the expansion of ice by ice-induced desiccation and vitrification (glass transition), as long as they cool slowly. Free living cells desiccate and vitrify between −10 °C and −26 °C. Cells of multicellular organisms may vitrify at temperatures below −50 °C. The cells may continue to have some metabolic activity in the extracellular fluid down to these temperatures, and they remain viable once restored to normal temperatures. [ 2 ]
They must also overcome the stiffening of their lipid cell membrane, as this is important for the survival and functionality of these organisms. To accomplish this, psychrophiles adapt lipid membrane structures that have a high content of short, unsaturated fatty acids . Compared to longer saturated fatty acids, incorporating this type of fatty acid allows for the lipid cell membrane to have a lower melting point, which increases the fluidity of the membranes. [ 7 ] [ 8 ] In addition, carotenoids are present in the membrane, which help modulate the fluidity of it. [ 9 ]
Antifreeze proteins are also synthesized to keep psychrophiles' internal space liquid, and to protect their DNA when temperatures drop below water's freezing point. By doing so, the protein prevents any ice formation or recrystallization process from occurring. [ 9 ]
The enzymes of these organisms have been hypothesized to engage in an activity-stability-flexibility relationship as a method for adapting to the cold; the flexibility of their enzyme structure will increase as a way to compensate for the freezing effect of their environment. [ 4 ]
Certain cryophiles, such as the Gram-negative bacteria Vibrio and Aeromonas spp., can transition into a viable but nonculturable (VBNC) state. [ 10 ] During VBNC, a micro-organism can respire and use substrates for metabolism – however, it cannot replicate. An advantage of this state is that it is highly reversible. It has been debated whether VBNC is an active survival strategy or whether eventually the organism's cells will no longer be able to be revived. [ 11 ] There is evidence, however, that it may be very effective – Gram-positive bacteria of the phylum Actinobacteria have been shown to have survived about 500,000 years in the permafrost conditions of Antarctica, Canada, and Siberia. [ 12 ]
Psychrophiles include bacteria, lichens, snow algae, phytoplankton, fungi, and insects.
Among the bacteria that can tolerate extreme cold are Arthrobacter sp., Psychrobacter sp. and members of the genera Halomonas , Pseudomonas , Hyphomonas , and Sphingomonas . [ 13 ] Another example is Chryseobacterium greenlandensis , a psychrophile that was found in 120,000-year-old ice.
Umbilicaria antarctica and Xanthoria elegans are lichens that have been recorded photosynthesizing at temperatures ranging down to −24 °C, and they can grow down to around −10 °C. [ 14 ] [ 1 ] Some multicellular eukaryotes can also be metabolically active at sub-zero temperatures, such as some conifers; [ 15 ] those in the Chironomidae family are still active at −16 °C. [ 16 ]
Microalgae that live in snow and ice include green, brown, and red algae. Snow algae species such as Chloromonas sp. , Chlamydomonas sp. , and Chlorella sp. are found in polar environments. [ 17 ] [ 18 ]
Some phytoplankton can tolerate the extremely cold temperatures and high salinities that occur in brine channels when sea ice forms in polar oceans. Some examples are diatoms like Fragilariopsis cylindrus, Nitzschia lecointeii, Entomoneis kjellmanii, Nitzschia stellata, Thalassiosira australis, Berkelaya adeliense, and Navicula glaciei. [ 19 ] [ 20 ] [ 21 ]
Penicillium is a genus of fungi found in a wide range of environments including extreme cold. [ 22 ]
Among the psychrophile insects, the Grylloblattidae or ice crawlers, found on mountaintops, have optimal temperatures of 1–4 °C. [ 23 ] The wingless midge (Chironomidae) Belgica antarctica can tolerate salt, freezing and strong ultraviolet radiation, and has the smallest known genome of any insect. The small genome, of 99 million base pairs, is thought to be adaptive to extreme environments. [ 24 ]
Psychrotrophic microbes are able to grow at temperatures below 7 °C (44.6 °F), but have better growth rates at higher temperatures. Psychrotrophic bacteria and fungi are able to grow at refrigeration temperatures and can be responsible for food spoilage, with some acting as foodborne pathogens, such as Yersinia. Their counts provide an estimate of a product's shelf life, and they can also be found in soils, [ 25 ] in surface and deep sea waters, [ 26 ] in Antarctic ecosystems, [ 27 ] and in foods. [ 28 ]
Psychrotrophic bacteria are of particular concern to the dairy industry. [ 29 ] Most are killed by pasteurization; however, they can be present in milk as post-pasteurization contaminants due to less than adequate sanitation practices. According to the Food Science Department at Cornell University, psychrotrophs are bacteria capable of growth at temperatures at or below 7 °C (44.6 °F). At freezing temperatures, growth of psychrotrophic bacteria becomes negligible or virtually stops. [ 30 ]
All three subunits of the RecBCD enzyme are essential for its physiological activities in the Antarctic Pseudomonas syringae, namely the repair of DNA damage and the support of growth at low temperature. The RecBCD enzymes are exchangeable between the psychrophilic P. syringae and the mesophilic E. coli when the entire protein complex is provided from the same species. However, the RecBC proteins (RecBC(Ps) and RecBC(Ec)) of the two bacteria are not equivalent: RecBC(Ec) is proficient in DNA recombination and repair, and supports the growth of P. syringae at low temperature, while RecBC(Ps) is insufficient for these functions. Finally, although both the helicase and nuclease activities of RecBCD(Ps) are important for DNA repair and for the growth of P. syringae at low temperature, the RecB nuclease activity is not essential in vivo. [ 31 ]
Microscopic algae that can tolerate extremely cold temperatures can survive in snow, ice, and very cold seawater. On snow, cold-tolerant algae can bloom on the snow surface covering land, glaciers, or sea ice when there is sufficient light. These snow algae darken the surface of the snow and can contribute to snow melt. [ 18 ] In seawater, phytoplankton that can tolerate both very high salinities and very cold temperatures are able to live in sea ice. One example of a psychrophilic phytoplankton species is the ice-associated diatom Fragilariopsis cylindrus . [ 19 ] Phytoplankton living in the cold ocean waters near Antarctica often have very high protein content, containing some of the highest concentrations ever measured of enzymes like Rubisco . [ 20 ]
Insects that are psychrotrophic can survive cold temperatures through several general mechanisms (unlike opportunistic and chill-susceptible insects): (1) chill tolerance, (2) freeze avoidance, and (3) freeze tolerance. [ 32 ] Chill-tolerant insects succumb to freezing after prolonged exposure to mild or moderately low temperatures. [ 33 ] Freeze-avoiding insects can survive extended periods of time at sub-freezing temperatures in a supercooled state, but die at their supercooling point. [ 33 ] Freeze-tolerant insects can survive ice crystal formation within their body at sub-freezing temperatures. [ 33 ] Freeze tolerance in insects is argued to lie on a continuum, with some insect species exhibiting partial (e.g., Tipula paludosa, [ 34 ] Hemideina thoracica [ 35 ] ), moderate (e.g., Cryptocercus punctulatus [ 36 ] ), and strong freezing tolerance (e.g., Eurosta solidaginis [ 37 ] and Syrphus ribesii [ 38 ] ), and other insect species exhibiting freezing tolerance with a low supercooling point (e.g., Pytho deplanatus [ 39 ] ). [ 32 ]
In 1940, ZoBell and Conn stated that they had never encountered "true psychrophiles", or organisms that grow best at relatively low temperatures. [ 40 ] In 1958, J. L. Ingraham supported this by concluding that there are very few or possibly no bacteria that fit the textbook definitions of psychrophiles. Richard Y. Morita emphasizes this by using the term psychrotroph to describe organisms that do not meet the definition of psychrophiles. The confusion between the terms psychrotroph and psychrophile arose because investigators were unaware of the thermolability of psychrophilic organisms at laboratory temperatures; as a result, early investigators did not determine the cardinal temperatures for their isolates. [ 41 ]
The similarity between the two is that both are capable of growing at 0 °C, but the optimum and upper temperature limits for growth are lower for psychrophiles than for psychrotrophs. [ 42 ] Psychrophiles are also more often isolated from permanently cold habitats than psychrotrophs. Although psychrophilic enzymes remain under-used because the cost of production and processing at low temperatures is higher than for the commercial enzymes presently in use, renewed research interest in psychrophiles and psychrotrophs is likely to contribute to environmental protection and to efforts to conserve energy. [ 42 ] | https://en.wikipedia.org/wiki/Psychrophile |